conversation today. So you can give your thoughts on developers, or network engineers, or standards people, or whoever comes to your mind. They would be interested in a topic like this. Beyond that, I'll give a few instructions that for about the first 10 to 15 minutes, we'll just do a discussion. And then after that, if people have questions, they can start asking them. And then as usual, I'll hand it over to you for the start of the session. So we are already live. And questions will be asked on YouTube as well, I guess, and any other streaming place as well. Perfect. So Satya, how are things at your end since the last session? Things are good. I mean, 2020 has been a weird year, right? We are mostly sitting at home. So I think it's been an interesting year. Lots of time with the family, so it's been good. Yeah, so work has been more or less the same at this point of time. I think work doesn't change. At least for me, it hasn't fundamentally changed. OK, so from the things that I know about what you do, it should have increased significantly because consumption has increased in the digital space. That's true. But I think the benefit is I don't have three hours of commute every day. So I can repurpose that time very well. Yeah, that's great. And have you been able to travel? Are you working from home all the time, or are you going out? So I think for work, I don't have to step out of the house. We are in work from home until, I think, next June or July. So that's been officially called. So H1 is going to be pretty much remote work for almost the entire company. And I mean, there will be a few exceptions. There are some critical resources who need to travel, especially for our company. There are server deployments and infrastructure folks who need to travel. But outside of that, the majority of the workforce is working remotely. And on a personal front, I think I've now started stepping out a little bit, keeping all the precautions and social distancing in mind.
But I think I'll be more comfortable once the vaccine hits the market. I think we'll all be more comfortable when that happens. I managed to squeeze in two short road trips, to just go to a place about 200 to 300 kilometers away from the city, stay there for three days, and then come back, just to rebalance the mental health, basically. So I've managed to do that a couple of times, but it feels like too little. Yeah, it does, it does. I think a drive is the best thing we can do right now. For those who can possibly have the privilege to do that, I don't know how everyone else is managing. Anyhow, it's 11:03, so let's start with the introductions and all and get the conversation starting. Hi everyone, whoever has joined us from YouTube and other live streams, as well as on Zoom, in case anyone joins us on Zoom in a bit. I'm Souvik. I run Miranj, which is a strategic web design and development studio in New Delhi. And I, along with Hasgeek and other volunteers as well, have been doing a series of these freewheeling chats called Content Web. So Content Web is a place or a space where we discuss many things around content-based websites which are publishing heavy, typically CMS-driven; it could be a marketing site, could be a media site, e-commerce site, et cetera. At Content Web, we want to cover topics from the three different practices that come together in building such websites: that can be on the content front, which could be content strategists, copywriters and others; it could be designers; or developers, both front-end as well as back-end. And today we are going to talk something which is a little developer oriented in our conversation. There's also a fourth pillar in this which is business owners or individuals, businesses or publishers, people who own the website.
They might also want to discuss or talk about the business side of things, analytics and other things which are conversations we want to bring into Content Web. So if you have any suggestions for topics that we should pick up at Content Web, please go to hasgeek.com/contentweb and put in your suggestions or proposals so that we can take up those subject areas under the umbrella of Content Web in our future sessions. But today we have Satya who has at least twice previously come on this platform and shared many fun things about CDNs and performance and other stuff. And we have called him again. But for those who have not met Satya before, Satya, would you like to introduce yourself? Sure, thanks, Souvik. Hi, I'm Satya Prasad. And I'm a tech evangelist and a consultant at Akamai. I talk about web performance, streaming media and internet security as part of my day-to-day activities. Oh, excellent. Among all those things that you just stated, what would you be speaking about today, Satya? So, Souvik, today we'll talk a little bit about HTTP, how it's evolved and what the new buzzwords in the industry are today. All right. And who is this conversation likely to benefit the most? I think it benefits a wide range of people. If you're building an app, you're a site owner, you run any part of the technology stack for your company. Or if you're just an enthusiast and you wanna know what's happening in the internet space, this is a good session for you to get an introduction on what's happening on the protocol side of things, where all the big tech giants are going and where the industry is shifting towards. All right, thanks. I'm personally really excited about this because the last time I was discussing protocols, I guess it was in my college days when we were actually studying, really mugging up, protocols and writing exam papers on that. From there to now, clearly lots of things have changed.
And just before starting this session, I had a quick chat with Satya about the fact that we're talking about new protocols that have come in. TCP used to be something that I used to study during my college days. And now there is HTTP3, for example, which, as we will hear, is based on new underlying technologies. Now a few instructions before we get into the session: initially, for the first 10 to 15 minutes, I guess Satya will just walk through a set of slides that he has created. Once that is done, we'll open the space for Q&A or comments. So if you're on YouTube, you can post your questions there. If you come on Zoom, you get a chance to ask your question directly to Satya as well. And there's a comment feature on Zoom wherein you can put in your questions as the session progresses. If you put your questions on YouTube, someone will forward them to me. So you might have to wait a little bit before the question gets taken, but I'll surely take up the questions that get posted on YouTube and all. And yeah, that's all. So let's get started with today's conversation. So Satya, over to you. Let's talk about HTTP. Sure, thanks, Souvik. So before I get started, I just wanted to set some context on what HTTP is all about. HTTP is one of those underlying protocols that pretty much makes all of the internet and the web work, which I think a lot of us take for granted. For example, when you make a request on your web browser, the underlying mechanism and semantics using which the browser communicates with the server is HTTP. And when it's encrypted, it's over TLS, and that's an extension. So before I jump into it: how has the HTTP protocol itself evolved over the years? We had HTTP 0.9, which was the first version of HTTP that came out.
And it was around for a really long time, close to 25 years. There are three iterations of it, but 0.9, 1.0 and 1.1 together were around for that really, really long time. And especially HTTP 1.1, right? We see that it stuck around for nearly two decades and it's been one of the fundamental protocols used over the internet. It's very flexible. And like I mentioned, it's used for basically information transfer from a client to the server. And what's happening now is we started taking a step back. We wanna look at how the internet fundamentally works, see if there are improvements that are possible on HTTP and sort of make those improvements. So quickly, what was HTTP 1.0 and 1.1 all about? Or rather, since it's still widely used, what is it all about? I'm gonna use HTTP 1 as an umbrella term for HTTP 1.0 and 1.1. 1.0 is not really used; most of what we use today is 1.1. So HTTP 1 is a text-based protocol where, for the most part, things are human-readable. It's whitespace-delimited. So if you want to parse multiple fields or multiple arguments, they're just separated by whitespace. And both the server and the client must parse what's sent, splitting on whitespace, to figure out what's going on. The interesting thing about HTTP 1 is it uses TCP as the underlying transport protocol. And on HTTP 1.1, you can make a single request at a time on a single connection, and that's pretty much it. And like I said, this was the de facto standard of the web for nearly two decades. So I think around 2015 is when things started to change. So one of the questions to ask is why did things have to change in the first place? If you recall the last session we did, the session on why we need CDNs in the first place, or even the session on performance, one of the things that we went over is that the internet or the web has fundamentally changed.
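Since HTTP 1.x was just described as text-based and whitespace-delimited, here is a minimal sketch of what that parsing actually looks like. This is a simplification that ignores most of the real grammar, and the request line and headers are made up for illustration:

```python
# A raw HTTP/1.1 request is plain text: a whitespace-delimited request
# line, then "Name: value" header lines, then a blank line.
raw_request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Accept: text/html\r\n"
    "\r\n"
)

def parse_request(raw):
    """Split the request line on whitespace and each header on ':'."""
    head, _, _body = raw.partition("\r\n\r\n")
    lines = head.split("\r\n")
    method, path, version = lines[0].split()  # whitespace-delimited fields
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip()] = value.strip()
    return method, path, version, headers
```

Both sides of the connection do some version of this split-and-strip work on every request, which is part of why the binary formats that came later are cheaper to parse.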
Things have become more resource-intensive and users also expect a certain level of performance from their applications or the websites that they visit. So the big need for change on the HTTP side was the need to improve performance. So that brings us to the question, what exactly is the problem with HTTP 1.1 in the first place? So like I said, the biggest problem was on a single TCP connection, you could make a request for one HTTP resource at any point in time. So if there were 10 images on your site, if you're using a single connection, you would have to wait until the first image was fully downloaded to the client, then your browser would make the second request and so on. Now, in order to work around this problem, a lot of browsers put an optimization in place. They said, okay, fine, if you're opening a page, the browser will go ahead and open multiple simultaneous connections to a single server. And again, if you think about it, that can quickly run into a nightmare scenario where they're opening hundreds of connections to the server. So what all browsers did was, depending on which browser you were using, they all had limitations in the number of maximum connections you could open to a given server or a domain. And in most cases, this was limited; the upper limit was six. So you could at any point in time download over six simultaneous connections, but these were on separate TCP connections. What that means is there's an overhead for establishing each TCP connection and getting the content through those separate connections. Now, the key thing to realize over here is setting up each TCP connection is an overhead, and there's no additional throughput benefit because the bandwidth or the throughput between the server and client remains constant. Okay, so that was pretty much HTTP 1.1. And with these challenges in place, they had to think of some enhancements to HTTP 1.1, and they came up with HTTP 2.
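The six-connection workaround above can be put into a toy model. The numbers below are invented, and the model deliberately ignores TCP setup cost and the fact that the connections share one bandwidth pipe, which is exactly the overhead just mentioned:

```python
import math

def sequential_time(n_resources, secs_each):
    """One HTTP/1.1 connection: resources download strictly one after another."""
    return n_resources * secs_each

def parallel_time(n_resources, secs_each, max_conns=6):
    """Up to six parallel connections: downloads happen in 'waves' of six."""
    waves = math.ceil(n_resources / max_conns)
    return waves * secs_each

# 10 images at 0.5 s each: 5.0 s on one connection, 1.0 s across six.
```

The model makes the browsers' motivation obvious, and also why the real win was smaller than it looks: each of those six connections pays its own TCP setup cost and they split the same bandwidth.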
Now, HTTP 2 was fundamentally based on something called the SPDY protocol. This was built by Google. And essentially what it did was, on the HTTP layer, it introduced binary framing and multiplexing. What that meant was that the SPDY protocol would convert all of the text-based information exchange that you had into binary format, and you would be able to request multiple resources over a single connection. Now, it's definitely an improvement over HTTP 1.1, but I think what HTTP 2 did, and in some ways wisely so, was it continued to use TCP as the underlying transport mechanism. That way, support is very widely available. And at least when HTTP 2 started off, there was a proposal to support both HTTP and HTTPS, that is both secure and non-secure connections, but the non-secure support was not really adopted. There were a bunch of other capabilities that came with HTTP 2, but I think the big one is binary framing and multiplexing that added a lot of benefit. Now, there's some other stuff that was added onto these protocols, but I'll just stick to this thread and walk you through what's fundamentally changed in HTTP 3. We'll come back and recap on some of the other capabilities that were added on top of these protocols. So, what HTTP 2 again did was, like I said earlier, it added the multiplexing layer on HTTP. But if you think about it, the problem that HTTP 2 was trying to solve was HOL or head-of-line blocking, where you're essentially trying to ensure that a single request, if it's not delivered to an end user, does not block the entire request-response cycle. So, what happens in HTTP 1 is, like I said, if you have 10 images and the first image for whatever reason does not get delivered to the client, it means the second resource cannot even be requested. It's sequential. So, you have to wait for one to get done before the second is even requested.
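The binary framing mentioned above is quite concrete in the HTTP 2 spec (RFC 7540): every frame starts with a fixed 9-byte header carrying a length, a type, flags, and the stream identifier that makes multiplexing possible. A sketch of packing and unpacking that header:

```python
import struct

def pack_frame_header(length, ftype, flags, stream_id):
    """HTTP/2 frame header: 24-bit length, 8-bit type, 8-bit flags,
    then 1 reserved bit + 31-bit stream identifier (RFC 7540, section 4.1)."""
    return (struct.pack(">I", length)[1:]          # low 3 bytes = 24-bit length
            + struct.pack(">BBI", ftype, flags, stream_id & 0x7FFFFFFF))

def unpack_frame_header(data):
    length = int.from_bytes(data[:3], "big")
    ftype, flags = data[3], data[4]
    stream_id = int.from_bytes(data[5:9], "big") & 0x7FFFFFFF
    return length, ftype, flags, stream_id
```

Because every frame names its stream, frames belonging to different requests can be interleaved on one connection, which is all multiplexing really is.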
And then in HTTP 2, what we said was, okay, we are going to make requests for multiple HTTP resources over the same TCP connection, and that's going to solve a lot of problems. But what really happened with HTTP 2 was you still had head-of-line blocking, except you moved the problem to your transport layer. The underlying technology for HTTP 2 is still TCP, and you still have TCP HOL blocking in HTTP 2. What that means is if in HTTP 2 there's a single frame or a packet that's dropped, you are pretty much limited by that packet. That packet has to get retransmitted, and your TCP session resumes only from that point on. And the underlying reason for this is it's based on TCP. TCP has been, again, one of the pillars of the internet today. And the benefit of this particular transport protocol is that it's very reliable. But there are a few downsides. It starts off slow, and that's called the TCP slow start. It's very well understood today. And every time a packet gets dropped over TCP, the server actually retransmits that packet. And the client has to acknowledge the receipt of that particular packet before the server starts sending more content. So, ways to work around this: when the folks working on the protocol and standards thought about it, it's actually a straightforward problem. We already have an existing transport protocol, UDP, that solves this exact problem. It does not get blocked by a single dropped packet. So that's essentially what HTTP 3 is all about. HTTP 3 is building on top of HTTP 2. It's not a significant shift; it brings an incremental change. It uses QUIC for the transport protocol. And essentially QUIC is UDP and TLS together, right? It's that sweet spot where you're able to get some of the benefits of the encryption and security from TLS, and wrap it all under the UDP scheme so that you have a new transport protocol, which is QUIC.
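A toy illustration of how the head-of-line problem moved layers: with multiplexing over TCP, one lost packet stalls every stream behind it, while QUIC's independent streams only stall the stream the lost packet belonged to. This is a simplification for intuition, not either protocol's actual loss-recovery logic, and the packet contents are invented:

```python
def deliverable_over_tcp(packets, lost_index):
    """TCP hands bytes up strictly in order, so nothing after the lost
    packet is usable until it is retransmitted."""
    return packets[:lost_index]

def deliverable_over_quic(packets, lost_index):
    """QUIC streams are independent: only later packets on the lost
    packet's own stream are held back."""
    lost_stream = packets[lost_index][0]
    return [(stream, data) for i, (stream, data) in enumerate(packets)
            if not (stream == lost_stream and i >= lost_index)]

# Five packets interleaving streams A and B; packet 1 (stream B) is lost.
packets = [("A", "a1"), ("B", "b1"), ("A", "a2"), ("B", "b2"), ("A", "a3")]
```

Running both functions with `lost_index=1` shows the difference: over TCP nothing past the loss is usable, while over QUIC all of stream A still gets through.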
And they had to come up with a new protocol because some of the implementation details differ from UDP. And HTTP 3 again just incrementally extends the concepts of HTTP 2. It still has multiplexing of requests, but the streams are now not implemented at the HTTP layer; they're at the transport layer. That is, it's at the QUIC layer. And fundamentally, QUIC supports encryption from the ground up, which means at the transport layer, encryption is applied, which again has ramifications for the entire stack. So if you were to take a step back and look at all three of these major HTTP evolutions, what you'll notice is HTTP 1.1 and 2 fundamentally use TCP as the transport layer protocol. For HTTP 3, the proposal is to use QUIC. Again, there are a few changes in the way the content is exchanged. HTTP 1 was primarily text, and HTTP 2 and 3 use the binary format, again for efficiency. One of the big drawbacks of HTTP 1 was you couldn't compress the headers. If you're transmitting a JSON body, you're able to compress that JSON using gzip, but the headers were not compressed. And that was a big problem that was addressed in HTTP 2. So to dig a little deeper into that: if you had cookies, and we know that sites use a lot of cookies these days, all of those cookies are in plain text and uncompressed. So if you have a really long cookie, say one kilobyte long, it is not compressed. And that adds quickly to the payload, especially if you're making API calls where your response body itself is fairly small, and it's compressed. So that was important. Again, HTTP 3 takes it one step further. It has QPACK for header compression, which differs a little from how HPACK worked in HTTP 2. And again, the differences are primarily due to the nuances of the protocol. HTTP 2, because it used TCP, expected in-order delivery of packets and segments.
QPACK moves away from that because you are using UDP: you're not guaranteed in-order delivery, so you had to come up with something new. And finally, there are some other things that I sort of skipped when I was talking about HTTP 2. So HTTP 2 added some other capabilities over HTTP 1. And these were primarily server push, or server-side events, which we'll talk a little bit about in detail. And the other one was prioritization. So prioritization, again, is one of those nuanced topics. Basically, when a browser is requesting all of the content on your site, it knows, or at least it's well-established, what the critical render-blocking resources are, and it makes sense for the clients to make those requests first, or at least prioritize or allocate more bandwidth to the critical resources. But what's happened over the years, or what's happened today, is the different browser vendors cannot agree on how prioritization should be implemented, and you'll see different prioritization approaches between Firefox, Chrome, Safari, and so on. So that prioritization matrix has gotten very complex, and sometimes it can be a nightmare to debug, especially if you're a developer. You'll really need to find out what's going on, and documentation is not forthcoming in that space. The other thing: HTTP2 added prioritization, and HTTP3 wants to either do away with prioritization completely or drastically simplify it; the different proposals are still being looked at, so that we don't have such a complex scenario. One of the things that's happening in HTTP2 is that while the clients can set priority for individual requests, the RFC does not really mandate that the server must honor those priorities. So again, you have different clients setting different kinds of priorities, but the server can choose to ignore them or honor them.
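To put a rough number on the header-compression point from a moment ago: a kilobyte-ish cookie header is very compressible because it is so repetitive. The sketch below uses zlib purely as a stand-in to show that compressibility; real HPACK and QPACK use static and dynamic tables plus Huffman coding, not DEFLATE, and the cookie contents here are invented:

```python
import zlib

# An invented, repetitive ~1 KB cookie header of the kind HTTP/1.1
# would send completely uncompressed on every single request.
cookie = "Cookie: " + ";".join("pref_%d=enabled" % i for i in range(64))

raw_size = len(cookie.encode())
compressed_size = len(zlib.compress(cookie.encode()))
# Even a generic compressor shrinks this several-fold; that is the kind
# of per-request saving HTTP/2 and HTTP/3 capture for headers.
```

The saving matters most in exactly the API-call scenario described above, where the compressed body is small and uncompressed headers dominate the payload.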
So it leads to some complexities, and that hopefully should be addressed in HTTP3. The other thing that comes to mind, and these were teething issues with HTTP2 that we are likely to see again while HTTP3 is in its infancy, which it is right now: how do you decide whether a particular request must be made over HTTP1.1, HTTP2, or HTTP3? Not every server will support HTTP3, so how and when do you decide? And that's pretty much down to how the handshake happens for that particular protocol. Today, HTTP1.1 is the default; almost all connections made over HTTP are on HTTP1.1, no questions asked. For HTTP2, you use ALPN during the TLS handshake to negotiate the HTTP2 connection. And similarly for HTTP3, at least the proposal is that the server sends an Alt-Svc response header after the certificate negotiation to say that, okay, fine, HTTP3 is supported by this server. There are a few proposals on how this could be handled. There are some proposals on whether the protocol support should be advertised in something like a DNS record, but we'll see how that really pans out. In terms of why you should care, I think this one is highly debated. So what I've done is just pulled in some references that were easily available and put some numbers as a comparison matrix. There are some public reports available that say HTTP2 can be roughly 30% faster than HTTP1.1. Now, a call-out over there: if you've done a lot of things right in HTTP1.1, you're likely to see fewer benefits. Similarly, HTTP3 is likely to bring about a 10% benefit over HTTP2. So that's where we are. I would say take that last number with a huge pinch of salt. It really depends on what your implementation of HTTP2 is, what you're using HTTP2 for, and similarly, which implementation of HTTP3 you're using, what it means for your infrastructure, and whether that performance trade-off is really worth it.
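Concretely, the Alt-Svc discovery just mentioned means a server answering over HTTP 1.1 or 2 includes a response header such as `alt-svc: h3=":443"; ma=86400`, and the client can retry over QUIC next time. Below is a simplified parser for a typical value; the full grammar in RFC 7838 has more cases than this handles, and the sample header value is made up:

```python
def parse_alt_svc(value):
    """Parse 'proto="authority"; param=val, ...' into a dict per protocol."""
    services = {}
    for entry in value.split(","):
        proto, _, rest = entry.strip().partition("=")
        authority, *params = rest.split(";")
        services[proto] = {
            "authority": authority.strip().strip('"'),
            "params": dict(p.strip().split("=") for p in params if p.strip()),
        }
    return services

# A typical advertisement: h3 plus an older draft version, cached for a day.
header_value = 'h3=":443"; ma=86400, h3-29=":443"; ma=86400'
```

The `ma` (max-age) parameter is what lets clients remember the advertisement, so the Alt-Svc upgrade only costs the first visit.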
It really depends on what articles and who you're listening to on the internet to understand that a little better. And there's some other stuff that I didn't cover, but before I get into all of this other stuff, I just want to take a pause. Souvik, or if anybody else has a question, I just want to tackle some of this first and then talk about some of this other stuff that I have. Yeah, I do have some questions, and maybe we can get into the other stuff in about 10 minutes' time, because there are so many things going through my mind. Firstly, thanks a lot for giving this quick overview of what is coming in for HTTP3. Now, I have some questions that are very specific, but let me start with a few generic questions first. Where are we in the journey of having a production-ready protocol with respect to HTTP3? Because from what I heard in the last bit, and if you can go back to the last slide, that could be a very key slide where I have a few questions. Because you said that on the prioritization algorithm, or what the mechanism should be, browsers have not been able to agree. So is it a stable protocol at this point of time? Or is it still in flux? And if it's still in flux, by when can we expect it to be production-ready and deployed? So I'm assuming this is a question for HTTP3 and not HTTP2, right? HTTP3, yeah. Okay, so, okay, to set some context: I think there are multiple reports, and I should have put this with sources in one of my slides, but today the internet adoption for HTTP2 is roughly around 45 to 50%. The rest of it is pretty much HTTP 1.1. HTTP3 is actually used in niche pockets. There's Facebook and Google, who are using their own implementations of HTTP3. Google has gQUIC, the Google version of QUIC, which they built in-house, and they've been using it for YouTube and some of the other Google applications.
Similarly, Facebook has been experimenting with HTTP3, and these are primarily on the apps, right? Native apps is what I'm talking about. In terms of browser support, I could be wrong here, but I don't think HTTP3 is enabled on any browser right now by default. Yeah, I think you have to download development versions or build your own with certain things enabled, and only then could you make requests over HTTP3. Yeah. So yes, both in Chrome for your desktop or laptop and in Chrome for Android, there are certain flags you have to set. I think the same is true for Firefox as well. I could be wrong there, but it's not available for use for consumers. If you're a developer, then there are about 38 versions of HTTP3 implementations that are recognized. And I think there is a link on GitHub that I can share which lists all the recognized implementations and the support or interoperability matrix. That's available as well. If you're a developer and you want to tinker around, I think there's an HTTP3 stack available for Python that's easy to use and set up. What I'll do is I'll put a link to that so that you can quickly get started. One of the things that I wanted to know as part of this question is: is the protocol itself, or the standard itself, frozen at this point in time, or is even that changing as we speak? So I think the final draft for the protocol, I mean, it's not yet finalized. So it's likely to see some changes; it's for the most part in flux right now. That's why you see that there are 38 different implementations, because there's no standardized spec for it. Oh, so when you said 38 implementations, you mean those are all different implementations? Yeah, like for example, most of the CDNs have some implementation of HTTP3. Akamai definitely has one. There are other CDNs which have an implementation for HTTP3.
And now the problem with these implementations is that each is our own interpretation of HTTP3, which for the most part follows some core features the IETF has recognized, and that is what most of these implementations are also focusing on. So most of these implementations will be some version of the final draft, but because the spec itself has not been frozen, you're likely to see either interoperability issues or some features not working as expected. But it's important to understand what's fundamentally changing. And one of the things you would have realized is that for app developers, nothing is fundamentally changing. All of the benefits that you had on HTTP2 are going to be carried over to HTTP3. The semantics don't change. What that means is the way you build your HTTP workflows is not going to change. You still have the same verbs, or your concepts are the same, in the sense you still have your GET, POST, PUT, OPTIONS, DELETE, all of those HTTP methods. Your status codes remain the same; you will still have HTTP caching, that's not going away. And basically, you don't have to worry about it at a very nuanced level if you're an application developer. You should get some hands-on experience with HTTP2. That'll give you very good insight into HTTP3. HTTP3 has some of this other stuff that's going on, which, if you want to know where the industry is headed, you can pay attention to. All right, I'll come to the other stuff in a bit. And in case you find any of my questions related to the other stuff, feel free to pull those points in. So the next question that I wrote down while listening to your talk is: you say that the encryption right now is at the transport layer level, which was earlier at the HTTP level. Is that an accurate understanding at my end? Yes. So on HTTP3, QUIC as a transport protocol itself implements encryption, and it encrypts the body and the headers separately. So everything, all of that, is encrypted.
So is it fair to say that it ups the security level, and the privacy level as well, of the communication? And is it harder to find out what the communication actually entails and contains, from a man-in-the-middle or from a surveillance point of view? You're partly right. In the sense, HTTP3 fundamentally is very resistant to MITM or man-in-the-middle attacks. But the flip side is also true. For example, a lot of organizations today, if you're working in a corporate, might install their own certificates, which are used to pretty much intercept every request that's going out of your system, and those requests are re-signed using the root certificates of the organization. HTTP3 will fundamentally break that. So I mean, yes, from a privacy standpoint, it's good, but it's yet to be seen how some of these actual day-to-day implementations, or how people actually use the internet, will be affected by HTTP3. The answer might simply be that if you're using your office laptop and your office has certain policies, HTTP3 will not work, which means you either use HTTP1.1 or HTTP2 and you have to just simply live with that. What that means for app developers or infrastructure folks is you cannot take HTTP3 for granted. There is no scenario in which you can deploy HTTP3 and assume that clients will be able to connect. And that's not because of lack of adoption. Even if, hypothetically, every server supported HTTP3, universal client connectivity is unlikely. There are two factors driving this. One is the resistance to MITM that I pointed out in corporate networks and some similar scenarios. The other one is HTTP3 is fundamentally based on UDP, and the network operators don't like UDP for a lot of reasons. It's actually blocked in a lot of networks. So in those networks, your transport layer protocol would not even work. So you would not be able to use it. This is interesting. Why do network operators not like UDP?
Does it overload their infrastructure, or what is the reason? So I think traditionally UDP has not gotten the attention and love that TCP has, because TCP was the de facto standard for browsing the internet. UDP was used for other use cases which have not seen that much adoption or attention. And UDP floods are a common attack vector. And the simplest thing for networks to do is just block them and not have to deal with it at all. So that's one way to get rid of the problem. Okay, makes sense. Okay, you already shared that not all browsers support it. In fact, none of the current end-user versions of the browsers support it; some development releases, with some flags turned on, might support it. I think Safari does. Safari does? Yeah, I mean, I'm not sure. So anyone who's listening, take this with a pinch of salt: there are browsers that don't support it, and there may be some browsers that have started supporting it. Are there also support restrictions at an OS level at all, or do all OSes support it? No, I don't think it'll be restricted. A lot of this will come down to the application. So if you look at TCP and HTTP as a stack, in some cases the operating system provides some support for it. But let's say you have an operating system that doesn't fundamentally support TLS or HTTP; you can still have your implementations at the application layer. That's going to continue to be true even as we move forward in the evolution of HTTP. And that's one of the reasons you see that Google and Facebook, when they were going ahead with implementing HTTP 3 on their native applications, were still able to make it work. That's because they have their own network libraries which support HTTP 3. Got it, got it. There was one point you made that a client and a server have to negotiate which of the HTTP protocols will be used. Is it going to be 1.1? Is it going to be 2 or 3 when it comes in?
Can you tell us how it is decided, in practice, which one actually gets used? Is it something that we as developers or creators of websites, or people at the web server level who are configuring the web server, control? Who has that power, and what is the method of resolving which one actually gets used? Okay, so that's an interesting one, right? The short answer is, as infrastructure owners of your own stacks, the only control you have is that you can enable or disable a particular protocol. On your origin server, HTTP 1.1 availability is taken for granted. HTTP 2 is a checkbox or a flag that you go ahead and enable. That's the only control you actually have. The rest of the control is on the clients. For the most part, it's the browsers, if you're looking at websites and apps, API endpoints and so on. And that's actually an interesting question. So just to give you an anecdote: when IPv6 was in its nascency, one of the things that happened was, how do you pick whether you should connect over IPv4 or IPv6? And one way to resolve that was you run a race. You make a connection over IPv4 and you make a connection over IPv6. And what started happening was IPv6 always lost. So how do you decide whether you should connect over IPv6, during its nascency? What a lot of people started doing was giving IPv6 a head start. So the IPv6 connection would be tried, you would wait for about 200 milliseconds, 300 milliseconds, and then make the second request. And whichever connection works, you stick with that. You could go with a similar approach for HTTP protocols as well, if you don't want to wait for the renegotiation. There are mechanisms by which, during the handshake itself, you can figure out what the supported protocols are and switch protocols accordingly.
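The head-start race just described can be sketched with two simulated connection attempts. The function names, delays, and the 200 ms head start below are invented for illustration, and real clients (see the Happy Eyeballs approach standardized for IPv6 in RFC 8305) are more careful about cancelling the losing attempt:

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def attempt_after(connect, delay):
    """Start a connection attempt only after the given delay."""
    time.sleep(delay)
    return connect()

def race(preferred, fallback, head_start=0.2):
    """Launch the preferred protocol immediately, the fallback after a
    head start, and return whichever connects first."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(attempt_after, preferred, 0.0),
                   pool.submit(attempt_after, fallback, head_start)]
        return next(as_completed(futures)).result()

def connect_h3():   # stand-in: pretend QUIC is blocked, so this stalls
    time.sleep(1.0)
    return "h3"

def connect_h2():   # stand-in: TCP+TLS connects promptly
    return "h2"
```

Here `race(connect_h3, connect_h2)` falls back to the fast attempt because the stalled QUIC stand-in loses its head start; a promptly returning `connect_h3` would win instead.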
But let's say you're not talking about the browsers, or you want a custom client that is that much more optimized. You do the same thing that has already been tried and shown to work: give HTTP/3 a little bit of a head start before making the request over HTTP/2, and you could extend that and give HTTP/2 a head start over HTTP/1.1 as well, so that you're not waiting for the timeouts. That's essentially what you're trying to avoid: that two- or three-second timeout that's built into the clients. And the one call-out is that you have to make sure your infrastructure doesn't get overloaded, because suddenly you can have a lot of these racing connection attempts going on, and things can get a little crazy. When you're experimenting it's fine, but on your production workloads you might want to just stick to how handshakes and protocol negotiation normally work. Leave it to the handshake. All right, so what I also understood here is that the client browsers have the higher say on what eventually gets used, but on the server we should be prepared to handle both of them right now, or all three of them at a later point in time. Okay, before I ask my next question, I just want to announce: if anyone wants to add questions here, please feel free to drop them in the YouTube or Zoom comments and I'll try to take those questions as well. So, coming back to this: as a person hosting applications on a server, how do we know when we have to check that HTTP/3 box? Maybe there are enough connection attempts coming in, and my website or web application is performing slower simply because I'm not keeping up with the changes in technology. So how do we detect that we now have to be prepared to handle those requests as well?
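The head-start race Satya describes, in the spirit of the Happy Eyeballs approach used for IPv6, can be sketched roughly like this. The timings and the connect functions below are illustrative assumptions, not a real HTTP/3 client:

```python
# A sketch of racing protocol connection attempts with a head start.
# Each earlier attempt gets `head_start` seconds before the next one
# is launched; the first attempt to succeed wins.
import threading
import time

def race_with_head_start(attempts, head_start=0.25):
    """attempts: list of (name, connect_fn); connect_fn raises on failure."""
    winner = {}
    done = threading.Event()

    def run(name, connect_fn):
        try:
            connect_fn()                    # raises on failure
        except Exception:
            return                          # this attempt lost the race
        winner.setdefault("name", name)     # first success claims the slot
        done.set()

    threads = []
    for name, connect_fn in attempts:
        t = threading.Thread(target=run, args=(name, connect_fn))
        t.start()
        threads.append(t)
        if done.wait(timeout=head_start):   # a winner emerged: stop launching
            break
    for t in threads:
        t.join()
    return winner.get("name")

# Pretend HTTP/3 fails (say, UDP blocked by a middlebox) but HTTP/2 works.
def h3_connect():
    raise ConnectionError("UDP blocked")

def h2_connect():
    time.sleep(0.05)                        # simulated handshake latency

print(race_with_head_start([("h3", h3_connect), ("h2", h2_connect)]))  # h2
```

As in the transcript's caveat, doing this in production multiplies connection attempts, so it is best reserved for experiments or carefully tuned custom clients.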
HTTP/3 requests, yeah. The short answer is you don't. The spec is not finalized and it hasn't seen any meaningful adoption right now, so you don't have to worry about HTTP/3 for now. To start off, you can just enable HTTP/2 and get some experience with it. HTTP/2 also increases server resource usage somewhat at your end, so see whether your stack and your infrastructure are comfortable with that and you're not running hot or anything of that sort. And when HTTP/3 makes big news, you will see it. There will be a verdict on whether HTTP/3 is performing much better than HTTP/2 or not, because based on the initial data, CPU utilization can go up anywhere between two and nine times with HTTP/3 for the same workload that was used to deliver the content over HTTP/2. And just so you're not mistaking this for your server-side app workloads: this is just the workload for delivering content over HTTP. So depending on the scale of requests you handle, that can quickly add up. And there's not a lot to be gained. Even if you look at the fundamental changes, it's a step in the right direction; it helps performance in certain conditions, especially in situations where you're seeing a lot of packet retransmits or packet drops. Those are the scenarios in which it would really help. If you have a stable internet connection which is just working really, really well, then you're not likely to see a significant difference. So it really boils down to that. Okay, I'll get to the CPU usage thing in a bit, because obviously I have follow-up questions like why it will increase, but sticking to my original line of questioning: is it even possible for me to know today, on my web application or website, how much of the traffic is over HTTP/1 versus HTTP/2? Can I find that out today, and what are the tools available to get that information?
Logs, or what? Yeah, for the most part you'll have to dig through the logs on your end. If you're using a CDN, the CDN is likely going to give you a good split of how much of that traffic is going over HTTP/2. But you can also make a simple assumption: if you've enabled HTTP/2, you can approximate the ballpark percentage of traffic over HTTP/2 in your region. In this case that would be India, and India has slightly higher HTTP/2 adoption because Chrome adoption is higher in India. And HTTP/2 is supported almost universally across browsers today, so on the browser side that support is really good. At that point you can make a safe assumption that at least 50% of all traffic will be over HTTP/2 if it's available. Got it. And my last question before I ask you to tell me about the other stuff. You mentioned two things I want to understand a little better. First, there's something called TCP slow start: what do you mean by that? And second, are there any latency reductions with HTTP/3? In HTTP/2, from what I understand, you save on round-trip requests because you have to make fewer connections, and so on, and that's probably also why we see such a big 30 to 40% speed difference in loading different resources. Are any of those things improved further in HTTP/3? Yeah, so let me start off with slow start. TCP is a transport protocol that was fundamentally designed with reliability in mind: don't overwhelm your connection, and in turn stay available. The goal with TCP was to make sure that communication between a client and a server is very reliable and that the rate at which packets are sent is optimal.
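The "dig through the logs" suggestion above can be made concrete with a few lines of Python. The sketch below assumes nginx/Apache combined-format access logs, where the request line records the protocol version (nginx logs HTTP/2 requests as "HTTP/2.0"); the sample entries are made up:

```python
# Count the HTTP-protocol-version split from access log lines.
import re
from collections import Counter

REQ_RE = re.compile(r'"(?:GET|POST|PUT|DELETE|HEAD|OPTIONS) [^ ]+ (HTTP/[\d.]+)"')

def protocol_split(log_lines):
    """Return {protocol version: percentage of matched requests}."""
    counts = Counter()
    for line in log_lines:
        m = REQ_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    total = sum(counts.values()) or 1
    return {proto: round(100 * n / total, 1) for proto, n in counts.items()}

sample = [
    '203.0.113.5 - - [01/Dec/2020:10:00:01 +0530] "GET /index.html HTTP/2.0" 200 512',
    '203.0.113.6 - - [01/Dec/2020:10:00:02 +0530] "GET /style.css HTTP/2.0" 200 204',
    '198.51.100.7 - - [01/Dec/2020:10:00:03 +0530] "GET /old-page HTTP/1.1" 200 1024',
    '203.0.113.8 - - [01/Dec/2020:10:00:04 +0530] "POST /api HTTP/1.1" 201 64',
]
print(protocol_split(sample))  # {'HTTP/2.0': 50.0, 'HTTP/1.1': 50.0}
```

On a real deployment you would stream the log file through the same function rather than a hard-coded list.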
And the goal is also to reduce packet loss, retransmission and so on. So one of the problems TCP has to solve is deciding how many packets can be sent at any point in time. Basically what TCP does is start off slow. It begins by sending a few packets, and once the client receives a packet it's supposed to send an ACK, an acknowledgement that it received the packet; all of these packets are sequenced, and this happens continuously, with the client sending ACKs for the packets it receives. Now, TCP starts off with an initial slow-start threshold, and depending on your custom stack, if you're running your own server infrastructure you might set that threshold fairly high, and a lot of CDNs do that straight off the bat. They understand that network conditions have improved over the years and they are aggressive with TCP optimizations. You'll hear that TCP optimization is something both cloud service providers and content delivery networks do very frequently today. So you're optimizing that initial TCP window and you start sending packets more aggressively, but any time there is a packet loss, that window shrinks by half. You'll go back to half the number of packets you were sending, and you have to climb up that ramp again. Got it. That's essentially TCP slow start, along with the sliding window. Okay. And now the latency and round-trip time: anything on that front? Yes, that's an interesting one. Part of it is true of HTTP/2 already, because you don't have to establish those additional TCP connections: you save on the SYN/SYN-ACK, that entire TCP connection establishment, completely. You're able to make the same requests on a single TCP connection, your bandwidth is fixed, so there's an inherent optimization.
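The "ramp up, halve on loss, ramp up again" behaviour just described can be shown with a toy model. Real TCP (ssthresh, congestion avoidance, fast recovery) is considerably more involved; this only illustrates the shape of the curve:

```python
# Toy model of slow start with multiplicative decrease on packet loss.
def congestion_window(rounds, loss_rounds, initial_window=10):
    """Return the window size (packets in flight) for each round trip."""
    window = initial_window
    history = []
    for rtt in range(rounds):
        history.append(window)
        if rtt in loss_rounds:
            window = max(1, window // 2)   # loss detected: window shrinks by half
        else:
            window *= 2                    # slow start: exponential growth
    return history

# Losses in rounds 3 and 5: watch the window collapse and re-grow.
print(congestion_window(rounds=7, loss_rounds={3, 5}))
# [10, 20, 40, 80, 40, 80, 40]
```

A CDN setting an aggressive initial window, as mentioned above, corresponds to raising `initial_window` so the ramp starts higher.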
There's another optimization that's interesting and fairly new, and that has to do with the advancements in TLS, specifically TLS 1.3 and the zero-RTT handshake. Basically what that means is that the client and server are able to establish the TLS connection very quickly: they don't need one or two RTTs to establish that connection, which was the case previously. And just a small call-out: these optimizations are a little nuanced. You get zero RTT only on a reconnection, that is, if the client has already established a connection with the server before; it's one RTT for a new connection. So again, all of these performance optimizations get a little nuanced when you look at them closely. There are some inherent challenges with this in HTTP/3. You have to keep rotating some of the tokens you issue for zero RTT, and zero RTT is inherently susceptible to replay attacks. So you technically don't want zero-RTT connections to carry transactions that change state. Those are things that will evolve, either in the standard or with clients defaulting to certain best practices going forward, so we'll have to wait and see. For now, if you're implementing HTTP/3 with zero RTT, you have to make sure that you're not changing state on data sent with zero RTT, because you're setting yourself up for a potential vulnerability. Okay, it's slightly technical, but I have some sense of what you're saying. Let's run through the other stuff before we end this conversation, just to know what else is interesting out there. Sure. So we spoke about HTTP/3, and that comes with QUIC. QUIC is also a protocol in itself; right now the plan is to ship it with HTTP/3, but like TCP, it can be used for other use cases as well. So once HTTP/3 rolls out, it'll be interesting to see where else QUIC can be used, and that has great potential, right? It really solves some of the fundamental problems that TCP had.
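The precaution Satya describes, never change state on a request that arrived as 0-RTT early data, can be sketched as a server-side guard. This leans on RFC 8470, where an intermediary marks such requests with an "Early-Data: 1" header and the origin can answer 425 (Too Early) to force a retry after the full handshake; the request-dict shape here is an illustrative assumption, not a real framework API:

```python
# Reject state-changing requests that arrived as TLS 0-RTT early data,
# since early data can be replayed by an attacker.
SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}   # read-only, replay-tolerant

def guard_early_data(request):
    """Return an (status, reason) decision for a possibly-replayed request."""
    is_early = request.get("headers", {}).get("Early-Data") == "1"
    if is_early and request["method"] not in SAFE_METHODS:
        # A replayed POST could, say, double-charge a payment: reject it
        # and let the client retry once the handshake has completed.
        return 425, "Too Early"
    return 200, "OK"

print(guard_early_data({"method": "POST", "headers": {"Early-Data": "1"}}))
print(guard_early_data({"method": "GET", "headers": {"Early-Data": "1"}}))
```

The same rule applies whether the 0-RTT data arrives over TLS 1.3 on TCP or over QUIC.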
So it'll be interesting to see where that goes. There are also advancements planned in QUIC. What happened with QUIC was that the initial proposal was to include things like multipath and forward error correction, since it's built on UDP, directly in the protocol, but that would have complicated things a lot. So the first version pretty much took away all the good-to-haves and went with the core feature set. QUIC version two is likely going to see forward error correction and multipath built in. Some of the other capabilities that HTTP/3 proposes include connection migration. For example, today you're connected over wifi, and if there's a power cut you switch over to your mobile connection, but that connection cannot be reused: your client, whether it's Zoom or your browser, actually sets up a new connection because it's taking a different path. With HTTP/3, there are proposals where you'll be able to reuse that same connection even though you're using a different path. That has its own challenges, and it's a little more nuanced a topic, because to reuse connections the server and the client must agree on a connection ID, and that also means another vector for fingerprinting. Given that we've put in a lot of effort to make the protocol fairly secure, and given the current climate around privacy, that could be a problem. So there are proposals to work around such issues, but we'll see what really goes into the final draft. The same is true for some basic security capabilities going into HTTP/3. Some optimizations that existed around TCP, like the SYN cookie, now have an advanced version built into the protocol itself: the stateless retry, where the server need not keep track of every connection that's being made. And there are some potential use cases in load balancing as well, which is very interesting.
And all of this is at the protocol level, which makes it very interesting to watch out for, to see where future developments head. Load balancing at the protocol level: what does that mean exactly? So, for example, today when you think of load balancing, essentially you have a load balancer which sends a request to server A or server B. HTTP/3 will come with a connection ID, and using that connection ID you can always go directly to the server you've actually established the connection with. Yeah, I'm oversimplifying it greatly; it's a lot more nuanced than that, because you don't want to expose your internal setup to the external world, and there are multiple proposals on how exactly it should be implemented. But that's in the works: in a very simple sense, you can use that connection ID to figure out which server a packet should head to. Got it, but we still need the load balancer to decide, at the start, which server to hook this particular client up with. Yes, but you don't have to get to the application-layer data to begin with. For example, today your load balancers work either on IP, round robin, or application-layer data, like a cookie that decides which way a request goes. So you don't need all of that overhead, because this connection ID is not encrypted; it's one of the very few fields that are not encrypted in HTTP/3. So it's an interesting optimization. All right, the next point, which is CPU usage. Why is there such high CPU usage, Satya? I mean, there are two or three reasons for that. One, the industry has invested heavily in TCP. What I mean by that is, for example, if you look at all the CDNs, they've invested heavily in optimizing the TCP stack because a lot of the traffic is on the TCP stack: HTTP/1.1 and HTTP/2 run over TCP, and that's very, very well optimized. The same goes for your cloud service providers, like AWS and Google Cloud.
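The connection-ID routing idea above can be sketched in a few lines. The encoding below (first byte of the connection ID names the backend) is purely illustrative; real proposals such as QUIC-LB are far more careful about not leaking the internal server layout:

```python
# Greatly simplified connection-ID-based routing at a load balancer.
BACKENDS = ["server-a", "server-b", "server-c"]

def issue_connection_id(backend_index, nonce):
    """Embed the chosen backend in the ID the server hands the client."""
    return bytes([backend_index]) + nonce

def route(connection_id):
    """The load balancer reads the (unencrypted) connection ID on each
    incoming packet and forwards it without touching application data."""
    return BACKENDS[connection_id[0] % len(BACKENDS)]

# First packet: pick a backend however you like (round robin, least
# connections, ...); every later packet carrying the same connection ID
# then lands on that backend, even after a client network change.
cid = issue_connection_id(1, b"\x9f\x3a\x11\x07")
print(route(cid))  # server-b, for every packet of this connection
```

This is what "load balancing at the protocol level" buys: no cookies or other application-layer parsing is needed to keep a connection pinned to its server.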
All of them know that the underlying stack for pretty much any connection is going to be TCP. UDP is definitely used, but it's not as widely adopted, so you could get away without optimizing for it. So in short, the UDP stack will still need to see some optimizations come in, and that gap cascades into the additional server utilization. That's one part of it. The second part is that with HTTP/3, individual packets now need to be encrypted. With HTTP/2 and HTTP/1.1, your entire payload is encrypted as one stream; now each individual packet is encrypted, and in a sense twice over, because the payload is encrypted and the headers are protected as well. So that also adds significantly to the compute overhead. Does it also mean that as a Google Chrome user we'll see more frozen browser windows, because you have to do the decryption at that end as well, right? I don't think it'll lead to frozen windows. It'll be interesting to see if it drains my battery; that would be interesting to see. But look, I think these are teething problems. As we see better adoption, Chrome will optimize that particular stack, so you will not see any noticeable changes. And in my mind, that's one of the reasons it's not turned on by default; that's why you have to go to the config page or Chrome internals and set a flag to turn it on. When it becomes more mature, it'll be available by default. Got it. There's a question that has come in from the audience, from Deepak Kumarjan. Okay, there are actually two questions. The first one we have probably already touched upon a bit, but there's no harm in reiterating it once: if a site is upgraded to HTTP/3, is it possible to have a fallback to HTTP/2 in case it isn't supported? Satya, that's the first question. Yeah, I'm trying to understand that question a little bit, right? So let's take a step back and figure out how a protocol upgrade really works, or what the handshake and upgrade mechanism is.
So HTTP/1.1 is the default, right? What really happens is that when you establish an HTTP/1.1 connection, you switch over to TLS, and once you've established the TLS connection, you get an option to upgrade to HTTP/2. Then, if the response carries an Alt-Svc header, you have an option to upgrade to HTTP/3. So by the time you're already at HTTP/2 or HTTP/3, you've already made that decision. And the other thing to keep in mind is that this is a single connection: if the connection was already established, there's no reason for a downgrade. At least, I can't think of a good reason for one. The fact that the connection is established already means that both the client and the server support that protocol, so for that connection it's unlikely to make a difference; you shouldn't have to switch back and forth. But for subsequent connections, you can simply stop advertising HTTP/2 or HTTP/3 and it would stay at HTTP/1.1. I know this doesn't specifically answer the question. But I think I can translate your answer, because I feel what Deepak is trying to ask is: if I enable HTTP/3 on my server, what happens if a client doesn't support it? I guess that is the frame of mind from which this question was asked. And in that case, your answer clarifies it: from a client point of view, you never directly jump onto HTTP/3. You start with HTTP/1.1, go to HTTP/2 and then to 3. So it's never going to be the case that you're on HTTP/3 and have to come back to HTTP/2. That makes perfect sense; that's how the actual strategy works. The second question Deepak asked is: what are the major use cases of HTTP/3? How would you respond to that question? So, the use cases for HTTP/3 are any good use cases for QUIC.
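The Alt-Svc header mentioned above is how a server advertises HTTP/3 ("h3") on an existing HTTP/1.1 or HTTP/2 response. A small parser makes the mechanism concrete; the sample value is typical of what large sites send, and the sketch skips Alt-Svc corner cases such as the "clear" value:

```python
# Parse an Alt-Svc response header into {protocol: (authority, max_age)}.
def parse_alt_svc(value):
    """e.g. 'h3=":443"; ma=86400, h2=":443"' -> {'h3': (':443', 86400), ...}"""
    services = {}
    for entry in value.split(","):
        parts = [p.strip() for p in entry.split(";")]
        proto, authority = parts[0].split("=", 1)
        max_age = 86400                      # spec default when "ma" is absent
        for param in parts[1:]:
            key, _, val = param.partition("=")
            if key.strip() == "ma":
                max_age = int(val)
        services[proto.strip()] = (authority.strip('"'), max_age)
    return services

header = 'h3=":443"; ma=2592000, h3-29=":443"; ma=2592000'
print(parse_alt_svc(header))
# {'h3': (':443', 2592000), 'h3-29': (':443', 2592000)}
```

A client that sees `h3` here may try QUIC on port 443 for subsequent connections, for up to `ma` seconds, which is exactly the "option to upgrade" described above.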
So wherever you see QUIC bringing benefits to the table, HTTP/3 is likely to extend the same benefits to those use cases. And if you extend that a little further, what are the use cases for QUIC and HTTP/3? The best answer right now is: wherever HTTP/2 has benefited you, HTTP/3 is likely going to bring you incremental benefits and not take anything away. So it's not a downgrade from HTTP/2, but it could bring you additional benefits. And those additional benefits, like I said, come from the fact that the difference between HTTP/2 and 3 is for the most part at the transport layer. What that means is you're fundamentally shifting from TCP to UDP, and depending on which region and what kind of network conditions you have, QUIC is likely to give you better results than TCP. And QUIC and TCP are not the only two competing; I mean, QUIC is not a silver bullet. There are a lot of conversations where BBR, a congestion control algorithm used over TCP, has shown better results compared to QUIC. So I think there is no single good answer; it's a very nuanced question and it really depends on the particular use case. There's no silver bullet saying: for all banking use HTTP/3, for everything else use HTTP/2. But a good use case for QUIC is definitely streaming. Yeah, because you have to send so many packets and such, I guess. But the fact that they're not ordered, wouldn't that hurt? Streaming would want the packets to be reasonably ordered, right? Actually, streaming is one of the use cases where a few dropped packets don't really matter; your players are resilient to a few dropped packets. But that said, your app stack never actually sees any of these nuances; it's all handled at the protocol level itself. QUIC has mechanisms to retransmit the packets that are dropped, and that's one of the reasons none of this sits on raw UDP: it's on QUIC, which has these capabilities built in. All right, great. I'll do a quick time check.
We are almost through with our time. I have just the final two questions to close this conversation off, Satya. If you look back at the time it took for HTTP/1.1 to become mainstream, or for 2 to become fairly widely used, in how much time should we tune in to the news to see if we need to check some boxes on our servers for HTTP/3? In six months' time, a year, two years: when do you think we should tune in? Yeah, that's a good question, right? I just turned off sharing because my laptop was showing my login screen. Okay, so if you go back to the timelines, one of the things you'll notice is that HTTP/1.1 was stable and existed for a really long time, and we've had two major version upgrades since 2015. So in the span of five years, we've had two changes. And if you look at HTTP/2, it took roughly two to three years, once it was finalized, to get to a very healthy adoption rate. So I would say that in the next two to three years, HTTP/3's adoption rate should have increased, if everything pans out. But like I said, HTTP/2 had some advantages that HTTP/3 is in a sense going against, and that is moving away from TCP to a UDP-based transport mechanism. So we'll have to see how wary the various entities are. I must ask this very pointed question: is there any part of you that is skeptical this will ever be adopted? Is that question for me? Yeah. Do you have any part of you that thinks HTTP/3 may never get adopted? No, it's not that it's never going to get adopted, but there are some fundamental challenges. I think we need a little bit of collaboration in the ecosystem for this to work. A lot of the firewalls need to catch up with HTTP/3, and when it's a hardware stack, that takes a little longer.
You have a lot of network elements that are very finely tuned or optimized for HTTP, or for TCP, I should say. And these are not network elements within a cloud service provider or in your homes; this is the plumbing in between, and that is usually a layer which is resistant to change. So I'm a little skeptical about the adoption rate, but I think going forward we'll definitely see HTTP/3, or at least QUIC, getting wider attention and some adoption. It's just going to take some time. Yeah, the web is messy. Its standards are messy. Changing protocols and all of these things is messy, and it takes time for any of this to come about. I think we've learned over the last 25 or 30 years how slowly these things change. And many times, only one organization pushes it to the point that everyone else just agrees and starts implementing it; sometimes that's what ends up happening. And as an extension of this point, what are the channels we should keep a check on? What are the mouthpieces or sites or places we should follow to know about the developments in HTTP/3? So that's actually an easy one, given today's climate. You have almost a monopoly in the browser market, which pretty much drives the clients. So you can just follow the Chromium developments, Chromium specifically, because a lot of browsers are using Chromium as their underlying engine. So that's an easy one: once you see that Google Chrome has widespread support for HTTP/3 turned on, that's when you should start paying attention to it. Right now it's not on by default, so you're in good shape. In terms of other resources you should look at: CDNs are likely going to be the first organizations to start picking up on some of these technologies. Which has already started. Which has already started.
And in some ways, once all the major CDNs have adopted these technologies, you'll see a lot of sites automatically switching to the new protocols. The same thing happened with HTTP/2: because Google Chrome was in such a dominant position, they could turn on SPDY for at least the Google properties, which in turn created the drive for quicker adoption, and CDNs picked it up, which meant that almost every organization could just turn on HTTP/2 without making any changes at their end. The same thing is going to be true for HTTP/3; I don't see any changes to that approach. Got it, so keep track of Chromium, the devlog, Google Chrome and Google's announcements, and announcements by CDNs. I guess that sums it up if you're an application developer or a web developer building websites and things like that. And there have been a lot of announcements, to be very honest. Akamai has put out public blog posts, and the other CDNs have put out a lot of good-quality content which is available. And there are a lot of open-source implementations for you to tinker around with and try if you want to experiment with HTTP/3. So everything is fair game at this point in time. Perfect. And with that, let's wrap this session up. Thanks a lot, Satya. It was great to hear about HTTP/3; something we thought we'd get through in 15 minutes clearly took a lot longer, and there were so many things to talk about and discuss. So thanks for your time, and thanks for the very to-the-point presentation you gave us. All right, before we end this session, I just want to remind everyone that we do these Content Web sessions almost weekly, every Saturday at around this time, with someone from the dev side, the design side, or maybe the content side as well. In the upcoming few weeks, we have a session on responsive data visualizations on the web.
We are doing another conversation on type scaling as well. So there are a few very interesting sessions that we have planned for the next few weeks. Do stay tuned in on hasgeek.com slash content web, and also use that space to put in your proposals: if you want to hear about any other subject area under the content websites umbrella of topics, please put in your proposals or requests on hasgeek.com slash content web. Hope to see you next Saturday for the next session we have. Thanks a lot, Satya.