I'm going to talk. Okay, perfect. So this is Miloslav's talk then. You're welcome. And today we're going to talk about HTTP/3. Why should we all care? You're going to tell us, right?

Yeah, I hope so, I will try to do my best. So hi everybody, welcome. I hope that you are curious about HTTP/3, and I wish you to enjoy this talk. This is an exceptional opportunity for me to speak at EuroPython. As was said, I'm Miloslav Pojman and I'm streaming from Prague. I work here for Akamai Technologies in the protocol optimization team. You may know that Akamai runs one of the largest CDNs in the world, with more than a quarter of a million servers around the globe, and our peak traffic is more than 150 terabits per second. The reason why I'm speaking here today about HTTP/3 is that our protocol optimization team enabled QUIC for this whole Akamai network. If you are wondering how QUIC is related to HTTP/3, then you are watching the right video, because I will explain what QUIC and HTTP/3 are in the next approximately 40 minutes.

I assume that you have at least a rough idea of what happens under the hood when you visit your favorite website. But don't be afraid if you are not an expert in network protocols. This is an introductory talk, so I will start with a quick recap to make everything clear.

So when you visit a website, a browser issues HTTP requests. And HTTP is a simple text protocol. You can speak HTTP yourself, even without a web browser. You open a TCP connection, for example using telnet or netcat, and write your request. You press the return key twice, and the server should send a response back. That's all. It's that simple. And I'm pretty sure that most of you know this protocol, because it has been here since 1997, and most websites still use this protocol from the 90s. The difference from the 90s is that most traffic today is encrypted: it's using HTTPS. HTTPS means HTTP over TLS, and an HTTPS connection is encrypted using TLS.
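The speak-HTTP-yourself exercise above can also be done from Python with a raw socket instead of telnet. This is a minimal sketch; it uses a tiny local stand-in server (a hypothetical one-shot responder, not part of the talk) so it runs without network access, but the client bytes are exactly what you would type into telnet against a real site on port 80.

```python
import socket
import threading

# A tiny one-shot HTTP/1.1 server standing in for a real website,
# so this example runs without network access.
def serve_once(server_sock):
    conn, _ = server_sock.accept()
    conn.recv(4096)  # read the request; a real server would parse it
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 13\r\n\r\nHello, world!")
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))  # port 0 = let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# The client side: these bytes are what you would type into telnet,
# including the blank line (the doubled \r\n) that ends the request.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
response = b""
while True:
    chunk = client.recv(4096)
    if not chunk:
        break
    response += chunk
client.close()

print(response.decode().splitlines()[0])  # HTTP/1.1 200 OK
```

The status line printed at the end is the same "HTTP/1.1 200 OK" you would see in a telnet session.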
That's the difference, but besides that, nothing changes. Once I open a TLS connection, for example using OpenSSL, I can write my HTTP request just as I did before.

If you want something more sophisticated, we should move to 2015, when HTTP/2 was published. Unlike the first version, HTTP/2 is a binary protocol, so I won't show you an example here. HTTP/2 has many nice properties. Probably the most important one is that it supports multiplexing: it allows you to download multiple objects in parallel over a single connection.

To see why this is useful, let's look at an example of how a web page is loaded using HTTP/1. With HTTP/1, browsers typically open multiple connections per server to allow at least some parallelism. In most cases, browsers open six connections per server, per domain. So let's look at this slide. First, an HTML page is loaded. But that's not all: a typical page today consists of hundreds of objects. Images, JavaScript, styles, fonts, advertisements, tracking codes, whatever. So you have to load much more. The browser opens another five connections and downloads six objects in parallel. The yellow and red lines measure the time necessary for connection setup. Further requests have to wait until one of the connections is available, until one of the previous requests finishes, and so on. Further requests are queued for even longer. In this diagram, the green bars are waiting times, the latency: the time between the client sending a request and getting a response. As you see in this example, these green bars, and the waiting before them, can be much longer than the very tiny blue bars at the end, which measure actual download times.

The described issue is called HTTP head-of-line blocking: HTTP requests are blocked until one of the six connections is available. To minimize the consequences of head-of-line blocking, we invented JavaScript bundles or image sprites. The idea is simple.
If you download fewer objects, you issue fewer requests and you spend less time waiting. But you can imagine that it may not be the best idea to download everything in one blob, because you will probably be downloading much more than you need.

With HTTP/2, browsers don't open multiple connections per domain; they open one connection per domain only, and then they use it to download all objects concurrently. The blue download times can be longer because the connection is shared, but avoiding the unnecessary waiting is a game changer. Today, most top sites use HTTP/2. We say that when performance matters, HTTP/2 should be used. Obviously, you should measure what works in your specific use case, but this is my generic recommendation. Because with HTTP/1 you can be limited by latency, by the distance to the server; with HTTP/2 this limitation is reduced, allowing you to utilize most of the bandwidth.

So we have two important HTTP versions. We have HTTP/1, which is more than 20 years old and still good enough for most sites. And we have HTTP/2, which was standardized only five years ago, and which is much better in performance than the previous version. In this situation, it's fair to ask why we need a new protocol today, why we need a new HTTP version. What can HTTP/3 offer us?

The answer is that HTTP/3 is completely different. It replaces the foundations of the Internet. That's a brave statement. What are the Internet's foundations? How does the Internet work? Not an easy answer; the Internet is a complicated beast. The good news is that we mostly don't care, because all the complexity is hidden from us by something called TCP. TCP is like a magic box. You write something to the box, and on the other side of the Internet, somebody can read it from their box. If something gets lost on the way, TCP retransmits it. If something is delayed, TCP reorders it. The TCP layer handles various network glitches for you.
So you get a reliable byte stream, and you don't have to care how it is implemented. You can write anything to a TCP socket. In this talk, we care mainly about HTTP/3, but TCP can transport anything. It can be FTP, it can be your emails over POP3 or SMTP, and hundreds of other protocols.

This TCP protocol is older than HTTP. It has been here since the 80s, with early implementations from the 70s, and we have used it ever since. TCP is implemented in your operating system. Thanks to that, we can use TCP from almost any programming language, including Python, and we can do that with just a few lines of code. You know, if you are used to high-level APIs, then TCP sockets may look complicated to you, old-fashioned. But if you consider that these few lines of code get your data across the Internet, and how old this API is, I think it's pretty amazing.

I have already told you that HTTPS is HTTP over TLS. And TLS is just another box that sits on top of TCP. It encrypts everything that goes in and decrypts everything that goes out. TLS prevents eavesdropping and data tampering; that means that nobody can read or change your payload. And similar to TCP, TLS is also protocol independent. We can write HTTP to it and we get HTTPS. But, for example, FTPS is FTP over TLS over TCP.

And now, what would our options be if somebody banned us from using TCP? TCP has a younger and less clever brother called UDP. UDP is primitive compared to TCP. With UDP, you send your packets, and they may arrive or may not. And if they arrive, they can appear in any order. It means that your app has to handle every network glitch. Typical UDP use cases include DNS, online gaming, or real-time streaming. So if we had no TCP, we would probably have to use UDP. There are other protocols, many other protocols, but in practice, only TCP and UDP are supported over the Internet by the devices in the wild.
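The few-lines-of-code claim holds for UDP as well. A minimal sketch of a local UDP exchange, assuming nothing beyond the standard library: note that there is no connection setup at all, and each sendto is an independent datagram that, on a real network, may be lost or arrive out of order.

```python
import socket

# "Server" side: one UDP socket bound to a local port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))  # port 0 = let the OS pick a free port
port = server.getsockname()[1]

# "Client" side: no handshake, just fire a datagram at the server.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", ("127.0.0.1", port))

# The server reads the datagram and echoes it back to the sender.
data, addr = server.recvfrom(1024)
server.sendto(data, addr)

reply, _ = client.recvfrom(1024)
print(reply)  # b'ping'
```

On localhost this round trip is reliable; over the Internet, the application itself would have to detect a missing reply and retry, which is exactly the work TCP, or QUIC, does for you.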
So if I had to use UDP and I wanted something reliable like TCP, then maybe I would try to build something like TCP on top of UDP. You know, some kind of abstraction to reuse my code between applications. And maybe that's not the worst idea. Maybe the thing that we build on top of UDP can actually be better than the original TCP.

Say hello to QUIC. QUIC is a new transport protocol designed at Google. It emulates and improves TCP on top of UDP. QUIC is TCP redesigned and rewritten from scratch. QUIC has to implement everything normally provided to you by the kernel of your operating system; it has to implement it in your apps, in user space. Similar to TCP, QUIC retransmits and reorders packets, so you get a reliable byte stream, and you don't have to care how it is implemented. One difference from TCP is that QUIC has TLS built in: everything delivered using QUIC is encrypted by default. And as with TCP, QUIC can be used to deliver anything, including HTTP. At least in theory, or in the future; in practice today, we use QUIC almost exclusively with HTTP. And in this talk, we care about HTTP; we care about HTTP/3.

This gets me to HTTP/3. HTTP/3 is HTTP over QUIC. HTTP version 3 is similar to HTTP version 2, but it is delivered using QUIC, using UDP instead of TCP. That's what makes HTTP/3 so interesting: it uses a completely different transport protocol under the hood than we have used since the 90s.

But one does not rewrite the TCP layer just for fun. The new layer should give us some advantages. So what are they? The main advantage of QUIC is that it's multiplexed. It supports many independent streams, many independent logical flows, within a single connection. Wait, wait, wait. Were you listening to me? Didn't I tell you that multiplexing is the main advantage of HTTP/2? I did. But let me explain the difference. HTTP/2 is multiplexed, but the underlying TCP is not.
So when you are transferring multiple objects, multiple requests, in parallel over HTTP/2, HTTP/2 has to serialize them into a single TCP stream. And that single TCP stream is guaranteed to be delivered in order. Now imagine what happens when one packet is lost. Everything is blocked. Everything has to wait for that one packet. One lost packet completely stops everything. Your operating system can have the object you need in its buffer, but it won't give it to you, because the TCP layer promised to return everything in order. This is called TCP head-of-line blocking. So with HTTP/2, we got rid of HTTP head-of-line blocking, but we got TCP head-of-line blocking instead. Slightly better, but not much.

QUIC, unlike TCP, supports independent streams. It knows which objects are in which stream. Thanks to that, when one packet is lost, only the objects delivered in that packet are blocked; the others can continue. This can improve performance and user experience on lossy connections.

Another important QUIC advantage is that it offers a faster connection setup. I probably don't have time to go into much detail here, but the problem with TCP is that it has too many layers. You need one round trip to set up a TCP connection. Then you need at least one round trip to set up an encryption context, to set up TLS. So the first useful data can be delivered in the third round trip at best. With QUIC, you have one layer less. The QUIC handshake and the TLS handshake can happen at the same time, shortening the connection setup.

But we can make this even faster, even better. QUIC also supports so-called 0-RTT. With the 0-RTT handshake, we can include useful data with the handshake itself, so there is no extra round trip that you have to wait for. This is possible when a client knows some secret from before, so it is available for reconnections to servers that you have spoken to before. I have to warn you that there is one danger with 0-RTT.
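The round-trip accounting above can be written out as back-of-the-envelope arithmetic. A sketch assuming an illustrative 50 ms round-trip time; the exact TLS cost varies by version, so treat these as the best cases the talk describes, not measured numbers.

```python
RTT_MS = 50  # illustrative round-trip time to the server

def setup_cost(round_trips):
    """Waiting time, in ms, before the first response byte can arrive."""
    return round_trips * RTT_MS

# TCP handshake (1 RTT) + TLS handshake (at least 1 RTT) + request/response (1 RTT):
# the first useful data arrives in the third round trip at best.
tcp_tls = setup_cost(1 + 1 + 1)

# QUIC: the transport and TLS handshakes share the same round trip.
quic = setup_cost(1 + 1)

# QUIC 0-RTT: the request rides along with the handshake itself.
quic_0rtt = setup_cost(0 + 1)

print(tcp_tls, quic, quic_0rtt)  # 150 100 50
```

At 50 ms per round trip, QUIC saves 50 ms on a fresh connection and 100 ms on a 0-RTT reconnection, which is why the handshake story matters so much on high-latency links.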
And it's a security issue: 0-RTT allows replay attacks. For that reason, you should enable it for idempotent requests only, for requests that can be replayed without any side effects. If I am to be fair to TCP, I have to mention that there is a TCP extension called TCP Fast Open that should enable something like 0-RTT in TCP. But it does not work very well in practice; I will explain why in a minute.

Another QUIC advantage, an interesting one, is that QUIC supports connection migration. Unlike TCP, QUIC does not identify connections using an IP address and port number. Instead, it uses a unique connection ID sent inside the connection. Thanks to that, it's possible to switch your connections. For example, you disconnect from Wi-Fi and move to a mobile signal, and your connections, your requests, should be able to continue without any interruption.

I already mentioned that QUIC is always encrypted. What's important here is that QUIC encrypts not only your message, but also the traffic control metadata, the traffic control headers. With TLS over TCP, your message is encrypted, but the things around it, like the headers, the metadata, are not. This increased encryption is important for your privacy, obviously. But not only for that; it is also important for future development of the technology, for future development of QUIC. Because there's a problem with the Internet, and the problem is that there are many devices, many boxes, that try to help you. These boxes think that they understand the protocols that you are using, so they somehow interfere with them, and they claim that it's in your best interest. But as these boxes get old, their understanding gets worse, and at some point they are likely to cause more harm than help. Take HTTP/2, for example. The old boxes can assume that any TCP on port 80 is HTTP/1, because it was always like that. With this assumption, the boxes can break any HTTP/2 traffic there.
And that's the reason why we use HTTP/2 over TLS only; we use encrypted HTTP/2 only. When something is encrypted, middleboxes don't see it. So TLS can hide HTTP from the middleboxes. But the problem remains, because the boxes still see the TCP layer. I mentioned TCP Fast Open, the thing that's something like QUIC's 0-RTT. It was never widely adopted, because there are many boxes that consider traffic with this extension invalid, and they can drop that traffic. So TCP Fast Open can become TCP slow open.

By encrypting everything in QUIC, including traffic control, including metadata, including headers, we refuse any help from the middleboxes. Thanks to that, the QUIC protocol can evolve in the future. We say that QUIC avoids ossification.

There are other advantages. An interesting one is that QUIC offers much faster development. Because it's implemented in user space, implemented in apps, you can upgrade QUIC with any software upgrade. You get a new browser version, and you can get a new QUIC version. QUIC also offers better options for congestion control and loss recovery. That's a very interesting problem; my protocol optimization team spends a lot of time on that. The idea is that you want to send data fast enough to utilize your bandwidth, but not so fast that you cause packet loss.

But no technology comes with advantages only, so there are challenges for QUIC too. From my point of view, the main problem with QUIC is that it's new. For example, internet providers may not expect regular traffic over UDP, and they can rate-limit it or even block it. And because QUIC is not implemented in operating systems, apps have to provide their own implementation. If we compare these new implementations with the TCP stacks optimized since the 80s, the quality of QUIC can be questionable. And probably the most discussed challenge is CPU usage. That's definitely an area where future development and optimizations can help.

But let's move from theory to practice.
What can you try today? What's the current state of HTTP/3? I hope that you are curious. But before I get to that, I have to clarify one important distinction between two QUIC implementations: we have Google QUIC and IETF QUIC.

Google QUIC, or gQUIC, is proprietary open source. Is that possible? It's open source, available in Chromium, but developed by one company, by Google. It's not standardized, and it can change with any Chromium, any Chrome, version. On the other hand, IETF QUIC, or QUIC with no prefix, is the upcoming standard. The IETF working group is finalizing it, so we will hopefully have an official version soon. These two versions are not compatible, and support for them differs. That means when we discuss support, we have to say which of these two versions we mean.

Let's start with gQUIC. It's quite likely that you are already using Google QUIC today. When a Chrome browser connects to a Google server, it will very often use QUIC; it will use QUIC in most cases. Chrome is quite a common client, and Google services are also popular, so a significant portion of traffic today is already over QUIC. At Akamai, we enabled gQUIC for all media customers. That's a lot of traffic too. But besides that, I'm not aware of any other large-scale deployments besides Akamai and Google. I heard some rumors about closed internal deployments at large scale, but nothing officially available, nothing public.

Strictly speaking, though, gQUIC is not HTTP/3, and this talk is about HTTP/3. HTTP/3 is HTTP over IETF QUIC, the upcoming standard. And we can expect support for HTTP/3, for IETF QUIC, in all major browsers. Chrome Canary and Firefox Nightly have supported IETF QUIC for some time already, and Apple announced support for it in their next operating systems. So I believe that we will get official support in all browsers soon.

Speaking about clients, a special mention belongs to curl, which has experimental support for HTTP/3.
The reason is that curl is not only a command-line tool, but also a C library, a very popular C library. So if, for example, your car speaks HTTP/3 in the future, it will most likely be thanks to libcurl. If you want to learn more about HTTP/3, my first recommendation would be talks by Daniel Stenberg, the author of curl.

On the server side, you can notice that CDNs, content delivery networks, do not want to miss this opportunity. CDNs are investing in QUIC and HTTP/3, so if you are using their services, you may get HTTP/3 without extra effort. If you prefer a do-it-yourself approach, you may like that HTTP/3 support for NGINX is in progress. The code is developed in a separate quic branch, so the support is not stable yet, but I believe that they will get there. Just be aware that enabling HTTP/3 is only the first step. I know something about it, because our team has spent the last few years looking for the best QUIC configuration. We did a lot of optimizations, and we are still not done.

If you want to see other HTTP/3 implementations, visit the GitHub profile of the IETF QUIC working group. There is not only a list of implementations, but also a compatibility table, which shows you which client works with which server. It's pretty green these days, so the future is green.

If you want to try HTTP/3 yourself, I recommend downloading Firefox Nightly. You can enable HTTP/3 in its configuration, in the about:config page, and you can visit one of the test pages. NGINX has a nice test page, Akamai offers one, and others can be found at the IETF QUIC working group.

So, we have two versions. Which version should you choose today, Google QUIC or IETF QUIC? You know the answer. If you want to develop anything today, go for the standard IETF version. Its support is experimental only today, but this should get much better soon. Akamai has supported QUIC since 2016.
We deployed the proprietary version back then, and we have been updating it since, because this was the only way to offer QUIC's advantages to our customers, to our users. And it's still the only way today. But in practice, Google is backporting changes from IETF QUIC to their version, so it's quite likely that both versions will converge in the future. This is very similar to how HTTP/2 was born. There was a proprietary protocol called SPDY. It was developed at Google; then the IETF took it and standardized it as HTTP/2. Today, everybody uses HTTP/2 and nobody cares about SPDY.

If you want to see whether a server supports HTTP/3 or QUIC, look for an Alt-Svc header. The Alt-Svc header is sent by servers to inform clients that they can switch to HTTP/3. This mechanism is different from switching from HTTP/1 to HTTP/2. The reason is that for deciding between HTTP/1 and HTTP/2, you can open a TCP connection and then negotiate which version to use. With QUIC, you have to open a separate QUIC connection, and you have to know where you can connect and whether you can connect. By the way, this Alt-Svc header can be quite useful for my job, for protocol optimization, because all Akamai servers support QUIC today, but we send this Alt-Svc header only when we believe that our clients will benefit from it.

Python. This is a Python conference. If you want to try HTTP/3 in Python, go for the aioquic library. It's the only Python library mentioned by the IETF QUIC working group. I tried it and it works nicely. Yes, I tried it. That's all. I write Python a lot, but my code does not use HTTP/3 yet. Sorry. Why? Let's ignore the fact that Google QUIC is available in Chrome only and deployments of IETF QUIC are experimental at best. Let's discuss how I can use HTTP/3 once it is standardized, once it is widely deployed. Because I'm afraid that to use HTTP/3, I will have to change how I write my code. I told you that the main advantage of QUIC is multiplexing.
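The Alt-Svc header I described is easy to inspect yourself. A minimal sketch of picking the advertised endpoints out of an Alt-Svc header value: the header format follows RFC 7838, the sample value is made up for illustration, and `parse_alt_svc` is a hypothetical helper that ignores edge cases like the special "clear" value.

```python
def parse_alt_svc(value):
    """Return {protocol-id: authority} from an Alt-Svc header value."""
    services = {}
    for entry in value.split(","):
        first = entry.strip().split(";")[0]  # drop parameters like ma=86400
        proto, _, authority = first.partition("=")
        services[proto.strip()] = authority.strip().strip('"')
    return services

# A made-up example advertising HTTP/3 (and a draft version) on port 443;
# ma=86400 tells the client to remember the alternative for a day.
header = 'h3=":443"; ma=86400, h3-29=":443"; ma=86400'
print(parse_alt_svc(header))  # {'h3': ':443', 'h3-29': ':443'}
```

The `ma` (max-age) parameter is exactly why a client needs storage: it is expected to remember, across requests, which origins advertised HTTP/3 and for how long.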
That is, you can issue multiple requests in parallel. To use that, I need some kind of async programming, probably asyncio in Python. Another issue with HTTP/3 is the Alt-Svc header. Your client has to remember which servers sent this header, which servers support QUIC, so your client will need some kind of storage. And the storage will be needed for 0-RTT too, because you have to remember secrets between sessions.

But before speaking about migration to HTTP/3, I should probably talk about HTTP/2 first, because I guess that most of you do not even use HTTP/2 in Python yet. The most common Python library for issuing HTTP requests is urllib3. I use it because it's a dependency of popular libraries like pip or Requests. And it does not support HTTP/2 yet. If I had to choose a library for HTTP/2, I would use HTTPX. It has a nice API and it supports async invocation. So if I had to use HTTP/3, I would like to have something like HTTPX: a library that has a nice API, supports the whole protocol, and is asynchronous. And I want a library that chooses the best HTTP version for me. When I write Python, I don't want to care about HTTP versions. Until I have that, I will probably use HTTP/1 from the 90s.

Let's summarize what I have presented to you. We have a new transport protocol called QUIC. It's a TCP replacement on top of UDP. HTTP/3 is similar to HTTP/2, but built on top of QUIC, on top of UDP instead of TCP. For practical usage, I want to issue HTTP requests without caring about HTTP versions, similar to browsers: when I use my browser, I don't care what HTTP version it uses. I like the view that there is only one HTTP, and the versions are just mappings to different transport layers. HTTP/1 and HTTP/2 map this one HTTP to TCP, and HTTP/3 maps it to UDP.

Should you care? I think you should be at least aware that something important and interesting is happening. All the big players are involved in it. The change is quite low level, so it may not affect you directly.
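The kind of async invocation mentioned above follows the asyncio.gather pattern. A sketch with simulated fetches, assuming nothing beyond the standard library; with a real multiplexing client such as HTTPX, the body of `fetch` would be an actual awaited request on a shared client instead of a sleep.

```python
import asyncio

async def fetch(path, delay):
    # Stand-in for a real HTTP request; with HTTPX this body would be
    # roughly: response = await client.get(base_url + path)
    await asyncio.sleep(delay)
    return f"{path}: done"

async def main():
    # All three "requests" are in flight concurrently, like independent
    # streams multiplexed over a single connection.
    return await asyncio.gather(
        fetch("/index.html", 0.03),
        fetch("/style.css", 0.02),
        fetch("/app.js", 0.01),
    )

results = asyncio.run(main())
print(results)
# ['/index.html: done', '/style.css: done', '/app.js: done']
```

The whole batch finishes in roughly the time of the slowest fetch, not the sum of all three, which is the payoff multiplexing promises once the transport supports it.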
But if you are programming for the web, then you should at least keep an eye on it. If you want to learn more, I already recommended talks by Daniel Stenberg. If you want more details, or a more critical view, look for talks by Robin Marx. Obviously, I should not forget about the IETF QUIC working group. And even more links are on my personal homepage with these slides; you will find the link in the schedule or in the chat. And that's all from my side. Thank you very much for listening to me.

Thank you, thank you. That was a really nice, really interesting talk. So we have time for maybe one or two questions, but not more, so I will pick one. Mansour is asking whether QUIC is adding another layer of encryption below TLS, and whether it is adding more delay because of that.

It's not. QUIC is not adding another layer. You can use HTTP without TLS, so compared to plain HTTP, it is adding another layer. But if you compare it to HTTPS, it uses the same layer; it's just integrated into the protocol itself, so you have to use this encryption. Speaking about performance, QUIC is faster. Not in all cases, obviously, but in most cases, and that's the primary reason why we use QUIC: to make the internet faster, to speed up connections. So again, you should measure what works in your case, you should optimize it, but the goal of QUIC is to make the internet faster.

Okay. So Philip is saying that he finds HTTP/2 isn't common in Python code bases; it's handled, for example, by a load balancer like NGINX. He's asking if you think the same will happen with HTTP/3, that it will not be so popular or so widely used.
I think that just as there are libraries for HTTP/2, there will definitely be libraries for HTTP/3. Me personally, I would recommend for most use cases: write your apps as you do so far. Speaking about the server side, let the server handle it. That means it's very likely that you have, for example, NGINX as a proxy in front of your app, and that NGINX should probably speak HTTP/3. I probably don't want to write an HTTP/3 server in Python, not for a production use case. It can be a nice learning tool, but for production I would use either some production-grade server or a CDN.

Okay, perfect. Thank you very much. Thank you for presenting at EuroPython. And if anyone wants to continue this discussion or to ask more questions, there is a channel in Discord for this talk about HTTP/3. You can go and find Miloslav there.

Yeah, I will be there. Thanks.