I'll just speed it up a little bit then. I'm going to talk about HTTP/3 today. I'm Daniel, I've been here before; this is actually my 11th talk at FOSDEM, and if anything, this sort of audience makes your pulse go up a little bit. I started doing HTTP a long time ago. I have this little hobby project, curl; some of you might know it. I started it and I've worked on it daily since then. It's fun, you should try it. I work for wolfSSL, doing curl stuff every day. But today I'm going to talk about work done within the IETF, where I'm participating together with a lot of other people. So this is what I'm about to try to squeeze into 45 minutes or so: we did HTTP/1 and HTTP/2, and now we're going to 3. There were some problems along the way; that's why 3 is coming. I'm going to explain a little bit about those problems, how QUIC is the solution to them, a little bit about HTTP/3 on top of QUIC, some of the challenges with running this now or tomorrow, and something about when it's about to arrive, or maybe, I don't know. I talked about this last year, and no, this is not exactly the same talk, I promise, but you might recognize a few things, because HTTP/3 is still number three. So this is how it started; this is a picture from the 90s. We started with HTTP/1: HTTP/1.0 shipped in 1996, then it took a long time until we did HTTP/2 in 2015, and now we're working on HTTP/3. So we're getting a little bit faster. HTTP is this protocol you've all seen and played with. A client asks for something from a server; the server responds with a status code, headers, a body and so on. You've seen it, it's there, and it's going to be like that in the future as well. We started this journey doing HTTP over TCP, and we still do it over TCP. So, TCP, networking 101: I like to picture it as a chain of links, where each link is basically an IP packet.
We send the packets over the wire and establish a connection with a three-way handshake: ping, ping, ping, three messages across the network, and we get that connection and can send data over it as a reliable byte stream, sort of. And it's in clear text, so anyone snooping on the network can read your traffic. The first TCP RFC is from 1981, so it's celebrating 40 years next year. It's trusted; we know it, we've used it, it works, it's there. But we're talking HTTPS these days, right? HTTP works over TCP, but we're not talking clear-text HTTP, we're talking HTTPS. And do we? Yes, we do. Looking at page loads done by Firefox on the internet today, divided by continent, somewhere around 80%, maybe up to 90%, of all page loads are HTTPS. Basically, the web is going HTTPS. The same data from Chrome, split by platform instead: some platforms are upwards of 90%, some only 70%, but the trend is clear. It's growing; the web and everything is getting more and more encrypted. So it's not only HTTP over TCP, we're talking HTTPS, which means we're adding TLS to the mix. TLS is the transport layer security we add on top, sort of, so that we get security. And with TLS we usually add at least one extra round trip to establish that TLS connection, sometimes more, depending on the version and so on. With this we get privacy and security: nobody can snoop on your traffic, nobody can change your traffic, and you know you're actually talking to the right server at the other end. Excellent, that's what we need. So this is what we have: IP, TCP over that, TLS to secure it, and HTTP over that. HTTPS: what we know and use and love, every day, everywhere. So, back again: HTTP is done over TCP.
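The round-trip accounting described above can be made concrete with a back-of-the-envelope sketch. This is purely illustrative: the 100 ms RTT and the helper function are made up for the example, and real handshakes vary with TLS version, resumption, and implementation details.

```python
# Rough round-trip counting before the first HTTP response byte arrives,
# assuming a 100 ms network RTT (illustrative numbers, not a benchmark).
RTT_MS = 100

def time_to_first_byte(handshake_rtts):
    # handshake round trips + one round trip for the HTTP request/response
    return (handshake_rtts + 1) * RTT_MS

# A TCP three-way handshake costs one full RTT before data can flow;
# TLS 1.2 adds two more round trips, TLS 1.3 adds one.
print(time_to_first_byte(1 + 2))  # HTTP over TCP + TLS 1.2 -> 400
print(time_to_first_byte(1 + 1))  # HTTP over TCP + TLS 1.3 -> 300
print(time_to_first_byte(1))      # QUIC: transport + TLS 1.3 in one RTT -> 200
print(time_to_first_byte(0))      # QUIC 0-RTT resumption -> 100
```

The point is simply that every layer of handshake multiplies with the network RTT, which is why shaving round trips matters so much on high-latency links.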
When we started out, HTTP/1.1, the great improvement over HTTP/1.0, shipped in 1997. It was a great improvement because it fixed how we used TCP connections: now we could reuse them, which we couldn't with HTTP/1.0, yay. But over the years, pages grew: many images, many scripts, many style sheets, hundreds of objects on every page. Wait a minute, we only have how many connections? So we came up with new ways to do more connections in parallel, and all the browsers today open many connections in parallel, and you can trick them into doing even more. So we ended up in a world with so many parallel connections that we have to kill them off really often, because there are too many to keep up all the time. The median number of HTTP requests done per TCP connection in Firefox is one. They're all created, used once, and shut down: created, created, created, used, shut down, shut down. And TCP is really inefficient in the beginning because it has a slow-start period, so these connections never get up to speed. In an HTTP/1.1 world, we can never use TCP well enough to actually get fast transfers. That's just it; it doesn't work. We also got this fine thing we call HTTP head-of-line blocking: we have a finite number of connections, typically six per host name in a browser, and your typical website has many more than six objects, a hundred images and style sheets and so on. You have to use those six connections to download those hundreds of objects, but on which of the six connections do you put your next request for the next object? It's a really difficult problem. It's like the supermarket, when you have those lines to the cashiers: which one is the fastest? Not the one you pick, right? That's going to be the one with the trainee, or the annoying customer ahead of you. You never know.
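The supermarket problem above can be sketched as a tiny scheduling simulation. Everything here is invented for illustration (the speeds, the seed, the round-robin policy): the client must commit each request to one of six connections without knowing how each will perform.

```python
# Toy illustration of the HTTP/1.1 scheduling problem: 100 objects,
# six connections, and each request must be committed to a queue up
# front without knowing how fast that connection will turn out to be.
import random

random.seed(42)
# per-connection time to serve one object (ms); the client cannot see these
speeds = [random.randint(20, 200) for _ in range(6)]

# Round-robin assignment of 100 objects, as if each queue were equally good.
queues = [0] * 6
for obj in range(100):
    queues[obj % 6] += speeds[obj % 6]

# The page finishes when the slowest queue does: the "wrong cashier" cost.
print("per-queue finish times:", queues)
print("page complete at:", max(queues), "ms; a perfectly balanced split",
      "would finish around", sum(queues) // 6, "ms")
```

The gap between the slowest queue and the balanced average is exactly the head-of-line penalty the talk describes: you pay for every bad guess.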
The browsers and the HTTP clients have this problem, and of course the entire world, everyone in here too, has come up with fine and ingenious ways to work around it. Put everything in a single file. Concatenate all the JavaScript into a single blob. Do whatever fun things. Invent new host names so that the browser will open many more connections. You can be creative. But to combat all those funny workarounds with a unified effort, we created HTTP/2 and shipped it in 2015. It moved those workarounds into the protocol instead: now we're in a world with a single connection per host. Not 48, one. That's much better, because now TCP can get up to speed; we can saturate the bandwidth in both directions much better, congestion control works, everything is fine and dandy. We have many parallel streams over that single connection, so we get much better parallelism. But instead we fell into another trap: now we have TCP head-of-line blocking instead of the previous HTTP head-of-line blocking. We can typically have 100 streams over that single TCP connection, and when we lose one little packet, because we have a lossy network, and need to retransmit that single packet, all 100 streams are waiting for it to get retransmitted. Then it arrives, the 100 streams can continue, and then we lose another packet and everyone has to wait again. Not ideal. And at the same time, a long-running trend on the internet is this thing we call ossification, which has been going on in parallel with everything else. Ossification is the effect of the internet being full of boxes: routers, gateways, load balancers, Wi-Fi routers, lots of boxes out there. They were all installed at some point in time in your networks, they run software, and they all know how the network worked at the time someone installed them.
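The TCP head-of-line blocking described above follows directly from TCP's in-order delivery guarantee, and can be sketched in a few lines. The function below is a simplified model, not real TCP: it just shows that nothing past a hole in the sequence space can be handed up to HTTP/2.

```python
# Sketch of TCP head-of-line blocking: one byte stream carries 100
# multiplexed HTTP/2 streams, so a single lost packet stalls all of
# them, because TCP may only deliver bytes in order.
def deliverable(packets_received, lost_packet):
    """Packets the receiver can hand to the HTTP/2 layer, given one hole."""
    delivered = []
    for seq in sorted(packets_received):
        if seq >= lost_packet:
            break          # everything after the hole must wait in the buffer
        delivered.append(seq)
    return delivered

received = [0, 1, 2, 4, 5, 6, 7, 8, 9]      # packet 3 was lost on the wire
print(deliverable(received, lost_packet=3))  # -> [0, 1, 2]
# Packets 4..9 have arrived, but the streams inside them stall anyway
# until packet 3 is retransmitted.
```

Note that the stalled packets may belong to streams that have nothing to do with the lost data; TCP cannot know that, which is precisely the motivation for moving multiplexing below the retransmission layer.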
They very rarely know how the future is going to look, and they're all afraid of it. They know that if a field was a zero today, it will never be anything but zeros. We know that. And we never upgrade these boxes. We buy them and put them in the networks. We upgrade our servers, well, some of us actually do, every year or so. And the browsers upgrade themselves basically every week, automatically. So the edges upgrade. The middle is stuck in time: it knows how the network worked when those boxes were installed, five years ago, ten years ago. Let me really drive this in. That's me, there's the website, and all those little machines in between are middle boxes somewhere. They're stuck in time, and their mission is to run the network the way the network worked when they were installed: today, yesterday, a few years ago. This has a fun effect, "fun". Ossification, let me call it. For example: wait a minute, you can do HTTP/2 in clear text, right? You just upgrade to HTTP/2; in theory you don't have to do it over HTTPS. Except a lot of these boxes know that if you speak on TCP port 80, that means HTTP/1.1. So whatever you actually speak on TCP port 80, those boxes will "help" you speak HTTP/1.1, and if you speak something else, they will just ruin your traffic completely. To the extent that no browser today uses clear-text HTTP/2. One browser in particular specifically intended to support it, until they tried it and figured out how many boxes "help" you. So no: you have to hide your traffic in encryption to make sure it actually works. Or: wait a minute, we can improve TCP instead. Why not invent a way to send data earlier in the TCP handshake? We can call it TFO, TCP Fast Open. That invention is from about ten years ago, so, excellent.
If you've talked to a server before, you can send data already in the first packet of the three-way TCP handshake, so you can gain a lot of latency there. Let's do that. It took five to seven years until the kernels of the three big platforms supported it. Yay, now we can do it on Windows and Mac and Linux. The browsers started to try it. Does it work? No. Those boxes know how a TCP header is supposed to look, so if you set those little extra bits in the TCP header that say "try TCP Fast Open", a number of boxes will just throw the packet away. TCP Fast Open turns out to be TCP slow open quite a lot of the time. So now no operating system and no browser enables TFO by default, because it doesn't really work across the web. And the list goes on. If you want to introduce a new transport protocol next to TCP and UDP across the internet, that doesn't work either, because your Wi-Fi router at home can only NAT those two protocols and nothing else. It stops already there. So no, there won't be any new transport protocols either. Fine, we can invent stuff at the HTTP level instead. Brotli is a compression algorithm, much better than gzip for some purposes. Excellent. Except, wait a minute: a lot of those boxes know that HTTP compression means gzip. So if you send Brotli, they'll "help" you, and some percentage of those connections simply get ruined because boxes in between have "fixed" your traffic along the way. So of course the browsers can do Brotli, but they only do Brotli over HTTPS connections: hide it from all those boxes. Encrypted is fine. This trend goes on, and it completely stops future innovation in a lot of these protocols. If you do it in the clear, someone along the way will ruin your day, and it's not going to be fun.
So we need to encrypt more. More encryption: make everything random noise to every box in between, so they can't tell what it is and can't ruin it. That's the way we're going. We need to improve these protocols in spite of ossification, simply by making sure the boxes cannot see what you're doing and cannot assume it will stay this way forever just because it looked like this today. That's one of the things that led to the creation of the QUIC working group in the IETF. That's the official logo, and I just want to drive home one special thing: QUIC is a name, not an acronym, whatever you may think. It doesn't stand for anything. Why is it all in uppercase? So that we can shout it, I think. But it's a name; it doesn't stand for anything. A lot of companies immediately took an interest in this. All of these and more are sending people to the working group in the IETF, and there has been fierce participation and activity since it started in 2016. So this is a new transport protocol, even though I just said you can't introduce one next to TCP and UDP; bear with me. It's built on experiments and tests from Google, pretty much the way they did it with HTTP/2. You remember they did SPDY, which eventually became HTTP/2 in the IETF. Now they did their experiments again and, to make everything more complicated this time, they also called their protocol QUIC; let me get back to that. They deployed it already in 2013, so it's been going on for a while, and they have, you might not know this, a fairly well-used client and some popular services, which makes them an excellent place to do these wide-scale experiments. Does it really work, a billion clients against their servers? Yes, they proved that it actually works and that it improves a lot of things. So they took it to the IETF in 2015: let's make this a standard instead of us doing funny things in our corner.
So yes, the IETF agreed, with a bunch of caveats and conditions, and created the QUIC working group in 2016. Two of the conditions are worth spelling out. Google's QUIC, their version of it, basically sent HTTP/2 frames over UDP, demuxed them at the other end and inserted them into your HTTP/2 stack: just translate, send it over UDP, translate it back to HTTP/2. The IETF said: well, that's a really weird layering violation, one mushy layer, why would you do that? Let's split it up into a transport protocol and an application layer protocol: we do QUIC, and we put an application layer on top of that. And we need proper encryption, not your home-grown thing: we're going with TLS. Now, when you make a new transport, you can do a lot of things. You can fix the head-of-line blocking problem; I'll get back to how that works. And why not fix that TFO thing I mentioned, sending data earlier in the handshake? You suddenly have a new chance to fix all the problems we've had in transport since that first RFC in 1981, and a lot of transport people have a pent-up list of fun things they want to do in transport. Woohoo, a chance to redo it; a lot of fun things are coming in. For example: why tie a connection to your IP address? We did that back in the 80s, and I don't think anyone really expected the explosion of IP addresses per host. When you walk out of your Wi-Fi onto your cellular network, you have two different interfaces in your system with different IP addresses. What happens to your connection? In the TCP world, the connection is tied to your IP address, which is tied to your network interface, so you have to open a new connection. Not with QUIC, because a QUIC connection is not tied to your IP: it's tied to a connection ID, so you can migrate between network interfaces. And more encryption, always encryption.
There's no clear-text version of QUIC. It's even more encrypted than TCP with TLS, because fewer parts of the header are in the clear. Basically, there are a few clear-text packets at the beginning, in the handshake, and then everything else is encrypted: basically noise to everyone in between. This is hopefully going to ensure that we can do future development. Hopefully it means we can make a QUIC version 2 in the future without a load of those boxes knowing about it and drawing conclusions from the traffic patterns, because there shouldn't be much of a traffic pattern. We build this on top of UDP. We basically pretend that UDP is the IP layer; we move everything up one layer and do everything over UDP: we write a reliable transport protocol in user space and send everything over UDP. It's a little bit like reinventing TCP and TLS in one layer, over UDP. And no, it's not UDP; we're *using* UDP. With plain UDP, you send whatever you want, and it might end up at the other end: packets can be reordered, there are no retransmissions, there's no flow control. We're not doing that; we're using UDP as a substrate, and we add all of that on top: connections, reliability, flow control, streams, security. So QUIC is a new transport protocol: TCP is out, QUIC is in. QUIC also provides streams, like we had in HTTP/2 and in other protocols such as SSH or SCTP: individual logical streams within the connection, so we can send many logical flows over a single connection. And you can start them from both ends, and make them bidirectional or unidirectional, so it's a little bit more complicated than before. And possibly the biggest point with these streams is that they're independent. What does that mean, independent?
It means that when we send QUIC, we know what's in each packet, so if we lose a packet along the way, we know which streams that packet affects. If we have 100 streams again and lose a packet, we know which streams were in that lost packet; the other streams can go on. Maybe two streams were affected: then the other 98 streams continue, and just those two streams wait for the lost packet to be retransmitted, and then they go on too. The streams are independent of each other, but internally each one is still reliable and in order and everything. Just to illustrate that with a fun image I could draw: TCP, again, I like to illustrate as a chain with individual links. If you're sending two streams there, a green stream and a red stream, and you lose a link, the other stream also has to wait, because it's one single chain: if one link is gone, everyone waits until it's repaired. But in QUIC, the streams are independent: it doesn't matter if we lose a yellow one, the blue one can go on anyway. So that's the transport protocol, QUIC. On top of that transport protocol we put an application layer, and the application layer gets streams for free, because they're in the transport layer. So it could be virtually any protocol on top. When Google first brought this to the IETF, one of the first discussions was: we should make sure it works with more than one protocol, not only HTTP. Let's start with DNS and HTTP. Great, but a little time-consuming, so the DNS part was postponed to focus on the HTTP one. It doesn't matter; the intention is there and the separation exists. It really is an application layer on top of a transport layer, so there will definitely be more protocols on top of QUIC once QUIC ships, because there's a sort of pent-up demand.
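The stream-independence argument above can be sketched the same way as the TCP chain. This is a toy model, not real QUIC: the mapping of streams to packets is invented, but it captures the key property that loss only stalls the streams whose frames were in the lost packet.

```python
# Contrast with QUIC: packets carry frames tagged with a stream ID, so
# a lost packet only stalls the streams whose frames it carried; all
# other streams keep delivering. A minimal sketch, not a QUIC stack.
def stalled_streams(packet_to_streams, lost_packets):
    blocked = set()
    for pkt in lost_packets:
        blocked |= set(packet_to_streams[pkt])
    return sorted(blocked)

# 100 streams spread over packets; packet 3 happened to carry frames
# for streams 7 and 42 only.
packet_to_streams = {0: [1, 2], 1: [3, 7], 2: [10], 3: [7, 42], 4: [99]}
print(stalled_streams(packet_to_streams, lost_packets=[3]))  # -> [7, 42]
# Every stream not listed keeps going while packet 3 is retransmitted.
```

Compare this with the TCP model earlier, where a hole at packet 3 would have stalled every stream regardless of which ones its bytes belonged to.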
A lot of people are just waiting for QUIC to ship so they can translate their protocols over to QUIC instead of TCP. So that's QUIC, the transport protocol. When we talk about HTTP/3, that's how we do HTTP over this new transport protocol. HTTP is the same as always: that's me, that's the server, we send a request with a method and a path, headers, maybe a body, and we get back a response with headers and a body. I showed you this in the beginning: that's HTTP, and it's still going to be that. Most people are not going to care how it's transferred over the wire; it doesn't matter, this is HTTP to us. HTTP is the same, but different. We started out with a protocol in ASCII; in HTTP/2 we made everything binary and did all the multiplexing in the HTTP layer; now we've taken the multiplexing out and put it in the QUIC layer instead. But it's still basically HTTP/2-like, done over QUIC. Back again to how this translates to a network stack view. There's IP, and this is the old stack: HTTP/1 and HTTP/2 look pretty much the same. Well, HTTP/2 actually says you have to use TLS 1.2 or later, but anyway, there's the TLS layer. Instead we're now building on UDP, we put this huge QUIC blob there with TLS 1.3 inside, and we have HTTP/3 on top of that; the streams have moved down one layer. A lot of new stuff, right? At least there's a lot needed to make QUIC work, but HTTP/3 itself is not that different from HTTP/2. If you look at a basic feature comparison: yes, we have a different transport, we have streams, and we can no longer do clear-text versions, but in practice we don't do that with HTTP/2 anyway, because of what the browsers do. The streams are independent, but in the feature list that's a comparatively minor difference.
We're going to do header compression, we're going to do server push, possibly better early data, but it's basically the same HTTP/2-like feature set; it's HTTP/3 now because it's over QUIC. And we're changing the prioritization scheme: it's actually completely gone from HTTP/3 right now. It's messy in HTTP/2, and in 35 minutes you can hear a talk by Robin Marx about it. Fun subject, but no, you're not all going to fit in that room anyway. So, is this going to be faster, better, cooler? I think it's going to be faster thanks to QUIC, because QUIC makes your handshakes much faster. Early numbers from Google, from years ago when they tried their version, showed that upwards of around 70% of the connections they saw could be established in a 0-RTT way, basically no handshake at all, because you'd had a connection to that host before. And zero round trips is a lot fewer than the five or seven you can get with TLS over TCP, and you get early data that actually works, so you should be able to send a lot more data much earlier in the handshake. So yes, it should be a really good latency improvement for those first important packets, especially in HTTP and similar protocols. And the independent streams are really going to help, especially those of you with really crappy networks, which typically don't haunt us much in the Western world. But the worse your network, the better QUIC and HTTP/3 are going to look, I think, in comparison with the previous protocols. By how much? I won't show you any numbers, because we're still in the early days: nothing has shipped, the protocol isn't done, there's no finished code, and nobody is willing to stick their neck out and say exactly how much faster it will or won't be. It remains to be seen. Numbers from the Google QUIC days showed that it can be better, from a little better to much better, depending on your use case and what you're doing.
Okay, so how do you get to this world of HTTP/3 over QUIC, done over UDP, when https:// means TCP? URLs are baked into the internet in so many places that they're not really possible to replace; we can't change them to anything else. We have these https:// URLs, we have to work with them, and they imply that you connect to TCP port 443 with a TLS handshake on top. So how do you get to HTTP/3 from that? You use this fun header, Alt-Svc, "alternative service". You first connect to the server with old-style legacy HTTP/2 or HTTP/1, talk to it, and you get a header back that says: hey, I'm also available over there, for this period of time, speaking this protocol. That's how you're supposed to do it per the spec. Your browser will do this in the background, of course, and try whether it works; if it can, it upgrades and uses HTTP/3 the next time you go to that server. But you also end up in this fun situation: wide-scale, really high-speed internet data over UDP, that's something new. A lot of organizations and companies have basically blocked or throttled UDP already, because UDP is mostly used for DDoS attacks and such, they reason. So when the server says "I'm over there" and your client tries to connect, many times it won't work, because your organization or your company has blocked UDP. Then you shut down your laptop, bring it home, and it works, because at home you don't block UDP. So I'm sure that browsers and other clients are going to race these connections: why not just try both the old connection and the new one at the same time?
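The Alt-Svc header mentioned above has a simple comma-and-semicolon syntax. Here is a minimal parser sketch in the spirit of RFC 7838; real clients handle quoting, percent-encoding, and other edge cases that this toy version skips, and the header value in the example is invented.

```python
# Minimal parser for an Alt-Svc response header value, e.g.
#   Alt-Svc: h3-25=":443"; ma=86400, h2=":443"
# meaning: HTTP/3 draft 25 is available on UDP port 443 of the same
# host for the next 86400 seconds, and h2 on TCP port 443.
def parse_alt_svc(value):
    services = []
    for entry in value.split(","):
        proto, _, rest = entry.strip().partition("=")
        params = [p.strip() for p in rest.split(";")]
        endpoint = params[0].strip('"')       # "host:port"; host may be empty
        svc = {"protocol": proto, "endpoint": endpoint}
        for p in params[1:]:                  # extra parameters like ma=...
            k, _, v = p.partition("=")
            svc[k] = v
        services.append(svc)
    return services

hdr = 'h3-25=":443"; ma=86400, h2=":443"'
print(parse_alt_svc(hdr))
```

The `ma` ("max-age") parameter is what gives the advertisement its "for this period of time" semantics: after it expires, the client goes back to the original transport and asks again.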
And as I said, that's going to be needed anyway, because when you shut down your laptop, bring it home and open it up again, maybe HTTP/3 works now, maybe it doesn't: it's blocked in one situation and not in the other. So there's going to be a lot of probing and testing and racing the methods against each other. And QUIC connections verify the server certificate anyway, right? So even if you just make a bet that there might be a QUIC server over there, if it connects, you know you're talking to the correct one, because you validate the server certificate. There's also another effort in DNS, with quite a mouthful of a name, HTTPSSVC, which is basically the old Alt-Svc header put into DNS: you look up in DNS first whether you'll be able to connect with HTTP/3, and which server to connect to. So that's how you're supposed to do it. Okay, assuming all of this lands, will it work? Will HTTP/3 be the best thing ever, starting immediately? There are a few challenges. First, we do this over UDP, and as I mentioned, a lot of organizations block it. Somewhere around three, seven, maybe 20 percent of connections from clients to servers will simply fail because someone along the way has blocked UDP, because UDP is bad and mostly DDoS anyway, apart from DNS. So all clients need fallback algorithms, and they have to work transparently, because that's what your users expect. And this has the reverse incentive, of course: since every client is going to fall back anyway, it's pretty easy to block UDP, because most clients will just silently fall back. It's going to be interesting. And it's a challenge on the deployment side too: you deploy your servers, and the traffic looks like a DDoS attack.
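The racing-with-fallback idea can be sketched as a happy-eyeballs-style dialer. Everything here is invented for the example: the fake `connect_quic`/`connect_tcp` functions, the failure mode, and the head-start timing are stand-ins, not any real client's algorithm.

```python
# Happy-eyeballs-style racing, sketched with fake dial functions: start
# the QUIC attempt, give it a short head start, and silently fall back
# to TCP if it has not connected in time (or fails outright).
import concurrent.futures as cf
import time

def connect_quic():
    time.sleep(0.05)
    raise OSError("UDP blocked by the network")   # simulate a blocking firewall

def connect_tcp():
    time.sleep(0.02)
    return "tcp-connection"

def dial(head_start=0.2):
    with cf.ThreadPoolExecutor() as pool:
        quic = pool.submit(connect_quic)
        try:
            return quic.result(timeout=head_start)    # prefer QUIC if it wins
        except (cf.TimeoutError, OSError):
            return pool.submit(connect_tcp).result()  # transparent fallback

print(dial())  # -> tcp-connection here, since the QUIC attempt fails
```

This also illustrates the reverse incentive from the talk: because the fallback is silent, a network that blocks UDP sees no user-visible breakage, only slightly slower connections.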
So you need to handle that in new ways you didn't before. Actually, I think deploying the servers is going to be possibly the most challenging part, because your load balancers and everything around them are going to be new things. And then, it only takes about three times the CPU to serve the same bandwidth as you did with HTTP/2, and that's quite a big investment; maybe that will make people hold off for a little while. But it's still early days. Why does it cost this much, and how can anyone accept it? First, there's the amusing situation that UDP is really unoptimized in Linux. You would imagine that UDP is dead simple; why would it be less efficient than TCP? But we've been polishing TCP for decades, because that's what we've used for high-speed transfers. UDP, eh, not so much. So there's a lot of work going on to make UDP faster in the kernels, and, for example, we have really poor APIs for UDP from user space: they're not made for high-volume, high-speed UDP transfers. That's also being worked on. And of course there's no hardware offload for QUIC. Anyone who knows a little about serving TLS in a big server farm knows you have hardware offload to take care of the crypto in TLS; that's not there for QUIC yet, because QUIC uses TLS differently. Let me get back to that now. Google's version of QUIC is, confusingly, also called QUIC, so we call it Google QUIC, and QUIC became the IETF's QUIC. It had its own crypto, and when they brought it to the IETF, the IETF said: we can't have it like that, we need TLS 1.3. This was actually slightly before TLS 1.3 became an RFC, but fine, we now have TLS 1.3. But TLS was made to be done over TCP. To get a little bit technical here: over TCP we send TLS records, and over QUIC we send messages, because TLS records contain messages.
Basically it looks like this; let me show you a little image. This is how you do it over TCP: you send record frames with the messages inside. But that seemed completely pointless: why would you keep those frames? You don't need them in QUIC, because QUIC is a new protocol, so we can just send the messages. Fine, get rid of the crap we don't need and just send the messages. Well, apart from the little detail that no TLS stacks have APIs for this. So we just have to fix all the TLS libraries, right? Easy peasy. They also need a few other crypto secrets exposed from the TLS layer to the QUIC stack. So we just need all TLS libraries to be fixed first; sure, I'll get back to that as well. And all these implementations are in user land, which is not always a bad thing, because it makes it really easy to iterate and try things out. When Google tried it out in their experiment, it was really easy to iterate: they could bump their version every other day, upgrade the browser, upgrade the server, and just try it. That worked fine. But it also means that as an application author you have to get married to one of these libraries and hope it sticks around for a while. And there are no standard APIs, so you get married even closer to the one you pick. And you have this kernel/user-space transition all the time. Back to the APIs and the kernel: how much time do we actually waste going back and forth between kernel space and user space? So the obvious question is: will this be moved into the kernel? Because that's what we're used to: TCP is there, right? We're used to having the transport protocols in kernels. I don't have the answer. Maybe it will. I don't think it'll happen soon.
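The record-versus-message distinction above can be shown at the byte level. The sketch below builds a standard 5-byte TLS record header around a stand-in handshake message; the message contents are fake, and real QUIC carries the bare messages inside CRYPTO frames, which this toy ignores.

```python
# Byte-level sketch of the difference: over TCP, TLS handshake messages
# travel inside record frames (content type, legacy version, length);
# over QUIC the same messages are sent without any record layer.
import struct

def tls_record(payload, content_type=22):        # 22 = handshake
    # 5-byte record header: type (1), legacy version 0x0303 (2), length (2)
    return struct.pack("!BHH", content_type, 0x0303, len(payload)) + payload

msg = b"ClientHello..."                          # stand-in handshake message
over_tcp = tls_record(msg)                       # framed for a TCP byte stream
over_quic = msg                                  # QUIC carries the bare message

print(len(over_tcp) - len(over_quic))            # -> 5 bytes of framing removed
print(over_tcp[5:] == over_quic)                 # -> True: same message inside
```

The framing exists because TCP is an undifferentiated byte stream and the receiver needs record boundaries; QUIC already delimits data in its own frames, so the record layer is redundant, which is exactly why the TLS library APIs had to change.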
I know there are some efforts to do it, but there are also some fun new interactions between QUIC and HTTP/3 that would then have to live in the kernel, and would any kernel developer really want a new TLS implementation in the kernel? I don't know. I have my doubts that it will be a fast development, so I don't think it happens in the near term. What about tooling? There's a lack of tooling, of course; this is a new area. We're throwing out TCP, which we've used for quite a while. tcpdump, who hasn't used that? Well, not anymore. All those concepts of sequence numbers and windowing and so on are gone, so now we need new tools. Wireshark is there, of course: get the latest version and you can actually use it today, and it's excellent. And there are tools coming, like qlog and qvis: qlog is a sort of standardized logging format for QUIC services and implementations, and qvis is a visualizer for those logs. So tools are coming, but there's definitely a shortage, because this is really new stuff; it's a little bit thin there. A lot of work to do. Okay, so when will this ship? The QUIC working group has a charter with milestones for when things will ship, and it says there that it ships in July 2019. I'm only slightly disingenuous: there's actually a change suggested, which I think will be applied in the coming week, that removes the year completely, so there's no milestone anymore. But I would say maybe it'll be around July 2020, perhaps. I've talked about HTTP/3 many times, and this is one of the slides that tends to change the most; maybe it gets postponed again, maybe not, I don't know. It's really hard to tell. There are a lot of strong wills in the working group, and they want to do it right rather than soon. So who knows, maybe it's 2021. There are a lot of HTTP/3 and QUIC implementations.
I say over a dozen here, but I think there are even more. All those companies I showed you before all have their own stacks, so there's a multitude of them, in many languages. There's nothing that prevents you from jumping in immediately and starting to play with them. There are monthly interops, which actually prove that the specifications work: most of these implementations can interop with each other to a fairly high degree, at least. And we're on draft number 25 right now. So what's good? curl supports it, that's good, right? And if you really insist, you can also use one of the browsers: if you use one of those Canary ones or Firefox Nightly, you can enable it. I'll show you in a second. You can also enable it in servers: there's an NGINX patch that runs with quiche, which is a QUIC library, so you can run your own experiments already today, and they're up to date with the latest draft version. And I mentioned already that Wireshark can analyze these streams. But not everyone is on track yet. There's no Safari version, and there's no word from the big traditional open source servers. I don't know when they're going to do it; I don't think there's any official news from any of them. And then there's this fun thing, back again to the TLS situation: we only have to change all the TLS libraries, how hard can that be? Well, that's the pull request for OpenSSL to make the necessary API change: number 8797. It's still being discussed, so it hasn't even been merged. So it'll take a while, I think. And that's just about getting it merged into OpenSSL's git master; then it has to ship in a release and then get deployed on your Linux servers. I think it'll take a while. So if you want to run this immediately, today? Then you just fire up one of these Canary versions of the browsers and you enter that fun command line option.
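To make that concrete, here is a sketch of what invoking the experimental support looks like. The exact flag spelling and the draft number have varied between Chrome versions, and cloudflare-quic.com is just one example test server, so treat the details as assumptions:

```shell
# Chrome Canary: force QUIC on and pick an HTTP/3 draft version (h3-24 here)
chrome --enable-quic --quic-version=h3-24 https://cloudflare-quic.com/

# A curl built with HTTP/3 support: ask for HTTP/3 explicitly
curl --http3 https://cloudflare-quic.com/
```

In Firefox Nightly the equivalent switch is a pref in about:config rather than a command line option.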
And if you're a little bit slow and just wait a few more days, I think you can change the 24 to a 25, because then they're going to upgrade to 25. And if you want to do it with Firefox, just find that little pref in about:config and enable it. Easy. And you can find test servers, even if there are basically none: Facebook runs one, and Cloudflare runs some public ones. Other than those, there are just a few test servers. Early days still. You can do it with curl, too, and it supports the latest draft version. curl supports two different backends, so you can pick whichever library you want to use: ngtcp2 and quiche. The fallback is tricky, so we don't do that in curl yet. I don't really know how to handle it; I'll figure it out. You can try it, it's fun. And if you do, it looks like this: you just ask for HTTP/3 instead of anything else, and the result looks exactly like HTTP always looks. There's no real difference. So, just to show you some of the problems with shipping this: you want to ship an HTTP/3-enabled curl, right? First we need the specifications, and they haven't landed; mid-2020, maybe. Then we have these libraries that we use to do all the binary stuff. They're all alpha versions, because the draft versions keep changing, so they'll probably ship only after the specifications do. And then I'd like a few more deployed servers before we can do this, right? Only Facebook and two Cloudflare servers are probably not enough. Browser support would be good too; it's actually still pretty crappy. Then I want to fix libcurl, and then we just have to get that TLS library situation fixed. How hard can it be to ship a TLS library API? And then we can ship curl. It might not be tomorrow. So I'm looking into my crystal ball: how will this look in the future? Yeah, this will take time.
I mentioned quite a few obstacles along the way here, so yes, it'll take some time. I think it will grow a little slower than HTTP/2 did, and I wouldn't say that HTTP/2 grew really fast either. But QUIC is also here for the long term. I think QUIC is truly the TCP replacement, the only existing and sort of viable replacement for TCP that has appeared in a very long time. And there's a big effort here, with a lot of big muscles and big companies behind it. So I'm pretty certain that QUIC will become the TCP replacement. Maybe not tomorrow, maybe not 2021, but it's going to be there down the line. And once this is shipped, people are waiting to add more stuff: multipath, forward error correction, unreliable streams, so you can mix reliable and unreliable streams within the same connection, like bringing UDP back into QUIC. Or what about partial reliability, just a little bit reliable? That's actually pushed for by video people. And of course, more application protocols. So there's a huge queue of people waiting: once this ships, there's going to be more work. There's going to be work on QUIC version two as soon as QUIC version one ships. So there are a lot of people waiting to do things. And whenever I say this, a lot of you are still waiting for me to mention something about WebSockets, right? I'm sorry, that's not actually a part of HTTP at all, and HTTP/3 definitely not. It's more of a thing on top. And this time we're not fixing it either. You could, in theory, fix WebSockets exactly as people did for HTTP/2, if you wanted to, but that's probably not how it's going to happen. It's going to happen in a completely different way: a new API for doing TCP-like things in JavaScript over the web protocols, called WebTransport. There's a draft for that. Right, now you can wake up over there. HTTP/3 is coming. It's going to be encrypted all the time, over QUIC, over UDP.
There are a lot of challenges to doing this, especially on the server side, and it might come in 2020. I'm always the optimist: when I talked about HTTP/3 last year here at FOSDEM, I think I said mid-2019, so who knows what I'll say next year. I wrote a book about it, or a document at least. It's there, it's free. And I just want to say that since the acoustics in here are completely crappy and everyone is going to walk out the second I show you the next slide, the better way to ask me questions about this is out here or in the cafeteria in five minutes, but you can ask me questions now too. Thank you.