at least knows some basics about networking. I'm not going to get into very many particular details here, so no bits and bytes. You can read up on that if you want to. These standards, neither QUIC nor HTTP/3, are done yet, so some details may actually change before they ship. Just be aware. And sure, I'm going to make sure that I talk fast enough so that we can have some questions after my talk; I think there's a full hour until the next talk. Okay, so HTTP/1 was defined: the first specification is from 1996, HTTP/1.0, and then 1.1 came in 1999. So it's been quite a long time. And then in 2015 we published HTTP/2. HTTP/2 has taken off, and nowadays, at least from browsers, it's used more often than HTTP/1, at least for HTTPS traffic. So it's fairly popular and has had widespread adoption. And now we're looking into the next step: HTTP/3. So, HTTP started out done over TCP; HTTP/1 is done over TCP. I'm just going to remind you, using a little image here of a chain with links, because that's how I view TCP. You set up a connection between two endpoints and you send data. There's a three-way handshake, three packets back and forth, before you have a connection. Then you send data and TCP resends lost packets, and you get a byte stream: you send data from one end and it ends up at the other end, in that order. Or it doesn't end up there at all if the connection breaks, but that's the basics of TCP. And it's in clear text, right? Everyone on the network can see your traffic. That's TCP. TCP was created a very long time ago, back in the 80s, and it has basically remained roughly the same over the years. But we use HTTPS today, right? And HTTPS is TCP with TLS added, and then you do HTTP over that.
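To make that byte-stream point concrete, here is a small illustrative sketch (my addition, not part of the talk) that uses a local socket pair as a stand-in for a TCP connection: the bytes arrive in order, but the boundaries between the two sends are not preserved.

```python
import socket

# A socketpair behaves like a connected TCP socket for this purpose:
# a reliable, ordered byte stream with no message boundaries.
a, b = socket.socketpair()
a.sendall(b"hello ")
a.sendall(b"world")
a.close()

chunks = []
while True:
    buf = b.recv(1024)
    if not buf:          # empty read: the peer closed the connection
        break
    chunks.append(buf)
received = b"".join(chunks)
print(received)          # the two sends merge into one ordered stream
```

If the connection breaks instead, the reader gets an error or a truncated stream, never reordered data, which is exactly the guarantee (and the limitation) described above.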
And HTTPS, looking at the Firefox trend (this graph ends at year-end 2018), the trend is pretty clear: somewhere around 80% of all page loads are using HTTPS. So we're going into a world where HTTPS is the primary protocol we talk web traffic over. Looking at the same sort of trend from Chrome's point of view, it shows roughly the same: somewhere around 80% of page loads nowadays. They split the data differently; this one is based on platforms, the other was based on continents, basically. So we're talking HTTPS here. HTTPS being TCP plus TLS, and TLS being the security layer that we add on top of TCP. That's how we secure TCP, and we do that for HTTP/1 and HTTP/2. This adds more handshakes, more back and forth: on top of the three-way handshake from before, you get extra round trips to set up TLS. With TLS 1.3 they fixed it, so it's actually not that many round trips anymore, but it's still additional handshakes on top. And when using TLS and HTTPS, we get both privacy and security: we actually know that we're talking to the right server, and we know that nobody can eavesdrop on your traffic, so nobody can snoop on what you're doing. That's what we like about HTTPS, right? Okay, so HTTP is done over TCP, as we have been doing for a long time. As I mentioned, HTTP/1.1 shipped in '99. We use HTTP/1.1 with typically a lot of parallel connections: a browser typically uses six connections per hostname, and all the bigger sites invent new hostnames to get even more. So you end up with a really huge number of TCP connections, to the extent that the median number of HTTP requests done per TCP connection by Firefox is one.
Basically all connections are used for one HTTP request and then closed; they have to be closed, because otherwise we would drown in TCP connections. So it's a very inefficient use of TCP. TCP also has this slow-start period, so it takes a while until a TCP connection actually gets up to speed. Closing connections immediately, like we do with HTTP/1.1, is really inefficient, which is basically one reason why, no matter how much you increase your bandwidth, you won't get faster websites with HTTP/1.1: we close the connections all the time. And we also have this little issue we call HTTP head-of-line blocking. When you're connected to a site, you have your six connections to that host. When you want to send the seventh request, because there are many images on the site, you have to wait for one of the other requests to complete before you can issue your seventh, or eighth, or ninth and tenth. They're all blocked, head-of-line style, by the requests in front of them. These are of course some of the limitations that people have created very imaginative workarounds for over the years, and those workarounds and solutions were basically taken into the work when HTTP/2 was made. HTTP/2 was made to fix some of those problems that we experienced with HTTP/1. It shipped in 2015, not that long ago. It uses one connection per host: no more six connections per host, and no more funny hostnames to create extra connections. Instead we do a lot of parallel streams within that one connection. Typically you can have 100 streams over a single connection; you can actually negotiate that number, but I think 100 is by far the most common value. So you don't get that HTTP head-of-line blocking anymore, well, not always, but much more rarely. You can send off more requests earlier; you don't have to wait for one request to finish before you send the next one.
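The contrast between HTTP/1.1's per-host connection limit and HTTP/2's multiplexing can be modeled in a few lines. This is a toy illustration of my own; the numbers six and 100 are the ones from the talk.

```python
# Ten requests to the same host, e.g. images on a page.
requests = [f"img{i}.png" for i in range(1, 11)]

# HTTP/1.1: six connections per host, one request in flight per
# connection, so the rest queue behind them (head-of-line blocking).
h1_conns = 6
h1_in_flight = requests[:h1_conns]
h1_blocked = requests[h1_conns:]

# HTTP/2: one connection with (typically) up to 100 concurrent
# streams, so all ten requests can be sent immediately.
h2_streams = 100
h2_in_flight = requests[:h2_streams]
h2_blocked = requests[h2_streams:]

print(len(h1_blocked), "blocked on HTTP/1.1,",
      len(h2_blocked), "blocked on HTTP/2")
```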
And that fixed HTTP head-of-line blocking, and instead introduced us to the TCP head-of-line blocking problem. Because now we've switched everything to one connection, right? Doing 100 streams over one TCP connection, if we lose a little packet in the middle, all streams have to wait until that packet gets resent, and only then can all 100 streams continue again. So going onto a lossy network with HTTP/2, using one connection, is really bad for you. That's TCP head-of-line blocking. At the same time, we have an internet that has developed this habit, or should I say pattern, that we call ossification, which basically means that everything we do on the internet gets stuck the way we introduced it. Over time we can't change anything anymore, because the internet is full of boxes. If you didn't know: it's a lot of boxes. There are gateways, load balancers, NATs, home broadband routers and whatever, all those little boxes between you and the server at the other end. There are a lot of boxes, and they all handle the network data in various ways. They forward IP packets, they terminate TCP, they load-balance HTTP; they do all these fun things that we need to make the internet work. But they are all typically made, designed, written to handle today's protocols, right? They're implemented to handle TCP and HTTP the way we know they work, and that means they're typically very, very bad at handling even slightly new things when we try to change protocols. We come up with a new use of some bits in a header, or new header values, and a lot of these boxes don't like the new stuff. "What's this? I'll throw it away." Introducing new things is really hard because of this ossification. Things just get stuck in time, basically, because all these boxes upgrade a lot slower than the edges. You upgrade your browser; it happens automatically, basically daily, or weekly at least.
Even servers actually upgrade with some regularity, not nearly as often as browsers, but still. The middle, however, is stuck in time. Introducing new things is really hard. So just to illustrate what I said: this is the internet, lots of boxes. That's the server we want to access, and that's me, and we go through the internet via a lot of boxes. Those are the middle-boxes, and they are the ones with this ossification effect. Due to this ossification, we never see HTTP/2 done in clear text, for example; we need to do it over HTTPS. Part of the reason is that changing HTTP, speaking a new protocol over TCP port 80, will break, because a lot of boxes "know" that port 80 means HTTP/1.1: "we know HTTP/1.1, right? We can improve the traffic and fiddle a little bit with headers and stuff." So if we change the protocol dramatically, a lot of these boxes will just damage the traffic, which really makes it hard to do HTTP/2 in clear text over the internet. Another fun little thing: even if we go down a layer in the protocol stack and look at fixing TCP, one of these great inventions is TFO, TCP Fast Open. It's meant to reduce latency in the TCP handshake, so you can send data earlier during the handshake. A great idea. But there are a lot of boxes out there that know how to identify a TCP header, and there are some bits in there that are normally zero; nobody uses those except in the TFO case. So they throw those packets away, which has the complete opposite effect for TFO, the fast open: it turns out you actually have to resend that packet after a while because it vanishes. Since we're in the Mozilla room: we fought with TFO in Firefox for a long time until we basically just gave up. The times it works are so rare, and it so often actually slows down the handshake, that, no way.
TFO: good idea, can't be deployed. It also has this other little thing: TFO being a TCP change, and TCP being something everybody runs in the kernel, the standard has to be set, implemented in code, and then trickle down into Linux distros and into the kernels running on all these servers, which is also a very, very slow process. It takes a long time. TFO was standardized many years ago; I think it took about five to seven years until servers actually started to support it, and Windows 10 is the first Windows version that supported it. So it takes a long time until it happens, and then in the end we couldn't really use it. Annoying. Another thing that is very similar: TCP and UDP are the two transport protocols in the IP stack, right? You could imagine creating a replacement for them, like SCTP. But again: no, that won't happen, because all these boxes out there know that TCP and UDP are the only protocols we care about. They will throw away basically any other protocol, so you cannot easily introduce a new one. These two are here to stay; we will use these. Well, we can use ICMP and some others to some extent too, but these are the basic transport protocols. This of course makes it really hard to innovate and change things, because all of these boxes make it really hard for us. Unless we encrypt. If we encrypt the traffic, nobody can inspect it. We hide it from them. They will just pass it through; they can't "help" or "improve" the traffic, because it's just random gibberish to them. Excellent. So that is what we're doing. In spite of this ossification, we want to improve, we want to change the world, right? We want to make things better, so we need to do it in spite of this weird situation we're in.
And that is what QUIC is trying to do, or aiming to do. It is a new transport protocol, just what I said we can't do. This is that, but I'll explain why, or how. First I'll just mention that QUIC is not an acronym; it's the name. It doesn't mean anything, whatever you read. It had a meaning once, but that's been removed. So, as everyone remembers from several years back, this was basically how HTTP/2 was made: it came from experiments and experience that Google had with SPDY, taken to the IETF, and out came HTTP/2, with HTTP/2 being very similar to SPDY. This time we're doing basically the same sort of operation. Google spearheaded it with their version of QUIC, experimented on the internet, and they started a long time ago, even before HTTP/2 shipped. They proved that sending HTTP/2 frames over UDP over the internet actually works and is deployable. They have a fairly widely used client and some popular web services, which some of you might have tried. So they could really work it out and prove to the world that it actually works to send HTTP/2 over UDP. It works, it actually improves things for users, and it helps in a lot of cases. So they took their protocol, called QUIC (to complicate things a lot here, they made Google QUIC), to the IETF and said: let's make it a standard. Which I think is a commendable thing to do, and the right thing to do. And yes, the IETF then created the QUIC working group in 2016, after the HTTP/2 release, as you see. And then basically the IETF said: this is all fine, but sending HTTP/2 frames over UDP like this is a very HTTP-specific use case. Let's make it a transport protocol instead of just HTTP over UDP. So they decided to make a transport protocol and an application-level protocol on top, not munched together like Google did. They separated them.
QUIC, since then, has grown into the transport protocol, with an application-layer protocol on top of it, HTTP/3. The name HTTP/3 wasn't set until last November; before that it was only called "HTTP over QUIC", but it's basically the same thing. So Google's QUIC is something different: taken into the IETF, remodeled, and something completely new came out the other end. In my talk here I'm going to focus on IETF QUIC; that's the real QUIC, the QUIC we're going to use in the future. The Google one is going to be left where it is, not something for the future to bother much about, so I'm not going to focus on the details of that. So: QUIC, a new transport protocol. When you do a new transport protocol, why not fix the TCP head-of-line blocking problem? I'll explain how in a little while. I mentioned the three-way handshake in TCP, with the TLS handshakes added on top of that. We can fix that, right? Make sure that we get much better latency. And we can fix the TFO problem, sending data earlier in the handshake, when we redo this: when we make a new protocol, we can design in the early-data support from the start. We can even make it better than TFO, so we can send a bigger chunk of data than TFO can. And we can, of course, add more encryption, always more bits, and reveal fewer details of your connection to the middle-boxes and to anyone snooping on your traffic. That's both for your privacy and security, but also to reduce ossification: the boxes won't see your traffic, so they can't draw any wrong conclusions about what's in there. So this, hopefully, lays a pretty good foundation for future development. The hope, and I think a lot of people actually believe in it too, is that when we ship this, we can keep developing QUIC in the future. It won't get stuck as easily.
So hopefully there will be a QUIC v2 even within a few years from now, thanks to this being a good foundation, and there's even a version negotiation mechanism, so you can actually negotiate another version of QUIC fairly easily. To make this transport protocol, then, since I said we can't introduce new transport protocols, we build it on top of UDP instead of replacing TCP or UDP. We leave TCP and UDP alone; they can stay as they are, we don't have to touch them. We instead use UDP as if it were IP, basically: it's just transporting datagrams. We implement a reliable transport protocol in user space on top of UDP. Basically a TCP-like thing on top of it, a little bit like TCP and TLS done by yourself. But I want to emphasize that QUIC is on top of UDP, and UDP isn't reliable; you all know that. When I tell people it's done over UDP, they say "that's not reliable, there's no flow control and congestion control and things like that". And no, that's not available in UDP, but QUIC is not UDP. QUIC is a transport protocol on top of UDP, so all the resending, flow control and everything is done on top of UDP. The transport protocol, QUIC, then adds streams in the actual transport protocol. Again, SCTP had streams, and it's a little bit like how SSH works. It's similar to how HTTP/2 solves it, but this is in the transport protocol, not in the application-level protocol, which might not make a big difference for HTTP, but it will for other application protocols. So QUIC provides streams in the transport protocol, similar to HTTP/2: you can do many parallel streams within one connection. And in the QUIC case, they are independent. So you can truly lose a packet on your connection, and only the streams affected by that particular packet have to wait. The other streams can continue.
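That per-stream independence can be sketched with a toy loss model. This is my own simplification for illustration; a real QUIC stack tracks packet numbers, acknowledgments and flow control.

```python
# Each packet carries data for exactly one stream in this model.
packets = [
    {"stream": 1, "data": "a"},
    {"stream": 2, "data": "x"},
    {"stream": 1, "data": "b"},  # pretend this packet is lost
    {"stream": 2, "data": "y"},
]
lost = {2}  # indexes of lost packets

delivered = {1: "", 2: ""}
stalled = set()
for i, pkt in enumerate(packets):
    if i in lost:
        stalled.add(pkt["stream"])       # only this stream must wait
    elif pkt["stream"] not in stalled:
        delivered[pkt["stream"]] += pkt["data"]

print(delivered)  # stream 2 is complete; stream 1 waits for the resend
```

Under TCP the same loss would stall both streams, because the single byte stream cannot deliver anything past the hole.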
You can lose one packet, and 99 streams can continue while the affected stream waits for that lost packet to be resent before it can go on. Which is sort of magic, and it introduces new fun things. Just to illustrate it, I like to use my chain illustration. In the TCP case, when you want to send many different streams, here's a green stream and a red stream, sent over one single TCP connection. If you lose one of the links, say a red link is gone, the green stream cannot continue either, because the chain is broken. But when doing it with QUIC, they're independent: if one of the blue links goes away, the yellow chain can still go on without any problems. And of course, this being a transport-layer protocol, we do application-layer stuff on top of QUIC. Right now there's only HTTP, but the application layer gets streams for free, because they're done in the transport protocol, and it could be any protocol. When the protocol was taken into the IETF, pretty much one of the conditions was that it should be made to carry other protocols than just HTTP. DNS was one of the protocols mentioned early on. It hasn't been mentioned much since, because I think very early on the group decided it's too much of a job to take on a lot of protocols at the same time. The emphasis has been: let's get QUIC and HTTP done first, and consider other protocols after these ship. Once they ship, I'm sure others will join in and do other fun protocols on top of QUIC. So HTTP/3 is HTTP over QUIC. And just to emphasize: this is changing HTTP again, but HTTP remains the same; well, the same but not the same, right? That's me and that's the server, and we still do requests, like we've always done.
There's a method in there, you know, the verb: GET, POST, PUT. And there's a path, and there are headers in the request, and there's a body in the request if we do a POST or a PUT and such, exactly like before. Most of us will just think of HTTP like this. And there's a response, same as always: a response code, headers, and a body, like there's always been, and this is going to remain. Most of us will just stick to that, and we won't notice or care about any differences at all. But underneath: in HTTP/1, the actual protocol part is ASCII-based, over TCP. In HTTP/2 we changed that to a binary protocol with multiplexing, over TCP. With HTTP/3 we go back to a somewhat simpler framing, because now the streams are provided by QUIC, and of course it's all binary. Looking at the stacks side by side: the regular old HTTP/2 stack is IP, TCP, TLS and HTTP/2. Very simple; that's how we do it. Now we're introducing QUIC instead: since we can't introduce a new transport protocol, we do everything over UDP, we add QUIC on top of that, we use TLS 1.3 for encryption, and then we add HTTP/3 on top of that. That's HTTP/3 done over QUIC; QUIC uses TLS 1.3 internally. I'll get back to the TLS situation soon. And comparing HTTP/2 versus HTTP/3, what's the difference really, feature-wise, functionality-wise? They're very similar. With HTTP/3, there's no clear text; you can never speak without encryption. There are independent streams in HTTP/3, so when the server delivers images to your browser, they can actually arrive at the client in a different order than the server sent them, which is going to be fun.
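That stability of the semantics, with only the wire format changing underneath, can be shown concretely. The HTTP/1.1 serialization below is the real ASCII format; the dictionary layout is just my own illustration.

```python
# The same logical request in every HTTP version: method, path,
# headers, optional body.
request = {
    "method": "GET",
    "path": "/index.html",
    "headers": {"host": "example.com", "accept": "*/*"},
    "body": None,
}

# HTTP/1.1 puts this on the wire as ASCII text. HTTP/2 and HTTP/3
# carry the very same fields as binary frames with compressed headers.
h1_wire = (f"{request['method']} {request['path']} HTTP/1.1\r\n"
           + "".join(f"{k}: {v}\r\n" for k, v in request["headers"].items())
           + "\r\n")
print(h1_wire)
```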
But still, since the streams are independent, they can actually move independently of each other over the network. And because of that independence, HTTP/3 has a new header compression format (called QPACK), because the HTTP/2 header compression format relied on the streams being delivered in order, and now they're out of order. There's also server push, better early data, and a much faster handshake: a 0-RTT handshake in QUIC and HTTP/3, which basically means that if you have talked to the server before, you can set up a new connection without any handshake latency at all. Well, one-way latency. So, okay, is this good or bad? How is this faster? It's a bit hard to say right now, because (you don't know this yet, since I put my slides in this order) HTTP/3 hasn't really been deployed or used much yet, so there aren't a lot of numbers on how HTTP/3 actually performs in the wild. So I'm using old numbers here based on Google QUIC, which, as I mentioned before, is a different protocol, but it's HTTP done over UDP, so the basic fundamentals are the same, just implemented differently. Looking at those numbers: as before, this is another protocol improvement that really, really improves the situation for those who had the worst situation to begin with. If you are in the 99th percentile of the internet, you're probably in a really sad position, but QUIC improves things a lot for you. And apparently a lot less buffering on YouTube. They also showed that you can take advantage of the fast handshakes very often, which was a concern: is this 0-RTT really a viable idea? But yes, a lot of connections can actually be set up again very quickly. And possibly a three percent improvement on the average search page load, which isn't that much, but I guess it's a small page on average.
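The handshake savings behind those numbers can be summarized as rough round-trip counts before the client can send its first HTTP request. These are idealized figures of my own for illustration; exact counts depend on TLS version, session resumption and implementation details.

```python
# Round trips spent on connection setup before the first request.
round_trips = {
    "TCP + TLS 1.2": 1 + 2,       # TCP handshake, then two TLS round trips
    "TCP + TLS 1.3": 1 + 1,       # TLS 1.3 handshakes in one round trip
    "QUIC, first contact": 1,     # transport and crypto combined
    "QUIC, 0-RTT resumption": 0,  # early data rides in the first flight
}
for setup, rtts in sorted(round_trips.items(), key=lambda kv: kv[1]):
    print(f"{rtts} RTT(s): {setup}")
```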
Okay, that is QUIC, that is HTTP/3, and we have a world with https:// URLs everywhere, right? In the beginning of HTTP/2 there was actually a discussion about whether we should make a new URL scheme for HTTP/2. Pretty soon it was more or less agreed that no, we can't change the world's URLs: we have https:// in quite a lot of places, they have to remain like that, they have to keep functioning, and we have to work with that. So we have to design a way to upgrade from whatever https:// implies into the protocol we actually want to talk. But HTTPS is based on TCP, at least it has been, right? TCP port 443, that's where we connect when we have an HTTPS URL. Or is it? This is an area that hasn't really been settled yet, but I'll still explain how the specification says we are going to upgrade to HTTP/3: by using an already existing header called Alt-Svc, alternative service, which basically says "use this server over here, talk this protocol, it's the same as me". It's an already established header; we introduced it years ago, basically for HTTP/2. So it says this origin is also available on this server and this port with this protocol, which could of course be the same server, but you could say: access this origin with this other protocol, and do so for a week or a month or a year or whatever; there's an expiry time. You could do it for a minute, you could do it for a week. Ideally I would hope we don't do it for minutes, but still. And there is also, no, I'll save that for the problems. Okay, I'll take it here instead. So will HTTP/3 deliver? Will this actually work? Can we do this? Will anyone get any benefits from this? And here start some of the problems. There are some challenges with doing it like this. I said we shouldn't introduce new things, we should build on UDP. But that is also new, right?
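The Alt-Svc mechanism just described looks something like this on the wire. The header value is the real syntax from RFC 7838 ("h3" being the HTTP/3 ALPN token); the tiny parser is my own sketch and handles only this single, simple case.

```python
# A server answering on TCP port 443 advertises that the same origin
# is also reachable over HTTP/3 on UDP port 443, for one day.
alt_svc = 'h3=":443"; ma=86400'

def parse_alt_svc(value):
    """Parse a single, simple Alt-Svc entry (illustrative only)."""
    entry, _, ma_part = value.partition("; ")
    protocol, _, authority = entry.partition("=")
    max_age = int(ma_part.split("=")[1]) if ma_part else 24 * 3600
    return {"protocol": protocol,
            "authority": authority.strip('"'),
            "max_age": max_age}

print(parse_alt_svc(alt_svc))
```

A client that sees this can remember it for the advertised number of seconds and try the alternative endpoint on subsequent requests.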
We haven't really had internet-scale, wide, high-speed transfers over UDP before. A lot of data centers, organizations and networks will just throw away traffic when there's too much UDP. So somewhere around three to seven percent of users, I guess it depends on who you ask, will just never be able to set up a QUIC connection. That's still quite a large number, right? So all clients talking HTTP/3 going forward will have to have a fallback to HTTP/2 or HTTP/1 for those three to seven percent of cases where the QUIC connection can't be established. And it has this silly property that whether the UDP packets get blocked or thrown away depends on your network, not on the server or your client. So you shut down your laptop at home, bring it to work, and suddenly when you start Firefox again it can't talk QUIC anymore, because your work network is dropping it. So we're looking forward to a great new world where we're going to have to race TCP connections against QUIC connections to make sure we get the best one in all cases. A bit of an annoyance, I think, but that's the reality. And if HTTP/3 is actually a good thing, and I say "if" because I don't think it's been proven yet, I think this will improve over time, because all these organizations will want to help their users get a better internet, so they will have an incentive to fix the problems over time. QUIC is also awfully CPU intensive. As a client it might not matter much, right? You use a little more CPU when you download stuff, and that's your browser. But on the server end, we're talking more than twice the CPU for the same bandwidth, which for the server side is a lot more CPU. I think it's two to three times the CPU right now.
So this is, I would say, a major problem for server implementers deploying HTTP/3 in the short term. I'm not really a server guy, so I'm not deep into all of this, but this is partly due to a worse hardware-offload situation: TCP and TLS have been around for a long time, so we have optimized TCP stacks and optimized hardware offload for all the crypto stuff. Now we're changing all these protocols, so the offloading story is really poor. I think this will certainly improve over time, as we get more hardware offload and more optimized software, because, interestingly enough, UDP is really slow in Linux. We never really had to work on UDP performance, because we didn't use UDP like this before, so it didn't matter. But now it turns out that TCP is much faster than UDP, which I find a bit ironic, since UDP is so much simpler; it should be the faster one. So there's going to be more work to optimize UDP as well, to make sure UDP delivers as fast and smoothly as possible. It's going to get better over time; everything gets better over time, of course. And then, a funny TLS layer. TLS records were designed to work on top of TCP, and now this is a new transport protocol. It's not done over TCP; it's done over UDP, and it's basically its own transport layer. So how do you use TLS in your own transport layer? Well, the QUIC working group decided not to use TLS records like TLS over TCP does, because you don't have to do it like that. Instead, you extract the TLS messages and transport those messages over QUIC. But there's not a single SSL library out there with APIs for this. Well, now there are, because some of the people working on QUIC are also working on SSL libraries.
So if you're like Mozilla, the NSS library supports it, and BoringSSL from Google supports it, and a few other less widely used libraries also support it. But, for example, the fairly popular library OpenSSL hasn't even started implementing an API for this; they're basically waiting for the specification to be done before they start working on it. The TLS messages are one part, and then QUIC also needs other secrets from the TLS layer that OpenSSL, for example, doesn't provide an API for. So we're in a funny situation here. I don't know if you remember, but HTTP/2 deployment was severely held back for a long time because, when we shipped the HTTP/2 spec, all the server operators of the world said: okay, how do we enable this? Well, you need this TLS extension called ALPN. How do we get that? It was a standard, and it existed in OpenSSL from a certain version, but the entire world was stuck on an older OpenSSL version. So it took a long time until servers upgraded to, I think it was, OpenSSL 1.0.2. It was a real hurdle for deployment, and now we're in a situation where OpenSSL doesn't even have the code yet. So we're not even close to the same level of the problem we had before; we haven't even introduced the problem yet. So yeah, we're a bit behind here. And some people are also implementing QUIC libraries against a number of different TLS implementations, which makes everything even messier. So that's a mess. Right now in curl, for example, I have to single out one of the libraries that has this API and build the entire thing with that. And if I want to build with OpenSSL, I have to build with my own locally patched version of OpenSSL to be able to speak QUIC, which is a bit of an annoying situation. Also, all QUIC stacks are user-land.
I mean, they don't have to be, but they are, and I guess they will basically remain so. This is a challenge in the more traditional sense: different applications will link to different libraries, so there will be a mishmash of different versions, and we'll have a fun future of debugging different applications behaving differently over QUIC. And there's no standard QUIC API either. If you want to speak QUIC, you pretty much have to get married to one of these APIs and use it, and it's not going to be that easy to change. In curl I will support two different QUIC stacks to begin with. So there are some challenges, and there is also a slight lack of tooling. Well, you can actually use Wireshark already now to monitor most of QUIC, so sure, Wireshark is on the ball. But we've debugged and inspected TCP for a very, very long time, you know, window sizes, sequence numbers, everything, and this all changes a little bit. So it's going to take a while, I think. The spec is going to ship in July; that's the plan. I'm not sure it will hold, but that's the plan. Okay, so if the standard ships in July, what's the implementation situation right now? When can we try this out? When will Firefox run it? There are a lot of implementations, especially QUIC implementations; not many HTTP/3 implementations, which one might argue is a bit worrying, since we're going to ship in July and we can't even do interop tests with HTTP/3 yet. At this point in time with HTTP/2 we had a lot of implementations; we could already run them on the internet and try everything out. We're not there with QUIC and HTTP/3. All of the companies I mentioned in the earlier slides have their own implementations. Mozilla has one, Google has one; a lot of these companies have their own.
I think there are like 15 or 20 different implementations. There hasn't been a single browser release yet with HTTP/3 enabled. Unfortunately, I don't have any news about when Firefox support is coming either. I know that Patrick McManus has shown Firefox running with HTTP/3, well, at least with QUIC support, so I know he's been working on it, though he's not at Mozilla anymore. And it says March here because Google has said they will probably have HTTP/3 support in Chrome in March; I guess that's some sort of developer version of Chrome. And the popular open source servers, none of them have even said a word about HTTP/3, so I figure they're also not really there right now. And curl doesn't support HTTP/3 either, but I'm hoping it will soon. Again, HTTP/3 implementations are really behind, and there aren't even many HTTP/3 libraries to pick from, so I have a bit of a chicken-and-egg problem here. But I'm hoping to get there with curl within a month or so, maybe. So then I took out my crystal ball and looked at the future, and I'd say it will take some time until HTTP/3 is deployed. I think it will grow more slowly because of all these problems, and it's not as big a gain as HTTP/2 was, so maybe people will just stick with HTTP/2 for a while. But QUIC is here for the long term; it's going to be the protocol for the future. So maybe it doesn't matter if HTTP/3 doesn't get deployed immediately; it'll get there over time. In the future we're going to see more things in QUIC. There's a huge number of issues marked "work on this after V1 is released". So after V1 is released this summer, I'm sure a lot of people will be working on new things for QUIC V2. I expect QUIC V2 to come within years, and there will be more application protocols that want to implement themselves over QUIC as a transport.
Okay, so now I'm just going to repeat what I've said, in case you fell asleep. It is coming. It is always encrypted. It is basically HTTP/2 feature-wise, but done over QUIC instead. There are a lot of challenges; I think we will overcome them, but there's certainly more work left before summer. Will it happen? Yeah, maybe, sort of, perhaps. I actually think it will, but we'll see at what speed. I wrote a little document about this called "HTTP/3 explained" if you want to read more about what I've just explained to you. That's it for me, thank you.

Thank you, Daniel. So, any questions? Okay.

Why is this so CPU hungry?

Well, I think a large part of that comes from the lack of the hardware offloading that we do have for TCP and TLS. So I think a lot of it will be fixed with improved hardware going forward. Okay, any other questions for Daniel? Okay, yes.

In your diagrams you call it TLS 1.3 inside QUIC, when you actually throw away most of TLS 1.3 and just use bits and pieces of it.

Yeah, exactly, that's right.

And then when there's a vulnerability there, everybody will turn off QUIC and go back to TLS, right?

Well, maybe. I mean, everyone is going to implement HTTP fallbacks anyway, so yeah.

Because it seems to me that by merging together your encryption and application layers, you're just opening yourself up to future insecurities in your new protocol.

Maybe, but we're all talking TLS here anyway. We're replacing TCP plus TLS with QUIC, and I'm not sure TCP with TLS is better security-wise than QUIC is.

Hi there, you mentioned multipath as one of the potential future features. Do you mind elaborating on that a little bit? As a user, might I see that as being able to have Wi-Fi and 3G both downloading bits from the same website?
Yeah, exactly. Multipath is basically about setting up two different paths over the network. It could use different interfaces and transfer data over both at the same time, or it could even use the same interface but different paths through the network. Multipath TCP already exists, so multipath is not a new concept; it's just one that hasn't been implemented in QUIC V1 and has been deferred to some later QUIC version. So we can't really say anything about how it will turn out, because it hasn't been decided yet; we'll just have to wait and see. I can mention that QUIC already has a connection ID, and a connection is not tied to the traditional TCP tuple of IP addresses and ports. So QUIC already has a really nice way to move a transfer between interfaces in a computer, for example, without the magic tricks we have to do with TCP, because TCP is stuck to the IP address.

Any other question? Okay.

First of all, thanks to you, and thanks to the people at CERN for the first version of HTTP. There is a fundamental change here in the approach to transferring information, from TCP towards broadcast. Broadcast is mainly known as television. Of course there is YouTube, and of course there are ever-increasing volumes of data in images and streams, but we have to remember that when you don't pay to access that kind of data, going to YouTube, to Google, to all the mainstream players, you are the product.
Meaning the information you transfer by requesting that content, the profiles they build from you, from your daily moves, from what you do when you ask for information. And when you move privacy and security from the lower layers, the equipment layers, up into user space, to you, then you are the added value of the network.

Well, you're not changing that nature by changing the protocol here. You're going to be the same product either way. Using HTTP/2 or HTTP/3 isn't going to change that. You're still the same person, using everything the same way, delivering the same information to the other endpoint. What it does change is the amount of information anyone in between can see, so you're actually going to reveal a little bit less to the network.

It depends how you see the network. If you see the network as peer-to-peer, or if you see it as infrastructure with a massive concentration of information at the main players...

Sure, but that's not something QUIC changes. That's a different dimension, what you do with your connections. If everyone sets everything up towards one single operator in the world, that operator will know everything. But that's true no matter how you make that connection. You could use pigeons, too. What is your question?

The question is whether we put the whole internet at risk by going towards broadcast.

Okay, another question? Yes.

Hi. Is there a plan to have QUIC in kernel land? Is that possible?

Well, surely it's possible, but I haven't seen anyone with that plan. I don't think anyone is going there right now, at least.

Thank you. So, with encryption and everything, that means the transport layer needs to have access to certificates and such. So there are two questions I have.
One is, what if I want to use different certificates for different sorts of traffic, because I might want to use different keys? And secondly, what if there are no certificates at the endpoints? How does that work?

Well, you end up in exactly the same situation as with TLS over TCP. You have the same certificate situation as you already do with HTTPS. Sure, as a developer you can choose to ignore certificate problems, or not. This doesn't really introduce any new problems; we have those problems already.

Any other questions? Yes.

Hello, thank you for introducing QUIC. Now, when I want to play with it, what would be the best libraries to start with? Which implementations are you betting on?

If you want to play with QUIC, I would really recommend finding the QUIC implementations wiki page, because it's a list of all the QUIC implementations that exist today. It's on the QUIC working group's pages; you can follow the links there. With curl, I'm using two different QUIC libraries: the quiche library from Cloudflare, and the ngtcp2 library, which is from the same team that made nghttp2. So those are the ones I use, but there are many others. The Mozilla one is another QUIC implementation; it uses NSS for crypto. So you can pick your own flavor; they're available in many different languages, for different platforms and environments. You can go there and play around already today.

Any other questions? Are you sure? Okay. Thank you so much, Daniel. Thank you very much.