This is the third time I'm here in this room, even if we've had a different room every time. I've talked about HTTP and HTTP/2 here before, and I'm here again to talk about HTTP/2, now from this angle: we shipped HTTP/2, you all run HTTP/2, so how is HTTP/2 doing? Did it really fulfill its promises? Does it work? And where it doesn't work, what should we do about it?

As a short recap: the web has changed. The number of objects per page has grown, and so has the amount of data an average web page transfers, and the trend keeps going in that direction: more data, more objects, more requests. And a lot of them, not all, but a lot, come from the same domain. These are basically just facts about the web today: a lot of objects, a lot of data, and many of those objects on the same site.

On top of that we had the head-of-line blocking problem in HTTP/1: you have to stand in one of a limited number of lines, and you don't know how fast your line is. You know, which line in the supermarket is the fastest? Which one has the trainee at the register, or the slow customer ahead of you? You don't really know. We have that problem in HTTP, and we have it in TCP too.

(Can I speak louder? Yes.)

The solution to that problem in HTTP/1 was this guy: HTTP/2. To fix head-of-line blocking and to deal better with that large number of objects, HTTP/2 introduced multiplexed connections: a lot of separate logical streams, which in basically every case means up to about a hundred, over the same single physical connection. So when you want to send one image, that's a train consisting of a number of frames, and another image is another train with its own frames, and the frames go out frame by frame, interleaved, and both trains go over the wire on the same connection but in different streams. We'll come back to the two trains soon.

That means that when you're the client and the server is the cloud over there, you still ask questions and get answers, because that's how it works: ask, answer, ask, answer. But with HTTP/2 we don't have to wait for the answer before we can ask again. We ask for a resource, and instead of waiting for it to come back we can ask for more resources. Some of those resources might be slow, and it doesn't matter; the responses can come back at different paces, faster or slower, all over the same connection, and the server can send data as soon as it has it. That's much better usage of bandwidth, and of TCP in general. So that's HTTP/2 in three minutes.

Okay, so we made it: RFC 7540 shipped in May 2015, so we're approaching two years since the official RFC. How did it turn out in the real world? Well, basically every server you can imagine now supports HTTP/2 if you just flip some config item somewhere: all the open source ones, all the really big commercial ones. Server side: great. Browser side: all the browsers people actually use support it. There's a tiny, tiny percentage not supporting it, but that's diminishing. Of course, server support plus client support doesn't mean the entire world has switched to HTTP/2, but at Mozilla we have telemetry data that Firefox can collect if you opt in to it, so we can see how Firefox is being used out in the wild among those who have opted in. I should emphasize that this is completely anonymous.
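To make the multiplexing idea concrete, here's a toy sketch of the two "image trains": two logical streams chopped into frames, interleaved over one connection, and reassembled per stream. This is plain illustrative Python, not real HTTP/2 framing; the frame size, stream ids and payloads are all made up.

```python
# Toy illustration (not the real HTTP/2 wire format): two response
# streams are chopped into frames and interleaved over one "connection".
from itertools import zip_longest

def frames(stream_id: int, payload: bytes, frame_size: int = 4):
    """Chop one logical stream's payload into (stream_id, chunk) frames."""
    return [(stream_id, payload[i:i + frame_size])
            for i in range(0, len(payload), frame_size)]

def multiplex(*streams):
    """Round-robin interleave frames from several streams onto one wire."""
    wire = []
    for group in zip_longest(*streams):
        wire.extend(f for f in group if f is not None)
    return wire

img1 = frames(1, b"AAAAAAAA")      # stream 1: first image
img2 = frames(3, b"BBBBBBBBBBBB")  # stream 3: second, larger image

wire = multiplex(img1, img2)
print(wire)

# The receiver reassembles each stream by its stream id:
reassembled = {}
for sid, chunk in wire:
    reassembled[sid] = reassembled.get(sid, b"") + chunk
print(reassembled)
```

The point is that neither stream has to wait for the other to finish: frames from both share the single connection and are sorted back out by stream id on arrival.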
We don't know which hosts and we don't know which users, but we can get a hint from the volume of what is used, and we can see that HTTP/2 is used for 30% of everything that is HTTP. I would say that's a large share. "HTTP" here means all HTTP, both HTTP and HTTPS, and as you might remember we only do HTTP/2 over HTTPS in Firefox, and actually in all browsers, so it's perhaps more sensible to ask how big a share of HTTPS is HTTP/2: that's more than half. So I would say HTTP/2 has caught on pretty well. This is of course measured by volume, and from that you can deduce that the major sites, the ones you use the most, are all on HTTP/2: Google, Facebook, Twitter, Instagram, all those big sites. So by volume it's way over half. If we instead check which sites support it across the wider internet, it's about 12% of the top 10 million sites. That's far from half, far from full adoption I would say, but it's getting there, and among the top sites quite a few are already on it. And if you enable HTTP/2 for your own site today, you will of course see that the majority of your visitors use HTTP/2, because as you saw, the browser support has been there for a long time. Everyone is going to use HTTP/2 if you just let them.

So that's the groundwork: HTTP/2 fixes head-of-line blocking by multiplexing streams over the same physical connection, and it's getting deployed. That's good, right? It makes things better. But did it deliver? Did the internet become a better place? Did we make everyone's browsing, surfing, internet usage better and more enjoyable? That's a tricky question to just answer, but we can look at some metrics to see how HTTP/2 is doing compared to HTTP/1.
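As a back-of-the-envelope aside: because browsers only speak HTTP/2 over HTTPS, the two shares quoted above pin down roughly how much of all traffic is HTTPS. The figures here (30%, and "a bit over half" taken as 55%) are my rounded readings of the talk's approximate numbers, not precise telemetry:

```python
# Back-of-the-envelope check on the telemetry shares quoted above
# (illustrative rounded numbers, not exact figures).
h2_share_of_all = 0.30     # HTTP/2 is ~30% of all HTTP(S) transactions
h2_share_of_https = 0.55   # ...and a bit over half of HTTPS transactions

# Since HTTP/2 only runs over HTTPS in browsers, the two shares imply
# what fraction of all traffic is HTTPS:
https_share_of_all = h2_share_of_all / h2_share_of_https
print(f"implied HTTPS share of all traffic: {https_share_of_all:.0%}")
```

So these two numbers together suggest HTTPS was somewhere around half of all browser traffic at the time, which is consistent with the "more than half of HTTPS is HTTP/2" reading.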
Looking under the hood, it's obvious we should look at some of the more remote corners of the internet. Most of us in this room probably don't visit those corners very often, but people in developing countries do, or people on really bad wireless networks, like various mobile networks. This is the average round-trip time for Firefox users, again collected via telemetry, for clients on mobile and clients on desktop. If we focus on the 95th percentile, those users are really, really far from their servers: almost a second of round-trip time. A second is a really long time, and as I said, HTTP is a lot of back and forth, and every back and forth costs a round trip. So of course we gain a lot by reducing the number of round trips: asking a lot of questions at once and getting the answers back sooner is much, much better, especially for those in the, by some measure, crappy areas of the internet.

We can also see it by measuring other things internally in the networking parts of Firefox. We have a queue for outgoing HTTP requests: we want to send these requests out, but they sit in a queue waiting for a connection, or for the opportunity to be sent, because we're not allowed to open more connections or we're blocked by something else internally. We can measure how long the average request sits there, and there's quite a difference. I cut out part of this table, but if you look again at the worst cases of the internet, the ones who suffered the most, you see a drastic improvement: we're basically no longer waiting to send outgoing requests; we can send them much, much earlier. That 95th percentile is almost a 100x improvement. With fewer round trips and much less waiting, it's a drastically improved experience for anyone on these networks. Again, not many of us are there regularly, so we might not have noticed, but a certain number of people suddenly got a much better internet when they started using HTTP/2.

Another way to look at the same data: how many requests hang in the queue for longer than 100 milliseconds? We want that number to be as small as possible, and with H2 there simply aren't many requests hanging around for very long.

Yet another angle: Hooman at Fastly ran some great tests with browsers in a test network where he induced packet loss, losing packets the way you lose packets in the real world, to make it a realistic simulation. Here's a complicated graph; we don't need to go into the details, but this is timing on the X axis, and we can see H2 and H1 in Firefox and H2 and H1 in Chrome. We'll take a moment to note that Chrome is much slower, and then we move on. In the no-packet-loss scenario, the ideal case for H2, nothing is lost and H2 is just faster overall. Then we go into a really bad network environment: 2% packet loss, which is quite a lot by network standards. This is not your ordinary Wi-Fi at home, you wouldn't want this, but a fair amount of traffic in the world still suffers from it.
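For the curious, the kind of queue-wait metric described a moment ago (95th percentile, share of requests queued over 100 ms) can be sketched like this. The exponential sample data is entirely made up and just stands in for real telemetry:

```python
# Sketch of a queue-wait metric: how long do requests sit in the
# outgoing queue, what's the 95th percentile, and how many exceed
# 100 ms? (Simulated data, not real Firefox telemetry.)
import random

random.seed(42)
# Simulated per-request queue wait times in milliseconds: most requests
# go out quickly, a long tail waits much longer (e.g. for a connection).
waits_ms = [random.expovariate(1 / 20) for _ in range(10_000)]

def percentile(values, pct):
    ordered = sorted(values)
    return ordered[int(pct / 100 * (len(ordered) - 1))]

p95 = percentile(waits_ms, 95)
over_100ms = sum(1 for w in waits_ms if w > 100) / len(waits_ms)
print(f"95th percentile wait: {p95:.1f} ms")
print(f"share of requests queued > 100 ms: {over_100ms:.1%}")
```

The talk's point is about the shape of exactly these two numbers: with H1's limited connections the tail is long, and with H2's multiplexing the queue largely empties out.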
So this is a scenario that still happens, and we need to see how H2 and H1 compare in this nasty environment. Looking at the same colors, we see that H2 performs much worse: Firefox H1 is here, Firefox H2 is there, clearly worse. With packet loss at a rate of about 2%, H2 does not perform as well as H1. Not as fun, right? Not good. So what do we do about that? Or rather, why is that? Why do we have this problem just because of some packet loss?

Well, one thing: with H2 we send everything over one connection, a lot of logical streams over one physical connection, versus H1 with six connections. When you get packet loss with six connections, there's a much bigger chance that some of the connections survive; you have, well, a six times larger chance that a given transfer keeps going without waiting on a lost packet. With one connection, one lost packet means everything is halted until that packet is retransmitted. That introduces a fun head-of-line blocking problem at the TCP level: lose one packet and everything waits. If you're getting 100 images from a website and you lose one packet, all 100 images wait for that packet. Not ideal.

TCP networking school, class one: we have an IP layer, we have TCP, we have TLS, and we speak H2 on top. Remember the two trains? All the different H2 frames are sent over the network like this, and when a packet is lost it takes everything down with it: everything has to stop, because everything depends on the bytes arriving in order, together. So we need to fix the TCP head-of-line blocking problem. We can't have 99 unrelated images blocked because a single packet belonging to one image was lost. So we introduce a non-blocking TCP plus TLS plus H2, right? Easy peasy.
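The difference can be shown with a toy delivery model (heavily simplified: real TCP retransmits and real QUIC buffers out-of-order data, but the blocking behavior is the point). On one ordered byte stream the first lost packet stalls every stream behind it, while with independent streams only the stream that actually lost a packet has to wait:

```python
# Toy model of why one lost packet stalls everything on a single
# ordered TCP connection, while independent streams only stall one.
def deliver_tcp(packets, lost):
    """TCP: bytes are delivered strictly in order, so delivery stops at
    the first missing packet, no matter which stream later packets
    belong to."""
    delivered = []
    for seq, (stream_id, data) in enumerate(packets):
        if seq in lost:
            break  # head-of-line blocking: everything after this waits
        delivered.append((stream_id, data))
    return delivered

def deliver_quic(packets, lost):
    """QUIC-style: only the stream that actually lost a packet waits
    for the retransmit; other streams keep delivering."""
    blocked = {packets[seq][0] for seq in lost}
    return [(sid, data) for seq, (sid, data) in enumerate(packets)
            if seq not in lost and sid not in blocked]

# Packets for two streams interleaved on one connection:
packets = [(1, "a1"), (2, "b1"), (1, "a2"), (2, "b2"), (1, "a3")]
lost = {1}  # packet #1 (stream 2's first packet) is lost in transit

print(deliver_tcp(packets, lost))   # stops after the very first packet
print(deliver_quic(packets, lost))  # stream 1 is completely unaffected
```

With the ordered model, losing stream 2's packet also freezes all of stream 1; with per-stream delivery, stream 1 sails through and only stream 2 waits for the retransmit.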
So we need independent packets, so that we can lose one packet and the rest can continue. We still need streams, so that when we lose one packet, all the frames belonging to other streams can continue and only the affected stream is halted. And then we need to retransmit that lost packet, TCP style. Could this be done with a brand new transport protocol, if TCP isn't good enough and UDP doesn't really retransmit anything? We could invent a new protocol, but no, we can't, because of the way the internet works today. We never manage to introduce new protocols anymore, because there are so many crappy boxes everywhere that just say no. That's been tried many times before: there are plenty of new protocols, but they have a really hard time getting deployed because there's so much junk on the internet blocking them.

And even if we could fix TCP itself to do this: how's your Windows XP doing? Even if we ignore the double-digit percentage of users still on Windows XP, doing things in TCP is a kernel job, and you know the fast pace of kernel TCP development, right? Yeah, no. It would take ages to get anything into TCP. Even if it could work, and we should do that too, it takes a really long time.

So: introducing QUIC. This is basically all of that, over UDP. By doing that we no longer have any TCP head-of-line blocking. Of course we need to implement congestion control ourselves, and we can do it differently than TCP does. In QUIC we also remove certain restrictions we always had in TCP: TCP is tied to the five-tuple, you know, source address, destination address, port numbers and so on. We don't have that in QUIC, so we can move connections across interfaces much more easily. And yes, Google has done this, and it turns out that UDP is not as problematic as we feared.
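A sketch of that last point (simplified: real QUIC negotiates connection IDs and validates a new path before trusting it). A TCP connection is its five-tuple, so a client that changes address becomes a different connection, while a QUIC-style connection is looked up by connection ID and survives the move. All addresses and the ID here are invented for illustration:

```python
# Sketch: a TCP connection is identified by the 5-tuple, so changing
# the client's address kills it; a QUIC connection is identified by a
# connection ID, so it can survive a move from Wi-Fi to cellular.

# TCP-style server state, keyed by the 5-tuple:
tcp_connections = {
    ("10.0.0.5", 54321, "192.0.2.1", 443, "tcp"): {"state": "open"},
}

# QUIC-style server state, keyed by a connection ID:
quic_connections = {0xC0FFEE: {"peer": ("10.0.0.5", 54321)}}

def tcp_lookup(src_ip, src_port, dst_ip, dst_port):
    return tcp_connections.get((src_ip, src_port, dst_ip, dst_port, "tcp"))

def quic_packet_arrives(conn_id, src_addr):
    # Look the connection up by ID and just update the peer address:
    # same connection, new network path.
    conn = quic_connections[conn_id]
    conn["peer"] = src_addr
    return conn

# The client moves from Wi-Fi (10.0.0.5) to cellular (172.16.9.2):
print(tcp_lookup("172.16.9.2", 40001, "192.0.2.1", 443))  # None: lost
conn = quic_packet_arrives(0xC0FFEE, ("172.16.9.2", 40001))
print(conn["peer"])  # same QUIC connection, carried on
```

That address independence is exactly what makes moving a live connection across interfaces feasible.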
Everyone assumed that switching to UDP was going to break half the internet, that it just wasn't going to work, because there are so many boxes out there that won't allow UDP used like this, since we've never used UDP like this before. We use UDP for timing and some other tiny things, and some video stuff, but we haven't done it at this scale. Well, Google proved it works. So, back to networking school: with TCP everything is in one ordered line, and losing one packet makes everything wait for it. Here, if we lose a packet belonging to one train, the other train can continue; we don't have to wait for the rest of the stuff.

Of course, this isn't easily done. The QUIC work Google has deployed means they have run this on the real internet: if you've used Chrome against Google's servers, you have used QUIC for, I don't know, a year already or so. They have proven that it works to deploy a protocol over UDP like this. Now work has started in the IETF, HTTP/2 style, to take that protocol into the IETF and make a standard out of it. There's massive interest, and a lot of things are actually being changed in the protocol, so we'll see what happens. One condition for it being adopted in the IETF is that it will also be made more available as a transport protocol, able to carry more than just H2. There was an interim meeting in Tokyo last week, and there's going to be an IETF QUIC, which is going to be quite different from the Google QUIC even if it follows the same principles, like speaking H2. Google QUIC is going to become IETF QUIC: the same in principle, completely different on the wire and in the specs. Maybe, maybe, maybe we'll see early live tests from some big vendors mid this year; it's a matter of time. And this is not going to be HTTP/3, but it's basically HTTP/3, right? It's the next step, it's just not called HTTP/3. It could perhaps have been a TCP/2, if it becomes a transport protocol for
other things than H2, but it's not. And I like this picture; the Google guys especially appreciate it when I call it TCP/2. No, it's not going to be that. It's just going to be QUIC.

So, a fast roundup: H1 wasn't really optimal. H2 is binary and multiplexed, fixes a lot of these problems, and is getting widely adopted and used everywhere, especially if we count by volume, by the browsers. It makes sites faster; I didn't really say that, but it does. And QUIC is coming, really soon, maybe, and it's basically H2 frames over UDP. That's the summary of everything you need to know. Thank you.

Host: We'll have like five minutes if you want to take some questions. If you have a question or two, we can handle that. Yes, please. And please try to be quiet, the rest of you, if you don't have a question.

Q: Hi. Any support in curl for QUIC?

A: Sorry? Any support in libcurl for QUIC? Will I support QUIC in curl? Not today, no. QUIC is in flux right now: the QUIC that Chrome uses today is still Google QUIC, and Google QUIC is going away, to be replaced by IETF QUIC. So there's really no point in me going ahead with Google QUIC when IETF QUIC is coming, and IETF QUIC is still not really solid. It will come, but not yet.

Q: I'm wondering: UDP doesn't fix fragmentation of packets by servers or anything in between. Has there been any work to test how fragmentation affects UDP, and obviously TCP as well?

A: I'm not sure I'm following the question. In a real-life deployment, browsers are going to race TCP against UDP, and in those few cases where UDP doesn't really work, fall back to TCP.

Q: What happens with old websites that still do things like spriting their images together?

A: There are certain things that you could reconsider doing when
you're switching to H2. Okay, so maybe now we don't need to do that anymore. It's not that easy to just say yes or no, but yeah, maybe. Thank you.

Q: You didn't really talk about it, unfortunately, but Google QUIC uses quite basic forward error correction for packet loss. It's very basic compared to other IETF documents, such as those for RTP. Do you know if there are any plans to use more advanced error correction methods?

A: What I know is that forward error correction is out, completely out now, okay? They did a lot of work on that within the Google effort, and they deemed it not sufficiently good to continue with.

Q: Even on high-latency links?

A: Yeah, because the waste in bandwidth makes it a really hard trade-off. I'm not the one to answer in more detail about the exact reasoning, but I know it's been decided. Thank you.