Okay. Hi, folks. So I'm here today to talk about how all performance can be thought of as front-end mobile performance: a sort of catch-all category and a target for optimizing the experience of every visitor to your sites. It really comes down to the business case, because people don't build websites just because they're passionate about building websites. They build websites because those sites need to serve a purpose. These websites need to acquire visitors. They need to convince visitors. They need to delight visitors. Whether you're trying to sell products or sign people up for volunteer work, it's important that people have a great experience finding you and once they arrive on the site. And increasingly, that means on mobile devices. The latest metrics I've seen from some of our partners show numbers as high as 50% of visitors now arriving from mobile devices. And these mobile devices are a challenging environment, to say the least, because we're working in a constrained situation in terms of bandwidth, processing power, and screen real estate. So I'd like to start off with why it's so important to treat performance and security as a business case. How many people in here have HTTPS deployed on their sites? Okay, that's a good number. That's better than the internet on average. The internet on average is creeping up; I think we crossed 50% in the last year or two, largely thanks to tools like Let's Encrypt and services like Cloudflare that are letting people get HTTPS on their sites more cost-effectively and efficiently than ever before. And it's becoming more and more of a necessity, not just for security reasons in the sense of protecting people's personally identifiable information, but also for actually being able to use the latest features of the web.
If you want to use things like geolocation, send notifications to users, or look at the orientation of a device, those features are increasingly locked behind HTTPS in browsers: partly as a carrot to get people to implement HTTPS, and partly to protect the privacy of users whose sensitive information may be going over these connections. But deploying HTTPS is also a risk for sites, because it needs to be deployed properly to ensure good performance, and that's more important than ever on mobile devices. Also, most of you probably already know this, but Google has been using HTTPS as a ranking signal since 2014. Google's ranking also cares a lot about time to first byte on your pages. Your mobile users are having pages ranked for them by search engines like Google along several criteria now, and one of them is time to first byte: how quickly you deliver the first byte of HTML to the browser. In a way, this is somewhat of an in-the-weeds metric, and I've seen some good arguments from people about why you shouldn't obsess over it, and that's totally true. But you should also think of it as the start of a race: you can't finish a race before you begin it, and you can't start downloading all the other assets of a page until you get that first byte onto the device. Google is pretty sensitive to this; I'll talk about their numbers in a minute. And you can see here how, all other things being equal, ranking drops on average as the performance of pages goes down. But on mobile devices in particular, your users are on mobile connections and already at a huge penalty in terms of getting that first byte over those networks. So in terms of user experience, this is particularly important. And Google has also really raised the stakes this year on how ranking happens for mobile experiences.
When people are searching on mobile devices, Google is now using the simulated time to load the page on a mobile device as a ranking signal. So it's not just getting the first byte: now they're starting to look at when the page first paints, when it first downloads critical image, CSS, and JavaScript assets, and even, for JavaScript, how long critical things take to start executing on a simulated mobile processor. Because as powerful as the processors in our phones are, most mobile devices still try to throttle them down as much as possible for energy reasons. So you're still up against the battery of the phone, even if the processor's maximum performance would be quite good. This affects whether people actually end up on your site. I sometimes like to talk about this as the silent top of the funnel. How many people in here use Google Analytics on their sites? That's like everyone. Google Analytics (or really any analytics package based on a JavaScript stub that pulls tracking data and then marks the user with a cookie) only starts tracking once that JavaScript loads, which is often one of the last things happening on your pages. In fact, if you have Google Analytics deployed properly on your site, you're probably loading it fairly late in the rendering process of the page, like in the footer. That means your pages load a little faster, but it also means the loss is completely invisible if a user doesn't click through to your site, or doesn't wait long enough for the page to render. So you lose those eyeballs, and it's not even going to show up in your tracking results, because they never started getting tracked. So how much does your ranking on a mobile device affect your click-through? Huge, huge amounts. How much does the loading time of your site on a mobile device matter for someone actually waiting around for the page to load? Enormously.
How many people in here have a giant graveyard of tabs in their mobile browser? Like, if you pull up your mobile browser, do you have at least 20? Okay. I sometimes have to go in there and clear those out because I just keep accumulating them, and part of that graveyard of tabs is pages I started to load and then didn't wait around for; I just switched to an app or to another web page. So we can tell, even anecdotally from our own usage of these devices, how much mobile performance matters for delighting users. This is actually a pretty hard target: 2 to 3 seconds for a page load. It's a really hard target on mobile devices because of all those penalties I've mentioned that you're going into the race with. So this is how I like to think of that silent top of the funnel: people have to search for your stuff, they have to find your stuff, they have to actually wait for the page to load, and only then are you finally tracking them with something like Google Analytics at the bottom of that funnel. So I'm going to talk about strategies for optimizing those top two things. The first one is mostly a side effect of optimizing the second one: the same things that delight mobile users are the same things that will get you good rankings in Google now. They're trying to align the rankings more and more with the tangible aspects of user experience, and to align incentives so that great sites get ranked highly, not sites that play tricks on Google. So mostly I'll be talking about the second one. It's also important to understand, going into these sorts of optimizations, what your target goals are, because you can also over-optimize.
I sometimes have people come to me with a list of slow queries or slow modules or slow pages, a top-10 list of slow things, but those may just be the slowest things on the site; that doesn't mean they're slow enough to be worth optimizing. Some sites are actually pretty fast and meet these goals pretty well. So I like to throw out these measures as benchmarks. If your time to first byte is under 500 milliseconds (that is, if you go into your browser, click to load the page, and the HTML in the little waterfall graph starts coming back in less than 500 milliseconds), you're in pretty good shape. 400 or less is even better, but there's not much evidence that getting below 500 is going to do much for you in search ranking or user experience. Time to first paint is a much softer metric because it's hard to measure as precisely. Some browsers measure it as the moment you stop seeing a white screen. Google, for their own search ranking, uses a measure called first meaningful paint: they're looking not just for when at least one element hits the screen, but for when the main body of content, as they've analyzed it, is visible and navigable by the user. If it's a news story, that means they can see the article and scroll through it. Regardless, for the subjective experience of users, hitting under 2.4 seconds for that first paint is important, because that's when people start losing their patience. It's amazing how impatient we are with websites compared to most interactions we have in the world, but the evidence is pretty good that it's about that number, and it falls off very rapidly after that.
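To make those benchmarks concrete, here's a minimal sketch in Python that checks measured timings against the targets from the talk. The threshold values are the ones cited above; the function name and data shapes are just for illustration:

```python
# Performance budgets from the talk, in milliseconds.
BUDGETS = {
    "time_to_first_byte": 500,  # under 500 ms is good shape; ~400 ms is even better
    "first_paint": 2400,        # under 2.4 s, before users start losing patience
}

def check_budgets(measurements):
    """Return, for each known metric, whether the measurement meets its budget."""
    return {
        metric: measurements[metric] <= limit
        for metric, limit in BUDGETS.items()
        if metric in measurements
    }

# A 420 ms TTFB meets the budget; a 2.9 s first paint does not.
results = check_budgets({"time_to_first_byte": 420, "first_paint": 2900})
```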
If you're doing much worse than that, the impact gets pretty exponential: as soon as you hit 5 seconds, 10 seconds, something like that, your drop-off rate is going to be way more than 50% of users. Of course, you've also got to keep stuff online; if it's not there to load, then none of these other things matter. Optimizations that actually break your site are bad optimizations, but don't create unnecessary work for yourself either. I just wanted to put that caveat in there. So, in a world where we now have to deploy HTTPS to sites and we have to optimize the performance of these things, how do we actually put these reluctant partners together? I call them reluctant partners because, used together properly, you'll end up with a better experience than without HTTPS, even on a performance basis; but done incorrectly, it will tank the performance of your site, or at least make it worse. There's a great single-page website, in that 2010 fashion of "is whatever happening, yes or no": the Is TLS Fast Yet? page. It's a neat little site that goes over a lot of the history of why TLS has been slow in the past and what optimizations have happened since. I like to answer "yes, with an asterisk," because it can be fast, but it has to be done properly. We have all the tools necessary today to make it fast for everyone. It ultimately comes down to the number of round trips times the round-trip time. The number of round trips is surprisingly large for loading a web page: it's going to be at least three to four on a modern web stack. And the round-trip time, if you're on something like a mobile device, is just going to be awful. In some ways, it's like you're loading a site from across the ocean every time you access a page, even if it's hosted geographically close to you. So that latter figure can get really high, and it should never be underestimated.
The figures I'm going to talk about will be latency figures based on distance (across continents, across oceans) because those are a little more stable. Much of that can be used to derive equivalent conclusions for mobile experiences as well, because being on a mobile network is basically like accessing a site hosted in North America from Europe or Australia. So what really tanks the HTTPS experience for users when it's not done right? In many cases, it used to be negotiation CPU overhead, and that was something people would dwell on quite a bit. Modern CPUs have a lot more cryptography primitives built into them, including on mobile devices. The negotiation CPU overhead is still relatively high, but the active connection CPU overhead, the actual cost of continuing to communicate over HTTPS once a connection has been negotiated, is basically zero now. The other thing that has gotten knocked out is the two additional round trips versus just loading the page over HTTP. If your latency over your mobile network was something like 50 or 100 milliseconds, that used to add an extra 100 or 200 milliseconds to the time to first byte and the overall page load, just from adding HTTPS. Many of the remaining round trips are being addressed with technologies like QUIC, TLS 1.3, and the TLS 1.2 False Start work. So I'll go into how these stacks apply to different servers, server configurations, and CDN configurations, but they massively affect your mobile users, because mobile users take an outsized hit from latency and overhead. So: old versus modern stacks. How many people here deploy their own servers? Okay, so we still have like 10, 15% or so.
If you're deploying on something even as seemingly fine as Red Hat Enterprise Linux 6, you're actually going to be on something closer to this old stack. You're going to have an extra round trip. You're going to have a round trip for TCP negotiation, which is unavoidable; two round trips for TLS negotiation, half of which is avoidable; and one additional round trip for HTTP. And then the browser is going to make additional connections to the server for all the parallel connections it wants to have, and that only happens after it gets the initial connection set up and knows it needs to pull down those other assets. So that adds another multiple and doubles all of these other latencies, because it's basically another series of connections getting set up in parallel. So you have about eight round trips. Whatever the latency is when I ping that server, I can multiply it by about eight, and that gives me the absolute minimum time to first byte over this sort of connection, even if I had a Varnish box sitting right behind it, ready to serve the page. If you have a more modern stack, something like Red Hat Enterprise Linux 7 or a modern CDN, you reduce it to about three round trips for setting up that connection: one of the TLS round trips goes away, and it uses HTTP/2, which lets the browser aggregate all of these requests and tell the server, "here is everything I need, just pump it back to me over all these channels." It doesn't need to set up additional connections that independently get negotiated. So you can yield incredible benefits for your mobile users just by moving to a more modern stack. There's also kind of a future stack that I hope will come out this year.
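The arithmetic here is simple enough to sketch. This hypothetical Python model just multiplies a measured round-trip time by the round-trip counts described above; the counts are the rough figures from the talk, not exact protocol accounting:

```python
# Approximate round trips needed before the first byte of HTML arrives.
STACK_ROUND_TRIPS = {
    "old":    8,  # TCP + 2x TLS + HTTP, roughly doubled by extra parallel connections
    "modern": 3,  # TCP + 1x TLS + HTTP/2 multiplexing over a single connection
    "future": 2,  # QUIC folds the transport handshake into the crypto handshake
}

def min_ttfb_ms(rtt_ms, stack):
    """Lower bound on time to first byte: round trips times round-trip time."""
    return STACK_ROUND_TRIPS[stack] * rtt_ms

# With a 100 ms mobile RTT, the old stack has an 800 ms floor before any
# server-side work happens at all; a modern stack brings that down to 300 ms.
old_floor = min_ttfb_ms(100, "old")        # 800 ms
modern_floor = min_ttfb_ms(100, "modern")  # 300 ms
```

The takeaway matches the slide: on the old stack the connection setup alone already exceeds the 500 ms time-to-first-byte goal at mobile latencies.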
I'm not necessarily super optimistic anymore about this landing this year, but we may start seeing more standardization around TLS 1.3, HTTP/2, and QUIC. QUIC is a technology that knocks out that initial TCP round trip, so you end up able to make a web request for a page, and potentially even have the assets pushed to you, in two round trips. That's a pretty big benefit when your goal is a sub-500-millisecond time to first byte and you're dealing with latencies around 50 or 100 milliseconds. Once you start adding in CDN models, things get better for users. I actually think it's really hard to deploy HTTPS in a highly performant way now without a CDN, and here's why. That 50 or 100 milliseconds I was talking about on the mobile device, for getting past the mobile network and onto the main internet, gets even worse once you start adding in all the backbone connections back to an origin. Let's say I'm in Texas accessing something hosted in Virginia: I might have a round-trip latency of around 30 milliseconds. If I'm a European mobile user accessing a site hosted in Virginia, my round-trip latency might be closer to 150 or 200 milliseconds. We can't change the mobile network itself, but we can make it so that as soon as traffic exits the mobile network, it reaches the content and completes negotiation as quickly as possible. What putting HTTPS on the CDN does is pull the negotiation, and almost all the other operations, closer to the user. Say I'm in Paris on a mobile device, and my mobile latency is 50 milliseconds for going through the cell tower, through my mobile ISP's router, and onto the main internet.
But if I can hit a point of presence on a CDN and negotiate HTTPS within Paris, or at least France, or at least continental Europe, then I can cut out probably half the time I would otherwise experience if I also had to make the trip across the Atlantic. So it's all about these round trips, both the time they take and the number of them, and pulling things closer to the user is really good. Now, if I assume that Drupal is rendering your pages in 200 milliseconds, which is really, really aggressive, I can show you some models of how you can hit these performance goals with different deployment models, and how certain deployment models intrinsically break them. If you're on the same continent and you have just a very basic server stack, that sort of RHEL 6 stack, it's almost impossible to hit these goals on a modern network with modern mobile devices. If you hit your page cache in, say, Drupal or Varnish, you can sneak in under it, if everything goes perfectly. If you start missing your cache, you get this extra time here, and the reason the time to first byte goes up by 245 is the extra negotiation; I think that's from actually setting up the TCP connection back to origin. But if you have a modern stack and you have these points of presence around the world (this is based on a desktop device here, because mobile network numbers are just all over the place), you can have a time to first byte that is sub-50 milliseconds if everything goes perfectly, and you've bought yourself enough time to have Drupal render pages without blowing your performance budget. This means that you can customize pages.
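These deployment scenarios can be sketched as a simple budget calculation. This is an illustrative model with plausible but made-up RTT numbers, not measurement data from the slides: it adds the connection-setup floor to the talk's assumed 200 ms Drupal render time on a cache miss, and asks whether the 500 ms time-to-first-byte goal survives:

```python
TTFB_GOAL_MS = 500
RENDER_MS = 200  # the talk's (aggressive) assumption for Drupal rendering a page

def ttfb_ms(rtt_ms, round_trips, cache_hit):
    """Connection setup cost, plus render time when the page cache misses."""
    setup = round_trips * rtt_ms
    return setup if cache_hit else setup + RENDER_MS

# Old stack across the Atlantic (~75 ms RTT, ~8 round trips): the budget is
# blown on connection setup alone, even on a perfect cache hit.
old_transatlantic = ttfb_ms(75, 8, cache_hit=True)   # 600 ms
# Modern stack with a nearby CDN POP (~15 ms RTT, ~3 round trips): even a
# cache miss that makes Drupal render the page stays inside the budget.
modern_nearby_pop = ttfb_ms(15, 3, cache_hit=False)  # 245 ms
```

The model ignores the POP-to-origin fetch on a miss, so treat it as a lower bound; the point is how decisively the setup floor dominates the budget.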
This means you can provide compelling logged-in user experiences without breaking this budget. It gets much, much worse as you get more and more distant, and this case gets closer to what a mobile user experiences, where going across an ocean is basically the added latency of accessing stuff on a mobile device. Your budget gets busted before Drupal even starts rendering a page with the old stack on this sort of setup, because you're spending so much time going back and forth between the device and your server before it even really requests the page. It's over 500 milliseconds before they can even hit a Varnish cache at a single location. If you have a modern stack, you buy yourself time. The same holds, even more so, if you're talking APAC to North America, which could be something more like a 2G network experience. If you care about users in developing countries, if you care about people accessing your site globally, these things matter. It's almost impossible to even hit your time to first paint at a good level on the old stack with this setup, because you're spending about a second and a half before you can even start getting the first byte of content back to the browser, even hitting something like a Varnish cache. And as with the other cases, if you use a modern stack with a CDN, you buy yourself enough time; you can even have Drupal render a page, because it has enough headroom available, because it's negotiating with something in the neighborhood. In this case, the baseline time to first byte for a mobile device would be going up versus the desktop experience, but you would still be able to hit your performance goals. So it's pretty dramatic.
This first bar on the bottom, just above the zero, is the 500-millisecond mark, just in case that's a little too gray on the screen. There's only one way to consistently meet performance goals, and that's deploying your stuff on a modern stack. If you're not, all the optimizations you could possibly do on the back end are not going to make up for a bad deployment strategy here. So we've been talking a whole bunch about getting that first byte of the page back. How do users actually experience the loading of the page once they get it back? How do we hit that 2.4-second goal? This is really what delighting users is about, because it takes into account time to first byte as well as size, bandwidth, and CPU time. One of the classic deployment strategies to get some optimization in was moving static assets to a CDN. That's almost the 90s or early-aughts way of working with a CDN: you take your images, your JavaScript, your CSS, and you put them on a CDN. If someone looks at the source of the page, you see these weird Akamai sub-domains and other things on the site; you know what it looks like. That's a good boost to performance, but it doesn't get you as far as you really need, because it still means your users have to go all the way back to origin to negotiate the initial TLS and get the initial page. It's sort of like opening a corner store in everyone's neighborhood with the CDN, but then forcing them to go downtown to get their shopping list approved before they can go back to the corner store. Because the browser has no idea what to download for those other assets until it actually gets that main page.
So all of the performance constraints I was talking about before cannot be solved by simply moving your static assets to a CDN. You really need to move the page itself. And that has some multiplying benefits, because when you start working with technologies like HTTP/2, you can also push assets down to phones, which means they don't have to make a round trip to get those assets. You can only do that over the same connection. It's impossible to use a separate CDN from your page loads and still push assets to the mobile device, because the CDN won't know what assets to push; it doesn't know what page is getting loaded. If the page is loading from the CDN itself, the CDN can know exactly which CSS, JavaScript, et cetera, to push down. On a platform like Pantheon, that's built in with the integration we have with Fastly, and there are other CDNs that support it as well. There's a neat module on Drupal.org that will cause Drupal to put the right instructions in headers so the CDN pushes these assets down to the device. And if the device already has an asset, it will say, "thanks, but I already have that," so it won't be too bad for users who already have these things. But since so many page loads, especially on mobile devices, are first-page loads, with subsequent pages a bonus, many people are coming to your pages without having these assets cached. So optimizing the first page load experience makes sense. The waterfall graphs in tools like Chrome are pretty great for examining this sort of stuff. This breaks it up a little differently than the browser would, in the sense that I'm breaking out the TCP connection, TLS negotiation, and HTML response; those first three things would be grouped into one bar in something like Chrome.
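The way a CDN typically learns what to push is a `Link: rel=preload` header on the HTML response; Fastly, among others, can interpret that header as a push instruction. Here's a small sketch of building one. The asset paths are invented for illustration, and whether the header triggers an actual push depends on your CDN's configuration:

```python
def preload_link_header(assets):
    """Build a Link header value listing assets an H2-capable CDN may
    push alongside the HTML response."""
    return ", ".join(
        f"<{path}>; rel=preload; as={kind}" for path, kind in assets
    )

header = preload_link_header([
    ("/css/styles.css", "style"),
    ("/js/app.js", "script"),
])
# The HTML response would then carry a header like:
# Link: </css/styles.css>; rel=preload; as=style, </js/app.js>; rel=preload; as=script
```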
Actually, the HTML download would also be grouped into that first bar, so the first four would be. And then the additional TCP and TLS negotiations are what you would often see on the left side of a whole bunch of assets, where what browsers typically do, if the server they're connected to doesn't support H2, is start up six simultaneous connections and basically multiplex over them to pull down all the assets. So it negotiates all these additional connections. This architecture means there's another penalty for separating your CDN from your pages: if you separate them, the browser now has to do another DNS lookup for the CDN, another TCP connection to the CDN, and another TLS negotiation with the CDN. You've created an entirely new phase of delay in loading the page by separating the page from the assets in that traditional way. This is that model here, where let's say you're using a traditional-style CDN that holds the static assets. You can get some good times, but you can't really compete with throwing everything together, because nothing compares to the rapid-fire result of talking to basically one server, a one-stop shop for everything, that is in the neighborhood. So we've done a bunch of analysis on this at Pantheon in terms of how it affects real-world sites. This was about 300 sites on the platform that we did this analysis on. When we integrated a CDN platform-wide last year, we had an unprecedented opportunity to research this, because most people don't get to do this statistically over a bunch of sites. Most CDN companies have a selection problem: the customers that choose to buy the CDN are not average sites. They're self-selecting; they have a budget for it; they want to do those optimizations.
In this case, we were able to select about 300 random sites on the platform and simply toggle them over from using Varnish with TLS termination in Chicago to running on top of the global CDN integrated with the platform, which runs on Fastly's network. That means about 50 POPs around the world, with content getting cached in all those places. Here's what we saw: about a 30% drop in load times. If you go from West Coast to Chicago versus West Coast to the closest POP, you see about a 30% drop. And it gets more and more dramatic as you look at the EU, which I think was measured from Frankfurt, and Asia, which I think was from Singapore. We got the median time to first paint across the 300 sites to drop below a second simply by changing the network they were deployed on and the way the caching models and TLS worked. That's pretty amazing, because in the case of those second and third categories, it's the difference between users abandoning and having a delightful experience. This was the distribution we saw. There are some outliers; this is basically a set of curves where we sorted the results along each curve. The top curve is the legacy infrastructure from Singapore, and the dark line in each is the next-generation infrastructure based on deploying HTTPS on the CDN and having page caching distributed worldwide. So you can see that, even though I was talking about median results before, basically everyone benefited across the board. Even the worst sites benefited, because even their situations were improvable by switching out the network. So I want to provide some updated advice on best practices, because I often get questions like, should I do this or that on the back end when configuring a site on these sorts of infrastructures? I mentioned that separate CDN domains are outmoded. Don't use them.
Use a proxy CDN: something like CloudFront, Cloudflare, Fastly, Akamai, Edgecast, any of these things that actually proxy back to origin and can cache the page itself. It reduces DNS lookups, it provides a one-stop shop for your users, and it's essential for providing great mobile experiences. Relatedly, don't use a separate host for assets. Some people, even with a single server, would do this old trick of, "oh, I have six domain names for my assets, even though they're all coming from the same server; now I can trick the browser into making more connections." But that creates a problem, because it's an optimization for the past. Almost all modern browsers and almost all modern mobile devices support H2. Also, HTTPS is now a benefit not just because it's possible to deploy quickly: most browsers will only use H2 on a site if there is HTTPS. It's one of those features they've locked behind deploying it, so it's a bit of a carrot there. And focus your mobile testing on mobile. The new analysis tools for sites are really focused on this, and that's great. I don't know how many people have used the new Test My Site with Google tool; I'm seeing about 10%, 20% here. That's an awesome tool. It runs on the same engine as WebPageTest.org, or something like that, but it presents the results in a much more digestible format, and it provides really interesting data around how much the network is affecting things versus how much the structural decisions of the page and the website are affecting the experience of mobile users. It'll give you two numbers, and I often like to subtract them. They don't invite this, but it's actually a really important thing to do. Let me just run this on something. Here it is. It always sends me to the en-GB one; I don't know why that is. Oops, not en-US. This has just not been working in my browser. I guess it was that. So what this does is it will take the site.
And this is unlike any speed test you would have used before, because what it's actually doing is limiting the bandwidth of the connection to a 3G-style network. They've shown that even if you see a fancy 4G LTE indicator, the real network performance you're getting is actually closer to an ideal 3G connection, so they limit it to that. They throttle the processor. They render the page in a mobile browser engine; they're probably using Blink, because it's Google. They check for other accessibility concerns on the site. And they also compare it against other sites in the same category, as well as providing very concrete suggestions for improvements you can make. When I do this as a demo at the booth, I usually like to do it like a cooking show, where I'll have the tab pulled up on one thing and another tab with the completed result, and I'll be like, "and now I'll put this in the oven." That's what I should have done. Anyway, it's a great mobile test. The top number it gives you is the total load time mobile users are experiencing, and then your estimated bounce rate from people being too impatient to wait on that. You might be shocked at what Google thinks your bounce rate is. It's probably going to be at least 30%, unless you've specifically already gone through this process. The other thing they provide at the bottom is another number: how much page load time you could save by doing things like optimizing images and other aspects of the content of the page. Now, this number is interesting for two reasons. The first reason is just the reason it's listed: optimizing images and these other assets on pages is really important.
But the reason I think it's really interesting is that you can subtract the two numbers and basically get an estimate of how much of a penalty you're paying for the way you're deploying your site. If you ignore the content of the site, how much of a tax are you paying for the way you have HTTPS set up, the way you have your server set up, the way you have your CDN set up, the way you have your caching set up? Ugh, anyway, I don't know how to make this go any faster. I think a bunch of people might be using the network down in the main hall, because I know we've done some demos on this infrastructure. So I'll just hop back to the deck. Use good image formats. Google's analysis tool will suggest WebP, although that's a bit awkward in the sense that some devices support it and some don't, so you have to do dynamic selection of the image format if you want to use it. Another thing you can do, rather than caching pages for short periods in the content delivery network — you know that dropdown box on the performance screen in Drupal that sets how long to store a page — is use techniques that let you store the page in the CDN much, much longer and only flush it when things change. We support that on Pantheon; we have a module for it called Advanced Page Cache that you can install. It basically tells the CDN all the ingredients of the page: this page had node 2 in it, and block 1, and data from user 3. If any of those change, the page gets flushed. If I save node 2, I might flush the block derived from it, something on the front page, et cetera.
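On the wire, that tagged-cache idea looks roughly like this. These headers follow the Fastly-style Surrogate-Key convention that several CDNs understand; the tag names here are hypothetical, not the exact headers any particular module emits:

```http
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Cache-Control: public, max-age=31536000
Surrogate-Key: node-2 block-1 user-3
```

When node 2 is saved, the origin asks the CDN to purge everything tagged `node-2`, so only the pages built from that content fall out of cache while everything else stays hot.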
That actually allows you to keep content fresh while maintaining really high hit rates in these CDNs, because the more you can provide a one-stop shop for users as soon as they exit their mobile networks, the better time they're going to have. If you're running your own server, you should look into this TCP congestion control stuff, especially for mobile users. One of the algorithms is called BBR. A common problem with mobile experiences, if you don't use a CDN, is that the origin server sends traffic down to the mobile device, but the mobile network is slower than the backbone network, so it gets congested. Then the server backs off way more than it should, way more rapidly than it should, and you get a staccato pattern of sending data and backing off. BBR is an algorithm developed at Google and available in basically all current Linux kernels. You can enable it, and it will cause the kernel to back off in a much more strategic way when sending data down to mobile users. This also matters for home broadband connections. Everyone's connection is slow compared to a data center connection, so backing off appropriately is good. H2 can be a mixed bag. I think H2 has benefits on mobile connections especially if you can use push with it, because then you can send assets down to the browser before it even knows it needs them. So I think that benefit is pretty compelling. In the future — so I originally came up with this theory that with H2 you may not need aggregated CSS or JavaScript, and some people at my company tested that theory, and I was wrong. So I'm here to tell you that despite the fact that H2 parallelizes downloading all these assets, not aggregating them will still cause a substantial performance penalty for your site. So at least for today, keep your aggregation going, especially for these mobile devices.
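Enabling BBR is a two-line sysctl change on a modern Linux kernel (4.9 or later). A minimal sketch, writing to /tmp for the demo; the fq qdisc pairing is the commonly recommended setup, but check your distribution's docs:

```shell
# Write a sysctl drop-in enabling the fq queueing discipline and BBR
# congestion control. On a real host you would write this to
# /etc/sysctl.d/ and run: sudo sysctl --system
cat > /tmp/99-bbr.conf <<'EOF'
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
EOF

# Show what we wrote. On a live system, verify the active algorithm with:
#   sysctl net.ipv4.tcp_congestion_control
cat /tmp/99-bbr.conf
```

Unlike loss-based algorithms, BBR models the bandwidth and round-trip time of the path, which is why it avoids the dramatic back-off on lossy mobile links.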
I hope that with the advent of technologies like HTTP over QUIC, we can stop relying on these kinds of crutches for dealing with performance issues. And in terms of last-mile improvements, some neat stuff is coming down the wire. I mentioned QUIC before; that runs HTTP over UDP to avoid a round trip. H2 push with cache digests is a new thing that's built into some tools, like the H2O proxy, if you're rolling your own stack — that's neat. What it basically does, rather than just pushing assets down to something like a phone that says "whoa, I've already got these things," is try to guess what the device already has, based on what's been sent to that device before and what a cookie suggests the device probably still has cached. So it tries to send only the things it thinks the device needs. There's also a new compression format gaining traction, and this would also help with relaxing some of the aggregation stuff. Brotli is a compression system that allows sharing compression artifacts — what would technically be called the dictionary — over multiple requests. Right now, when you gzip things, the compression is only effective on a request-by-request basis: the data used to compress one request is totally unusable for another. Brotli changes that. It allows multiple requests to share compression data. Say you have five different CSS files: right now they compress far better if you combine them into one and then gzip them, because they all share the compression data. Brotli allows sharing that data without aggregating the files. So if we get QUIC, and we get Brotli, I look forward to a day when we can actually turn off some of these things like aggregation, because that would be nice. Anyway, with that I'll open it up to questions.
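You can see the cross-request redundancy problem with plain gzip in a couple of lines. This sketch fabricates five similar stylesheets (the file names and rule contents are made up for illustration) and compares compressing them separately versus as one aggregate:

```shell
# Five small, similar stylesheets, like component CSS files on a site.
mkdir -p /tmp/cssdemo
for i in 1 2 3 4 5; do
  printf '.widget-%d { color: #333; margin: 0 auto; font-family: sans-serif; }\n' \
    "$i" > "/tmp/cssdemo/part$i.css"
done
cat /tmp/cssdemo/part*.css > /tmp/cssdemo/combined.css

# gzip each file on its own, then gzip the aggregate, and compare sizes.
separate=$(for f in /tmp/cssdemo/part*.css; do gzip -c "$f"; done | wc -c)
combined=$(gzip -c /tmp/cssdemo/combined.css | wc -c)
echo "separate: $separate bytes, combined: $combined bytes"
```

The combined file comes out noticeably smaller because gzip can reuse the repeated rule text across all five stylesheets; a shared dictionary across requests aims to give you that saving without shipping one big aggregate.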
This is a really great talk and I appreciate it. So implied in putting the whole site behind an origin-pull CDN is letting the CDN do a lot of the logic that we put in something like a Varnish config file today, because it has to decide what to pass through to the origin — there has to be logic at the edge, right?

It helps, but the edge helps even if it has no logic, because it negotiates TLS with something in the user's neighborhood, and TCP with something in the neighborhood. Even if your CDN did literally no caching whatsoever and just forwarded requests back to your origin, you would still be reducing the duration of multiple round trips by possibly 100 or 200 milliseconds. So you can buy yourself a lot of time even without a cache hit in the CDN — and of course if you do hit, you don't have to go back to origin at all.

Right, yeah. And then the HTTP/2 improvements — a lot of that is still sort of half-baked inside Drupal itself, right? A lot of that HTTP/2 speedup is the CDN doing it properly, because Drupal, for instance, isn't going to build manifests and that kind of thing.

So it's possible to use H2 with the origin completely ignorant of it, because it's just a protocol change, but it's also possible for the origin to add extra headers that do things like telling the CDN or proxy to push content. It's probably about 50-50 whether there's a benefit on a mobile device without any origin push, and there's probably more benefit than not if you add it — but adding the origin-push or edge-push stuff requires adding a module to Drupal. It doesn't do that out of the box.

Hi, David. Hey. Great talk. I think all these things about optimizing at the network level, back-end time to first byte, all of that is fantastic.
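Those "extra headers" from the origin are typically preload hints in a Link header, which many CDNs and H2 proxies interpret as a push instruction. A sketch with hypothetical asset paths:

```http
Link: </themes/mysite/css/styles.css>; rel=preload; as=style
Link: </themes/mysite/js/app.js>; rel=preload; as=script
```

The origin stays on plain HTTP/1.1 and just emits these; the edge terminates H2 and decides whether to actually push the named resources to the client.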
I'd also just encourage everybody to learn about what happens after the page gets to somebody's browser — front-end performance optimization — because once you get this part optimized, there's a whole other level of optimizing the rendering of the page. Things like optimizing font loading and responsive images have a huge impact, and Mike Harper has a session tomorrow about front-end performance. I'd also say, real quick, I'd be very cautious about H2 push, because that will push assets on every single page load.

It depends on whether you use the cache digest stuff. It's available in some open source projects.

Yeah, it's not all there yet. So it's something to keep an eye on, but something to be cautious about right now. And I think if you have CSS that's loaded on every page, definitely aggregate that — but if you have stuff for certain components that only show up on certain pages, I do think it's worth breaking some of that out of the aggregation, based on my experience. Also, H2 push is not limited to just assets. You can do things like pushing data down to the browser to tell it to do an early DNS lookup for something you know will also be included on the page. So if you're embedding something like a Vimeo video on every page, it can be worth using H2 push just to have the browser look up the Vimeo domains early. I just don't think it's a one-stop fix-all. There are a lot of caveats.

But all the things around font loading and image loading would be encapsulated in the time-to-first-paint path. Sure. Okay. Anyway, thanks.

So you had a slide earlier with the old stack versus the modern stack, with various versions of TLS, HTTP, et cetera. How do I know what versions of those I have? Say I'm running Apache, for example. Is it based on the Apache version, or is there more to it than that?

Oof. Gosh.
So I've offloaded this so much, for so long, to CDN partners and the like that I haven't done this analysis in a while, other than taking their word for it. You can definitely tell for some of these things. I can tell you that if you pull the site with curl, it'll tell you whether it's using H2. And I can tell you that if you use OpenSSL's s_client subcommand, that will tell you the version of TLS being negotiated. The HTTP response line tells you what version of HTTP, as well as the curl path I mentioned. QUIC is not that broadly deployed right now, so I don't know of good tools for testing that. Were there other things you were hoping to test in terms of versions?

I just really want to know what I have.

Okay. Will that tell you? Okay. All right, thank you. It'll definitely tell you if you have an insecure version. Yeah. And actually, I think there's a lot of overlap between the insecure versions and the badly performing versions at this point.

Any other questions? Okay, thanks.