Okay, there we go. That's better. Now we can see what I'm looking at. So this talk is all about tips, tricks, and performance guidelines to get better page load speeds, as well as what TCP is, what HTTP is, and SPDY, or HTTP 2.0. You can find these slides at imcco.com slash 1000 milliseconds. You can also find them up on my blog at imcarreco.com. So first of all, why is having a performant site so important? Most of us have heard we want to have a site load in under 1,000 milliseconds, but we don't exactly know what set this number or why it's so important. Google, Amazon, Bing, and a bunch of other large corporations have done a bunch of studies into user interactions with sites, and they found that after about 1,000 milliseconds, bounce rates increase immensely. A mental context switch happens after about one second, meaning a user goes from focusing on your content to thinking about, hmm, did I leave the gas on, instead of actually focusing on the site. For Amazon in particular, this causes loss of sales. For both Amazon and Bing, it reduces the amount of ad clicks, and it actually hurts them selling things as well. We also have a lot of countries that are primarily using mobile devices at this point, and more and more are joining them. I'm thinking primarily of Egypt right now; I think 70% of their population only uses a mobile device. That is a huge number. And as we all know, looking around the room, we're all using mobile devices for most of our internet connections as well. And we can't just rely on increases to the speed of our mobile connections to fix this. 4G, although fast and great and wonderful, is not necessarily going to solve all of our problems just by making something faster. We need to prepare for 3G and EDGE connections, as well as just low-bandwidth and high-latency connections.
We can't necessarily assume the exact conditions our users will be in as they try to access our sites, so we need to make them as fast as possible so they can always be accessed no matter what. So the first rule when it comes to creating a performant site is that all of these rules are more like guidelines. We have a few hard and fast rules, like don't have any render-blocking resources, but mostly, every single site you build is going to have a different set of libraries, different CSS, different requirements, different needs, different user bases, and all of those require you to go through and try new things, test, play around, and see what combination of these tools and ideas can be used to make your site as performant as possible. There are some things that might make one site incredibly performant, say, inlining all the CSS in the head of each document, but that might also incredibly decrease the performance of another site where we need that CSS to be cached more readily. So that leads to the second bit, which is: test everything, much like Lieutenant Commander Data and Geordi La Forge do. Every single time you make a change, test it. See what's actually helping, what might be helping, and what might be hurting as you make these changes. Part of testing everything means starting with a baseline test before you try any optimizations. Know what your site is loading at right now from across wherever your user base is, so that you know: okay, it's loading right now in roughly 2.5 seconds, and we want to get down to 1,000 milliseconds. Then you can know: when we did X, it improved things this much; when we did Y, it improved this much; when we did Z, it actually increased the page load time a little bit, so we're going to undo that. So always, always, always be testing. There are a lot of tools online for this.
WebPageTest is a site that will test your site from servers across the globe. YSlow is a browser extension that will give you hints on why your page might be slow. PageSpeed Insights is a Google tool that will analyze your site and also see what might make it a little faster. Chrome's DevTools also have a lot of great things in there that let you look at all your DNS lookups, download times, what's blocking, what's not. It is an amazing tool once you start getting into the depths of it. And CasperJS does a lot of front-end testing as well. Instead of going into how to use all these tools: tomorrow, Chris Schruppel is doing a presentation on automated testing. Go to that. It is amazing, and he will go into all of that and more for how to do all your testing. So the first thing is, we want to make sure that content gets to the user as fast as possible, so the user can see what we want them to see as quickly as humanly possible on all their devices. To do that, we need to remove any render-blocking resources. Render-blocking resources: it's kind of a hard image to see, but we have the Balrog and Gandalf. A render-blocking resource is Gandalf standing there with your website, the Balrog, trying to get through, saying: nope, you can't do this. It's a CSS file in the header, a JavaScript file in the header, things that, as the browser is parsing, will make it stop and try to get that resource before actually displaying any content. To give an example (this is from Ilya Grigorik's talk on the 1,000 milliseconds barrier): when a browser goes to get a web page, the box on the left is the network. It will set up a TCP connection and make an HTTP request; we're going to go over that in a second. Then it will grab the HTML, and as it hits each CSS file, it will grab the CSS. From the HTML, it creates the DOM. And from the CSS, it creates the CSS object model.
That's the CSSOM. The browser will not render anything on a page until it has both of those pieces in completion. It takes both the DOM and the CSS object model to create the render tree, and eventually the layout and paint, which puts pixels on the screen. Now, there's also the side note that you have JavaScript, which can delay execution, but which is not required for the render the way the DOM or the CSSOM is. So our goal is to get the DOM, JavaScript, and CSS to the user as quickly as possible and not have blocking from any of those three pieces, as all three of them can prevent a paint from happening. So first, for JavaScript, the quick and easy thing we can do: move JavaScript to the footer. If there's any JavaScript that is not 100% needed for the paint, move it to the footer. This allows the entire HTML document to be parsed and loaded into the browser instead of pausing and waiting for those files to be loaded up. We can also utilize async and defer attributes on our JavaScript files, and these do exactly what they sound like. The async attribute will asynchronously load a JavaScript file as opposed to blocking on it, and the defer attribute will also asynchronously load it, but then defer running it until after DOMContentLoaded. We can also inline critical JavaScript: if there's anything that must run before the paint happens, we can inline that directly. For the first two, the Magic module will actually help you do both of them in Drupal. It's as simple as enabling it, going to your theme settings, and saying: move JavaScript to the footer. And there's also an option to leave your libraries in the head if there's a module or a WYSIWYG that will freak out when all your JavaScript is in the footer. CKEditor has actually been patched so that it will work completely with all of your JavaScript pushed to the footer, so that one is okay. The WYSIWYG module I cannot comment on.
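To make the async/defer distinction concrete, here's a minimal sketch in markup; the file names are placeholders, not anything from the talk:

```html
<head>
  <!-- async: fetched in parallel, runs as soon as it arrives -->
  <script async src="/js/analytics.js"></script>
  <!-- defer: fetched in parallel, runs after the document is parsed -->
  <script defer src="/js/enhancements.js"></script>
  <!-- inline only what truly must run before first paint -->
  <script>/* small critical snippet */</script>
</head>
<body>
  <!-- ...content... -->
  <!-- everything else moves down here, after the content -->
  <script src="/js/site.js"></script>
</body>
```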
For our CSS, which will also block, we can inline critical CSS. That's right, the fold is back. Putting all of the critical CSS inline at the top of the document allows everything above the fold to be displayed quickly, easily, and correctly. For this, there are some awesome Gulp and Grunt tools out there that can help you with grabbing the CSS that is critical. What I have been playing with is treating the CSS that's critical for an approximately phone-sized viewport, something like 400 by 600, as the critical CSS. And then from there, it loads up all the rest of the CSS asynchronously using loadCSS. loadCSS will, instead of having your CSS be blocking, load it with JavaScript afterward, thus preventing your browser from pausing to go download a CSS file before it actually paints anything. So with a mixture of those two things, you can make it look like your entire site is loaded and painted and pretty and beautiful without actually having everything loaded, therefore getting content to the user faster, while everything else loads in the background very quickly. Currently, there's not really an out-of-the-box solution for Drupal for this yet. I'm certain there's someone here who really wants to take on creating that contrib module. Part of this, though, is that it needs a good bit of personal testing to make sure it's accurate and working correctly the way you'd want. So, if we get all of our render-blocking resources out of the head, the next piece that will cause issues with our performance and our page load is TCP: the internet going and grabbing the document from the site. So, TCP and HTTP connections. Sadly, HTTP 1.1 doesn't really utilize TCP as it's meant to be used, and therefore we have a bunch of tricks and other things that we do to make it a little easier. When it comes to thinking about TCP requests, I always think of the movie Apollo 13.
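The loadCSS pattern described above looks roughly like this in markup. This is a minimal stand-in with placeholder paths; the real loadCSS library does more (media toggling, timing workarounds), so treat this as the idea rather than the implementation:

```html
<head>
  <style>/* critical above-the-fold rules, inlined */</style>
  <script>
    // Create the <link> from JavaScript so the stylesheet never blocks
    // the first paint.
    function loadCSS(href) {
      var link = document.createElement('link');
      link.rel = 'stylesheet';
      link.href = href;
      document.head.appendChild(link);
    }
    loadCSS('/css/full-styles.css'); // placeholder path
  </script>
  <noscript><link rel="stylesheet" href="/css/full-styles.css"></noscript>
</head>
```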
This is from the iconic scene where Houston is calling up to Odyssey, the command module, as they're coming back, and they have a three-minute outage where everyone is waiting to see if they burned up into nothing. The communication between Houston and the command module is similar to the issues that we have when it comes to TCP requests. It isn't that they're sending a lot of data; all they're sending is, "Odyssey, this is Houston, do you read?" But it takes forever to get back and forth to the moon. It's not the size of the payload that's the issue, it's the back-and-forth time. There's only so fast that light, or in their case sometimes sound, can travel between two places. It's why, when we're talking with a Mars rover, it takes minutes just to send a single message and get a return message. So it's not really the speed of the connection that's our issue, it's latency: the round-trip time between us and the server. So, these two charts. On the left side, we have the page load time as we increase bandwidth. As you can see, at the far left is one megabit per second; it takes a long time, but then it slowly tapers off, and once we get past about five megabits per second, a faster connection doesn't really help the page load time. There's a point of diminishing returns. However, the right side is the page load time as we decrease latency, or the RTT, the round-trip time between us and the server. And that is a straight, linear association: the more we can decrease latency, the faster the page will load, hands down. So it's not going to be Google Fiber that saves us with a faster internet connection. It's going to be our websites lowering the latency between us and the user as best we can. To give some numbers for a single round trip: if we're on Google Fiber or a fiber connection, a DNS lookup will take about 40 milliseconds.
So that's going from your computer to a DNS server and back. A TCP connection will take about 60 milliseconds to create a connection between you and the server, more or less. A TLS handshake, if we want to use HTTPS, our SSL encryption, will take between 60 and 120 milliseconds, depending on how our server is set up. And then the actual HTTP request, the point where we can actually tell the server, now that I've initiated a connection and set it up with SSL, give me index.php, is about 60 milliseconds. So that puts us at about 220 to 280 milliseconds that we've already spent just to initiate a single TCP request. That leaves us about 700 milliseconds of leeway to actually load our site in less than 1,000 milliseconds. The problem is, when we go to LTE or 4G, those times increase. It's about 100 milliseconds to go back and forth to the DNS server; that's actually the required value to be real, true LTE. So that puts us at 400 to 500 milliseconds, meaning that half of our time is already taken away just in the initiation of a single TCP request. And when it comes to 3G, that time increases even further. The latency for a 3G connection is immense. It goes up to about 200 milliseconds for a single round trip, which turns into 800 to 1,000 milliseconds to initiate a single TCP request. Now, all of these numbers assume that our phone already has an active connection with a tower. So if someone has had their phone in their pocket for, let's say, half an hour, and they get it out and load up Safari and go to your web page, it will have to pause even longer to get a connection to the cell tower before any of this can start. So at this point, if we don't return something back in the very first packet response on a 3G connection, we've already missed our target.
So the first thing we can do to decrease latency is the simplest one: use a CDN. Fairly simple idea. We are most limited by the speed of light, by the distance it takes for light to travel from you to a server. So put your content in between those two points. Make the servers closer. Utilize a CDN when you can. There are a bunch of services online, CloudFlare, MaxCDN, Amazon has a CDN. If you can use one, and if it's needed, do so. Now, there's also a point here: if your entire user base is in and around LA, you don't need a CDN node in New York City. However, if your user base is all over the US, use a CDN with nodes in Dallas, Chicago, LA, and New York City so you can have the fastest connection time for all of your users. The second thing we can do is get everything to respond in the first packet response. TCP servers following RFC 6928 will respond with ten packets initially. This is standardized for newer servers, and you can check to make sure that your hosting company and your server are actually set to ten packets. This is the initial response that happens after a TCP connection is made and that first HTTP request comes in: 14.6 kilobytes of data can be returned in that first set of packets. A good example of this being done is my blog. If you go to the home page or any page, it will load the entirety of the site in less than 14.6 kilobytes, and then it will load everything else asynchronously after that. So it's purely the time of your phone creating a TCP request that loads the full site. This is the best way to have a performant site. At the same time, it's also incredibly difficult. It requires a lot of tuning your Drupal installation, loading everything asynchronously, and doing everything at 120%. There are also a bunch of tricks we can do with HTTP 1.1.
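The 14.6-kilobyte figure above falls out of two numbers. RFC 6928 raises TCP's initial congestion window to 10 segments, and a typical maximum segment size is 1460 bytes, so the server can send 10 × 1460 bytes in the first flight before waiting on the client's next ACK:

```javascript
// Where the "get it all in the first packet response" budget comes from.
const INITIAL_CWND_SEGMENTS = 10; // initial congestion window, RFC 6928
const TYPICAL_MSS_BYTES = 1460;   // common TCP maximum segment size

const firstFlightBytes = INITIAL_CWND_SEGMENTS * TYPICAL_MSS_BYTES;
console.log(firstFlightBytes);        // → 14600 bytes
console.log(firstFlightBytes / 1000); // → 14.6 (the "14.6 kilobytes")
```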
Most of these you've probably heard before. Most browsers will open six TCP connections to offset the fact that it takes so long to initiate a single one of them, and then we can download six assets at once. So the first trick is called domain sharding. It's where we have our assets load from several different subdomains. They're all really serving the same assets the main domain would, but because the browser opens six TCP connections per domain, loading from several subdomains gives us more than six connections in total, so you can load more JavaScript, CSS, or image files at the same time without any slowdown. We can concatenate our files. Putting our files together (Drupal does this for us automatically for CSS and JavaScript, which is great) means that instead of initiating more TCP requests to download more and more assets, we have fewer assets, and the fewer assets we have, the better. Spriting: same idea. One big image is easier to download than 13 tiny images, because each one of those requires more back-and-forth with the server, which takes longer than just receiving more data. And we can also inline assets. Where it makes sense, possibly base64-encoding some very small images, inlining essential CSS or JavaScript, things of that nature, because each of those means fewer TCP requests, which means fewer round trips between you and the server. And really, the best thing you can do, the fastest request you can ever make, is the one that you don't make. Get rid of things on your site that you don't need. If you're loading up the entirety of core's CSS, why? You can remove most of that and maybe add the pieces that you do need into your own theme, and that will remove more and more data that would otherwise be sent.
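The inlining idea above can be sketched in a couple of lines: encode a tiny asset as a base64 data URI so it rides along inside the HTML or CSS instead of costing its own round trips. `makeDataURI` is a hypothetical helper name, and Node's `Buffer` is used here just for the base64 step:

```javascript
// Turn a small asset's bytes into a data URI suitable for inlining.
function makeDataURI(bytes, mimeType) {
  return `data:${mimeType};base64,${Buffer.from(bytes).toString('base64')}`;
}

// Three placeholder bytes ("GIF") standing in for a very small image:
console.log(makeDataURI([0x47, 0x49, 0x46], 'image/gif'));
// → data:image/gif;base64,R0lG
```

The trade-off: the inlined bytes can't be cached separately, which is why this only makes sense for very small assets.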
So the best way to fix these issues is just to make fewer requests to the server. That's all for HTTP 1.1. Coming down the line, however, is what's called SPDY, or HTTP 2.0. SPDY is a protocol developed by Google and a few other people that is the base for what HTTP 2.0 is going to become. It is fantastic. So what is SPDY? For those of you who have not heard of it before, SPDY is a lot of wonderful things to increase performance, and its goal was to decrease page load time by 50% just by using a different protocol. The first thing it can do is multiplex streams, meaning it can send multiple pieces of content back at once, as opposed to having to initiate six different TCP connections where each one downloads a single piece of content. A single TCP connection can download all of your content. So instead of needing the full round-trip time for each one of those, the back and forth, back and forth, back and forth to initiate TCP connections, it can use the one it's already created and download everything in a single TCP connection. Less latency, better for us. It can do request prioritization: this CSS file and this JavaScript file are the most important, get those there first, and then all these images can load after that. So it can prioritize what is needed by the user before anything else. Server push: this is what I'm most excited about. Currently, a browser will download, say, index.php, start reading it, and say: oh, there's a style; okay, now I want this style file. With server push, it will get index.php, and with that, your server will say: okay, you also need these two files, and it will start pushing those files immediately. Instead of having to wait for the browser to initiate the request, the server does it for us. Again: less back and forth, less latency, faster site, all good things. It also removes redundant headers.
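The multiplexing point above can be put into back-of-the-envelope math. This is an oversimplified model with an illustrative function name (real pipelines are messier), but it shows the shape of the win: under HTTP 1.1 each connection fetches roughly one asset per request wave, while one multiplexed SPDY connection carries every request at once:

```javascript
// How many request "waves" are needed to fetch all assets, given how
// many requests can be in flight at once.
function requestWaves(assetCount, parallelRequests) {
  return Math.ceil(assetCount / parallelRequests);
}

console.log(requestWaves(24, 6));  // → 4 waves (HTTP 1.1, six connections)
console.log(requestWaves(24, 24)); // → 1 wave (one multiplexed connection)
```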
So currently, all of our requests have GET, the URL, the accept-encoding, all of these headers that are repeated constantly throughout a session. HTTP 2.0 removes that: it sets up your first request with all of the headers it needs, and from that point on it only sends the headers that are different. So if there's a different file, or something new or changed, only the change is sent instead of the entirety of the headers. And it also compresses our headers. Think back to the first packet response: that first packet response includes not just our site files, it also includes any headers and any cookies we're sending along with it. Compressing those headers gives us more space for our page, as opposed to headers that need to be sent along. So, HTTP 2.0 tricks, things that we can do with SPDY to make our site faster. The simple version is: pretty much don't do anything we did in HTTP 1.1. All of those things will actually hurt performance with HTTP 2.0, which kind of sucks. Take domain sharding. Domain sharding helps us most of all because it starts more TCP connections, thus allowing us to download more things. But since we only use a single TCP connection with HTTP 2.0, and that one connection will download everything for us, if we shard our domains, then instead of getting all the performance power of a single connection, we have multiple connections splitting it up. So sharding is something we shouldn't do if we're using SPDY. Concatenation can also hurt performance. Say we have some files, like jQuery and our jQuery extensions, that don't really change much. We're not going to be updating jQuery; we're Drupalers. Got a laugh out of there.
But if we concatenate that with, say, our site's JavaScript file that we are updating once a week with changes, updates, new things, then every time we update that, the browser will go and re-download all of jQuery, all of those extensions, and all of our site files. However, if we don't concatenate those together, then our jQuery and our library files, which are pretty much static, will stay in the browser cache, and everything else will only get re-downloaded as it changes. So instead of re-downloading all of it every time, the browser only re-downloads the part that changed. Utilizing server push for important assets: this is the one that's going to be most interesting, and I have yet to do a lot of testing on implementing server push. That's because only Apache has implemented server push; I use nginx, which hasn't gotten there yet. But using server push to actually send out all of our assets along with our files is one of the most important things we can do. It will decrease the back and forth that we need, and it will be fantastic and wonderful, and all of the angels in heaven will cry tears of joy. So there's a SPDY module that I'm researching to see how we can implement server push in Drupal. It's a little bit more complex, so I don't want to just throw out some code and say, use this. I'm doing a bunch of testing right now to see what we can actually do to implement this properly. So, a case study of putting all of this stuff together, what it can achieve and what happens from doing it, is my blog. Because if I broke my blog, I wasn't going to get a complaint from a client. So if you want, you can go to imcareer.com, preferably not all at once; stagger yourselves so you don't destroy my server. But I did a lot of testing to see what could actually help a site, what will help performance, all of these bits and pieces that I have just talked about. So, it utilizes SPDY.
It is not using server push, but everything else about SPDY is used 100%. It has a custom CDN. I'll explain why I needed to build a custom CDN in a second, but I have a server here in Amsterdam, one in New York, and one in Singapore that are delivering the content at all times. So unless you're in Australia, Africa, or Latin America, because DigitalOcean doesn't have servers there yet, your content will be very, very close to you. And to do that, I actually moved all my DNS to Route 53 on Amazon Web Services. Route 53 allows you to serve different IP addresses depending on the latency between the user and an Amazon data center. Amazon keeps track of which data center is closest to different IPs, thus decreasing the latency as best they can. I also inline all of my critical CSS on the first page load. So when you initially go to the site, you get the entirety of the site's CSS in your header, and then it asynchronously loads up the styles and the fonts from there. On the second page you go to, it just puts in the link tag, assuming that you're going to have all of the files in your browser cache. So this keeps the initial page load fast by putting everything inline, but it also doesn't slow down subsequent page loads, because those just assume you have everything in the browser cache, which you should, since it was asynchronously loaded on the first page. So both the first and subsequent page loads will be just as fast, if not faster. And this is the reason why I had to build my own custom CDN: doing a solution like that requires an enterprise plan with any CDN, and I love you guys, but I don't have $5,000 a month to pay for that, hence me building my own. And then it loads all the rest of the CSS asynchronously. So all of the fonts and all of style.css will load asynchronously after the initial paint.
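The first-visit-versus-repeat-visit trick just described can be sketched as a tiny decision function: if a cookie says the stylesheet is already in the browser cache, emit a plain link tag; otherwise inline the critical CSS. The function and cookie names here are illustrative, not taken from the actual site's code:

```javascript
// Decide whether to inline critical CSS or emit a <link>, based on a
// "have you been here before" cookie.
function styleTag(cookieHeader, criticalCss, cssHref) {
  const hasCss = /(?:^|;\s*)css_cached=1(?:;|\s|$)/.test(cookieHeader || '');
  return hasCss
    ? `<link rel="stylesheet" href="${cssHref}">`
    : `<style>${criticalCss}</style>`;
}

console.log(styleTag('', 'body{margin:0}', '/all.css'));
// → <style>body{margin:0}</style>
console.log(styleTag('css_cached=1', 'body{margin:0}', '/all.css'));
// → <link rel="stylesheet" href="/all.css">
```

The server (or an edge cache) would set the `css_cached` cookie alongside the first response, so every later page can skip the inline payload.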
So, the results: my PageSpeed score and YSlow score are both 98. One of those missing points is thanks to Google Analytics; their script doesn't use far-future expires headers, because they need the ability to change it constantly. And then I also don't gzip some of my assets, because the asset is so small that it would actually take more time to compress and decompress it than it would to just send it as plain text. Pingdom gives a score of 93; I forget exactly what they want me to fix or change, but I'm getting about sub-500-millisecond page loads, or content paints, from any of the places I've tested. So across Europe and the US and in Asia. Can't test from China, though; my site is blocked there. WebPageTest, the same thing: about 400 to 800 millisecond load times. Occasionally my server responds slowly and I get a blip. So, what's next on my testing list for this site? There's actually a full blog post, and the code for both my site and my server setup is completely online on GitHub. If you go to my site, there's a blog post on exactly how everything happens, and it's purely open source. What's next in line is just to never stop improving. There's always going to be something new, or little things to try and test and play around with, both with my site and with any site that you make. I highly encourage you to play around, test new things, see any little performance benefit that you can get. Try to hit the big things first: gzipping assets, CDNs, loading CSS asynchronously, before trying to do any weird hacky things like loading two different pages depending on whether your assets are in the browser cache. That helps, but do the big things first. With that, are there any questions? Yes? [The first thing you notice when you go to your site is that it has no image assets whatsoever and is very thinly styled. What do you say for when that's not an option, and when procedural images aren't an option either?] You are absolutely correct.
I cheated immensely, in that my site loads fast because I don't use JavaScript or images. Sorry, guys. But a couple of things. One: optimize images. Do what you need to get them as small as you can. Where possible, always use SVG over other file formats, because SVG is all text and you can gzip it. Make sure you're using the right file format: if you're using PNG images for photos, don't. Use JPEGs, progressive JPEGs, any of those fun tricks. And remember that images load asynchronously anyway, so they won't necessarily affect the first packet response issue, but make sure that past that point you're making them as small as you can. And there's the idea from the post "one less JPG": it is better to remove an image from your site than it is to, say, remove jQuery, because a single JPEG will often be much larger than jQuery is. So, that depends. How we generally go about it is: we have a project, we set a performance budget. We have a specific budget, the site must load under these circumstances at this speed, and we set those, and those are relatively firm. When it comes to designing for the web, it's not about creating a pretty picture. It hasn't been about creating a picture since we stopped using Flash, may it rest in peace. At this point, building a site is about utilizing the constraints of the web. Browsers can't do some things. The object model can only do so many things. The box model can only do so many things for us. But one of those constraints is also performance. Explaining exactly what will help and hurt performance to a designer isn't always the easiest thing, because you can quickly go down a rabbit hole.
But a designer's work must be in parallel with a developer or a performance engineer, so that if a designer comes up with "I want this giant image" or "I want a giant video in the background that's going to play constantly, and it's going to be 30 minutes of a fireplace," a developer can sit there and say: no, that's not a good idea. And they should have the power to override them and say: this will hurt the performance budget that we set at the beginning of the project; therefore, we cannot do that. Yes? [If I have Varnish or some other reverse proxy software...] Can you repeat the question? Sorry? [If I have Varnish or some other reverse proxy cache software, are there any rules in a SPDY environment?] Yes. So, this is described further in my blog post, but for my site, there is an nginx server on the back end that sends the site to Varnish, which is a static site cache, which then sends it to nginx again, because only nginx can use SPDY. There's actually a presentation going on literally right now called "the SPDY sandwich" that talks about this in more detail. But Varnish isn't capable of serving SSL at all, and especially not SPDY. And the guy who writes Varnish has supposedly said he doesn't find this interesting and doesn't plan on implementing it. So generally, you're going to have to do some sort of sandwich with Varnish to get it to work with SPDY correctly. At the same time, it is still much faster to do that than to do anything else, because Varnish is a needed thing. The other thing I do, and again, the blog post goes into the technical side: Varnish can do some simple what are called ESI tags, or edge side includes. So my site is mostly static, in that on every page there's an ESI tag that says, load up the header. And the header is the bit that takes a cookie and says: okay, is this in the cache? Then send the link tag; if not, send the inline bit. Thank you. Yes, yes.
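The ESI setup just described looks roughly like this inside the cached page. The `src` path is a placeholder, and Varnish needs ESI processing enabled for the page for the tag to be expanded:

```html
<!-- Varnish replaces this tag with the per-visitor header fragment,
     while the rest of the page stays fully cached. -->
<esi:include src="/fragments/header" />
```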
So the CSS loading trick actually happens on every page of my blog, and it does that no matter what your point of entry is. And I think that for every site, you can't assume the point of entry; you're 100% correct. But because of that, it still means we can put all of our header and base styles inline, which will at least display content in a reasonable manner to the user quickly. As far as SPDY requiring SSL: that's true. And although that will require one more full round trip between you and the server, protecting your users in this day and age is worth 100 milliseconds of your budget. I will stand firmly by that one. So, well, two things. One, going back to "test everything": everyone should be testing things as they're building things. And if you're doing just, say, minification, there is no reason why minification alone should break your site. If it does, then there are bigger issues at hand that need to be fixed. Drupal already minifies our CSS. For minifying JavaScript, there are modules that can help you with that; there's both Uglify and the Speedy module. I prefer Uglify, since UglifyJS does it on the fly. But at the same time, none of those are the best solutions, and I've been playing a lot with trying to find something better. Anything else? Questions? Yes. So, Google Analytics isn't actually hurting my page load; it's just hurting my PageSpeed score. So it's purely a point of pride that it's hurting. As far as loading external services: I actually had a similar thing where I had a client who was using one of the many social networking services that loads up all your fun widgets. And what I said is a couple of things. One: if they're just using Facebook, Twitter, and Google+, or a small set of services, creating those buttons yourself is incredibly easy. All of those services have a simple share link that you can create, that you just have a hyperlink for. So there's no reason to use services like that if you don't need to; avoid using them as much as you can.
At the same time, some of these services are being a lot smarter. So for example, AddThis has a way of asynchronously loading its code, and asynchronously loaded code will not hurt your page load. Your content will still get to the user, the page will still load as expected, and then AddThis will go and do its thing. AddThis is actually pretty good about this, so if you have to use one, I would recommend them. But make sure that you're using the most performant possible way of loading these scripts, and most services should have a document describing that. If they don't, and they don't have a way of being loaded asynchronously, find something else. There are too many really good services out there that do load asynchronously; there's no point in using one that's just going to hurt your page. Thank you. Thank you. Anything else?

So the question: you have to make a choice, then. You have to optimize either for HTTP 1.1 or for SPDY? Is there a way to, well, I can imagine scenarios in which you would want optimizations for both. You have one website, but you also know there are high-end clients, people with MacBooks and so on, that can support SPDY. Is there a way to build an if statement that checks whether or not the...

Yes. So if you're using Apache, and I don't know what the equivalent is for nginx, there is an environment variable that is set to true when the connection is using SPDY. So you can deliver a different page depending on whether you're on SPDY or not. That is the best way. It also requires a little more back-end coding, which is fun. But you're right, you do have to have a modern browser to use SPDY, and HTTP 1.1 is probably going to be around for at least another six to ten years.
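That environment-variable branch can be sketched in PHP (assuming Apache with mod_spdy; the variable name `SPDY` and both helper functions here are illustrative, not a real API, so verify against your module's documentation):

```php
<?php
// Hypothetical sketch: branch asset delivery on the environment
// variable that mod_spdy sets for SPDY connections.
$usingSpdy = !empty($_SERVER['SPDY']);

if ($usingSpdy) {
  // SPDY/HTTP 2.0 world: serve small individual files and let
  // multiplexing do the work; no concatenation or spriting needed.
  serve_individual_assets();   // illustrative helper
} else {
  // HTTP 1.1 world: serve the concatenated, sprited, inlined
  // variant to cut down on round trips.
  serve_bundled_assets();      // illustrative helper
}
```

The point is simply that the server, not the browser, decides which set of optimizations each connection gets.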
So it's something to think about, to prepare for, and to know is coming down the line, but it's not necessarily something to chase unless you have the ability to tune at that level, since the tricks for the two are completely separate; in fact, again, all the tricks that help for HTTP 1.1 hurt for HTTP 2.0. That being said, SPDY is a lot more prevalent than you realize. There's actually a cool browser extension you can get for Chrome that will tell you when SPDY is being used, and you quickly realize that anything from Google, Facebook, Twitter, all of your really big sites, has been using SPDY for years. And they've been using SPDY to increase performance immensely. So the big sites have all been spending the manpower to do that. It might not be plausible for, say, a tiny site or your local pizza kitchen, but if we're working on a big site where performance is really an issue, it's something that we should be looking at and thinking about. Any more questions? I have not. That's partially because, what? Repeat the question, please. Oh, have I tried to use any JSON or client-side templating on the front end? And I haven't, because I don't like JavaScript. This is one of those things: I'm a front-end developer who avoids writing JavaScript for the DOM. Don't tell my boss. I know. So that thought hasn't even crossed my mind, but it does pose an interesting question. I don't know. I'd have to take a look at it and see what's possible. Anything else? Okay. Before I leave you today: if you're really interested in this, Ilya Grigorik has a book, High Performance Browser Networking. It is fantastic. It goes into detail about why these things matter, what the issues are with TCP and HTTP 1.1, how SPDY works and why it was developed, and how wireless networks, 4G, 3G, all of these things work at a base level.
I can't quite tell you whether it answered more questions or gave me more questions, but it is certainly the book on having high performance and using the browser in the best way possible. Again, my slides are at imcco.com slash 1000 milliseconds, or on my blog. And thank you. And if you can go to this page on the Drupal.com website, there's a way to fill in a "how did I do?" section; I would be really appreciative. Thank you guys very much. Also, stickers and buttons. And t-shirts. Because I bribed people before they gave me that.