But the thing that we need to think about is speed. So, there's a great article. Who knows what this is? That's correct: it's the save icon from Microsoft Office. It used to be a physical object, but you don't see them any more, particularly. This article, from a guy called Pete Davis, asks how many floppy disks a modern article on The Atlantic takes up, if you were to download it, put it on some floppy disks and pass it around to your friends. It was an article about dinosaurs, which is awesome, because dinosaurs are awesome, as my toddlers tell me.

So, if you were to take the four pretty large images, stick them in an RTF file with the text of the 6,000-word article, how big would you say that is? Any guesses? Sorry? Actually, only 400K, and most of that's images. The text is about, I don't know, 37K worth of plain ASCII text. Fits on a floppy disk with space to spare. If you were to download it over the internet, from The Atlantic's website, a modern publication, that's how big it is. We're on the extreme end here; a lot of that stuff is video ads and so on. When I visited with an ad blocker, it was only about 3 MB. So that's sort of nuts. It's this many floppy disks. That was a productive half an hour on the internet, by the way. Anybody recognise any of those, still got them in their loft? My particular favourite: Day of the Tentacle. Absolute classic. There's a small coda, of course: this article, by Pete, was published on Medium. Oh, well.

So at this point I'd like to apologise that, despite my Keynote-fu, it doesn't have a doodle-oo, doodle-oo, Wayne's World-style take-us-back-in-time animation. So this is what we have. Let me take you back to 2010. The impossible chasm of the Rails 2-to-3 migration. Ruby 1.9.2. People were walking around with iPhone 4Ss, like giant-handed savages. In those five years, Marvel and Disney have built an entire cinematic universe. Bigger, bigger, bigger. More characters per film, even larger giant ships crashing into even larger things: buildings, planets.

As web developers and designers, we've been doing the same thing. This is the growth of web pages from archive.org, which takes a snapshot of the web as much as it possibly can. You can see here that JavaScript has exploded over the last five years. This is my frowning face. You can also see the rise of web fonts at the top there in the last couple of years. However, even this ballooning JavaScript-and-web-font nonsense pales into insignificance next to images. A great deal of our page weight is images. As Rubyists, this stuff sits in that weird halfway world between us and the front-end people; or, if you're someone who does the front end as well, it's simply your problem. Our pages have got three and a half times bigger in five years. That's bonkers.
The average page is two megabytes big. The average page. Thankfully, now that we live in the future, this is no longer a problem. That is true, from a certain point of view. We have a regulator for broadband in the UK, and we've gone from five megabits to just over 20 megabits in those same five years. So you're like, okay, pages are three and a half times bigger, and we've got 4G now, so everything's cool. Obviously, high speeds all round, as we all know from our own experience, is not the whole story. Only a third of connections in the UK are high speed, and even amongst those, at peak times only 10% of people are actually seeing the speed they're paying for. This is why Netflix doesn't work in my house on a Friday. Smartphone connections have also increased from 20% of the market to 60%. Basically, as we've been making pages three and a half times bigger, our networks have got nearly that much faster, but everything is a lot more shambolic in terms of the network connections. Things are harder. Everyone's used a phone and gone, "why isn't this damn website loading?" Well done, everybody. We've done a really brilliant job.

There are countless examples of speed being a good proxy for how customers work with your service. Famously, Amazon are able to measure differences in page load time financially, in the size of the carts that people check out with. I've personally seen similar things at HouseTrip: we make pages faster, users get further down the funnel towards buying their holidays. It's actually helpful for people. All this talk of cash, however, is slightly unseemly for a Ruby conference. What if it's just about doing the right thing?

There was a guy called Chris who was in a meeting at YouTube, listening to his senior engineer rant, as senior engineers are wont to do. He thought he'd have a go at getting the 1.3-megabyte YouTube video page under 100K. He called this project Feather, and it took a few days. Getting down to 250K was easy. He dug into the code. He began to optimise the HTML, the CSS, the JavaScript. After three painstaking days, he was still not under the 100K limit. Pleasingly, he'd also just built the HTML5 video player, so he dropped that into the page in favour of the enormous Flash monstrosity they had before. Boom! 98K, only 14 requests. He added some basic monitoring, because it's Google, and launched it opt-in to a fraction of Google's traffic. They collected some data and the numbers came back, and they were completely baffling. The average aggregate time to video view under Feather had increased. Not just increased a tiny bit: increased a lot. The total page weight was a tenth of what it was, and somehow the numbers were showing that it was taking longer for people to get to viewing the videos. Nothing made sense. Cats and dogs living together, mass hysteria.

He was about to give up on the project when his colleague discovered the answer: geography. When they plotted the data geographically, there was a disproportionate increase from places in South East Asia, South America, Africa, even Siberia. Even then, the average load time of the Feather page was over two minutes. This meant that the regular video page, at one-point-something megabytes, was taking more than 20 minutes to load. Entire massive populations of people had simply not been able to use YouTube, because it took too long. So despite the two-minute load time of the Feather page, watching a video actually became a real possibility for those people. Large numbers of people who were unable to use the service were suddenly able to.
So all these stories are well and good, Andrew, but what can we do? So, that's it. Thank you very much. There are various things we can do. We can test things properly. Good start. Test things on actual devices. Also good. This is an example of a programme at Facebook where they're trying to think about the privilege they have as Facebook developers, if only in terms of their internet connection. On Tuesdays at Facebook now, they have a banner across the top of the Facebook website that says: do you want to experience this page as if you were on a 2G connection? It's a bloody good idea. When we are developing, typically we are wired in, we have nice shiny internet connections, we have high speed. We're not thinking about users who don't have connections as good as ours.

So, we're going to do some responsive image code here. I used to be a front-end guy; I still do a bit. Inevitably, I'm the guy who gets the CSS on an existing Rails project. Do not be scared. I am here to hold your hand. There will not be a test.

This is new syntax; some of you may be familiar with it. It just looks like a normal image tag, but you'll notice there's a srcset attribute. This acts as a suggestion to the browser that, if you have a high-resolution screen, there's a high-resolution version of this image available. It's a suggestion for the browser, not a compulsion to load these things, but it stops us doing that crazy thing where you serve a two-times image and then just box it in using the width and height. The best thing about HTML, of course, is that if a browser doesn't understand something, and in this case that's old IE, not the new fancy IE from Windows 10, it just ignores the stuff it doesn't understand. You might have heard your friendly local CSS nerd use the term progressive enhancement, and this is an example of that.

Getting a little bit more complicated: via srcset here, the browser knows the resources available. It says if the image there, the poster, is rendered 600 browser pixels wide, load the 600 version of the JPEG; if it's 1200 pixels wide, load the 1200 version. The sizes attribute provides the width of the image at a given window width, so it's like a CSS media query. In this case, it says if the screen has a minimum width of 640 pixels, the image is going to take up 60% of the viewport width, which is what the vw stands for; otherwise, it's going to take up the whole width. So on smaller devices this image takes up the whole width, and the browser can interpret this stuff to find the right images. As you can see, only bad old IE doesn't understand this, but even then it drops back to the src on the image tag.

This is where we start to get a little bit crazy. Do not try to read this; do not note this down. This allows you to provide alternative sources. So in the world of responsive web design, when you resize the browser and there's a drastic change in page layout, you can provide, say, a portrait image at certain sizes, using source tags inside a picture tag. This is supported well in everything except Safari. There are JavaScript polyfills like Picturefill and stuff like that, but this stuff is coming into browsers now. You can also serve multiple formats. For example, Chrome, out on their own, support an image type called WebP, which is a very well-compressed format, and you can just provide a MIME type and an alternative source inside a picture tag as well. So this works in Chrome, Firefox and Opera, and falls back.
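I can't reproduce the slides here, but the markup being described looks roughly like this; the file names, widths and breakpoints are invented for illustration rather than taken from the talk.

```html
<!-- srcset as a hint: "if you're on a high-resolution screen, a 2x file exists" -->
<img src="villa.jpg" srcset="villa.jpg 1x, villa@2x.jpg 2x" alt="A villa">

<!-- Width descriptors plus sizes: the browser picks the best candidate for
     how wide the image will actually be rendered. Above 640px the image is
     60% of the viewport width (60vw); otherwise it is full width. -->
<img src="poster-600.jpg"
     srcset="poster-600.jpg 600w, poster-1200.jpg 1200w"
     sizes="(min-width: 640px) 60vw, 100vw"
     alt="A poster">

<!-- picture: an alternative format (WebP, offered by MIME type) and an
     art-directed portrait crop for narrow layouts, with a plain img fallback. -->
<picture>
  <source type="image/webp" srcset="poster.webp">
  <source media="(max-width: 640px)" srcset="poster-portrait.jpg">
  <img src="poster-600.jpg" alt="A poster">
</picture>
```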
And again, in other browsers it falls back to the plain image. This is all non-destructive enhancement for your users. If you really want to blow your mind and have a little cry at work one time, you can read the article on the Opera website. The fact is, we are going to need to start providing a lot of appropriately sized imagery if we want fast pages.

So this, I'm pretty sure everybody has in one of their Rails apps. This is genuine code from HouseTrip; I apologise. This is Paperclip, and this is how you specify an attachment on a model. To be honest, I've actually tidied this up and taken a couple of bits out because it wouldn't fit on the slide. I'm pretty sure everybody's seen this. Show of hands for anyone who hasn't seen something like this? Good. Blank faces everywhere.

So, to get to the point: this is all about serving images at the right sizes, and not having to go through that step you do with things like Paperclip and CarrierWave, where you upload an image and then pre-prepare lots and lots of different sizes. That presumes you know what's coming down the line: what design changes you're going to have, what image dimensions you're going to need in the future.

So at HouseTrip I took on a sort of little side project. Bear in mind the code on the previous slide: every time we add a new image size (HouseTrip is a holiday website, so it's lots of pictures of people's houses and flats and villas), we have to generate all of those images, which obviously makes our architecture cry, because when you're shopping for a holiday you're mostly looking at images. So I had these constraints. We have the original large images already available on S3; we know where they are. We don't do anything funky with resizing: we just resize and crop, and expect the image to fill the size we've asked for. We serve everything from a CDN, because we're not crazy. And for me, I wanted to see if it would be a small enough service to deploy on Heroku, and this is the less-code track, so thinking small seemed like the right idea. Also, we didn't want to do any big-bang launches: we didn't want to switch off the old thing, switch on the brand-new image thing, and have it all fall over.

So, sensibly thinking small, I grabbed hold of Sinatra, and I also grabbed hold of Dragonfly, which is a very nice, well-written wrapper around the Unix command-line craziness of ImageMagick. ImageMagick, you might remember, is the thing that constantly caused your app to fail to compile and caused deployment problems between the years of 2010 and 2012, as far as I can remember. That's the ImageMagick logo, the little wizard. God bless.

So it's really simple code. It's three gems, and one of them is only there because I wanted a multi-threaded web server on Heroku. Inside your app you configure Dragonfly; very simple, nice syntax there. I'm the kind of nerd who thinks in URLs, and this is basically the simplest API to this service that I could think of. Those little bits at the beginning are the constraints on the image, a geometry string in ImageMagick terms. The first one is a fixed width of 400: resize the image to 400 wide, and whatever keeps it in proportion height-wise. The next one is fixed height, keeping the width in proportion. The last one, not exactly true, fixes both.
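The real HouseTrip code isn't in the transcript, but a minimal sketch of the kind of Sinatra-plus-Dragonfly service being described might look like the following. The gem choices match the talk (Sinatra, Dragonfly, and Puma for a multi-threaded server); the URL scheme, names and validation are my own simplification rather than the actual app.

```ruby
# Gemfile, roughly: gem "sinatra"; gem "dragonfly"; gem "puma"
require "sinatra"
require "dragonfly"

# Dragonfly shells out to ImageMagick to do the actual resizing.
Dragonfly.app.configure do
  plugin :imagemagick
end

# Accept "400x" (fixed width), "x300" (fixed height) or "400x300" (fill both).
GEOMETRY = /\A(\d+x|x\d+|\d+x\d+)\z/

get "/:geometry/*" do |geometry, source|
  halt 404 unless geometry =~ GEOMETRY
  # "Resize and crop to fill" is spelled with a trailing '#' on the geometry.
  geometry += "#" if geometry =~ /\A\d+x\d+\z/

  # The rest of the path is presumed to be where the original lives,
  # for example a bucket on S3, fetched over HTTP.
  Dragonfly.app
           .fetch_url("https://#{source}")
           .thumb(geometry)
           .to_response(env)
end
```

A request like `/400x300/my-bucket.s3.amazonaws.com/villa.jpg` would then fetch the original from S3, resize and crop it, and hand the result back as a Rack response.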
With that last one, you basically resize the image to 400 by 300 and fill as much of that space with the image as you can. So I built a little utility class inside my Sinatra app that basically checks that the string is valid, and then adds the little hash on the end to make it understandable by ImageMagick. And one single route. It checks the validity of the size geometry string, then takes the rest of the path and presumes that's where you go and get the original image over HTTP, then resizes it and turns it back into a response that Rack understands. So, pretty straightforward code.

However, now we get into the architecture of how I'm putting this together. Simply put, we're letting the internet take the strain, in a way that we all pretty much do already, and getting this logic away from the application. These are just the services that I chose to use: we already had images on S3, we were already using Heroku; other CDNs are available. This is my super-simple "can I build a small version of this?" architecture. You have your devices browsing the modern internet. The devices make a call to the CDN, which calls the Sinatra app. The Sinatra app runs off and gets the S3 image, takes it in, transforms it, serves it back to the CDN, and the CDN serves it back to the device. Subsequent calls for that URL, obviously, because it's a CDN, only hit the cache there. I put it live on GitHub. No stars. No, two stars. Sweet. There's somebody giving me a slow hand clap there. That's very good.

So how does it perform? You basically just grab any old image from the internet. I'm so excited about Star Wars. In this case, it's a photographic image from a trailer: 1.1 megabytes, about 2,000 pixels wide. It's a decent size of image, fairly typical of what we were getting at HouseTrip for the uploaded images. In order to test this, I take my lovely sexy architecture and remove some of it: you put the image directly onto S3 and hit the app without the CDN. Anyone know what this is? Any guesses? This is Apache Bench, a command-line utility which you can use to basically hammer your website. It's a great little tool. It's fairly unsophisticated; it comes with the standard Apache distribution, so basically, if you have a Unix-based laptop, you have this command-line program. Hammers are not sophisticated, so let me preface all of my testing with that. This command basically says: do 1,000 requests with a concurrency of 20, so do 20 at a time (the invocation is sketched below). Basically I'm trying to hammer my Heroku app without the CDN to see how it performs: say you've got a whole bunch of new images arriving, just hammer that app and see how it behaves. If your Wi-Fi was slow when you arrived at the hotel yesterday, that was probably me.

These are the stats for taking that original 1 MB image and producing a 500-pixel-wide version and a 1,000-pixel-wide version, so roughly a third and two-thirds the size. Watching the logs as this was happening, there was quite a large variation in the time it took, but even as I expanded out, doing a couple of tests with more requests or shorter runs, the average time was about the same, even as I added more workers on the larger dyno sizes. These different graphs are the various sizes of dyno you can pay for on Heroku, and we can see that actually, on Heroku at least, there doesn't seem to be much of an advantage to the more expensive stuff. It's possible I'm doing something wrong here.
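To go back a step, the Apache Bench invocation described above is along these lines; the hostname and image path are placeholders rather than the real app.

```sh
# 1,000 requests, 20 at a time, straight at the Heroku app (no CDN in front),
# asking for a 500-pixel-wide version of the test image.
ab -n 1000 -c 20 "https://my-image-app.herokuapp.com/500x/my-bucket.s3.amazonaws.com/trailer-still.jpg"
```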
You should probably ask the Heroku people whether I am doing something wrong; I certainly will be. But certainly at the lower end, the free, hobby and 1X dynos are all basically the same machine, and they're all somewhat subject to the whims of the other people you're sharing the servers with. I did notice, doing some more tests last night compared with the afternoon, that the times were different, but the relationship between the different dyno sizes stayed about the same. So depending on how much you're sharing your resources, it made a difference. But the main point is that these response times are feasible as response times. They're not brilliant, but they're not "oh my god, it's going to take forever to transform these images as I need them".

And the benefits of using the right-sized images for the end user, in terms of file size, are really not bad at all. Rather than serving an original image and letting the browser shrink it down, if you serve the appropriately sized image you're looking at something like 10% of the size. It makes sense. So, not bad, not bad at all.

One of the benefits, however, of the human eye is its fallibility. It's the reason we have JPEG compression, that sort of stuff. We can do better. Well, actually, somebody else can do better and I can copy them. A much more intelligent man than I, Dave Newton, spent a lot of time learning how ImageMagick works, and these settings, simply typed into your command-line program, give us much, much better compression than the standard ImageMagick resize. How much better? A lot better: 15% of the size of the original naive resize, which is huge.

So I re-ran my benchmarks. Obviously some time elapses when you're doing 1,000 image requests over hotel Wi-Fi, but it looks like it takes about two to two and a half times as long to do the more detailed compression, for a saving of 85% of the file size at the end of the day. The other thing I noticed while benchmarking is that there's a lot more variation in the image resizing times. Some of them are really slow, really slow; some of them are very fast. They all produce exactly the same file, so it's completely repeatable, but the variance is quite big, even on the crazy large performance dynos. Equally, I think what you're actually seeing in those graphs is the progression of time into the evening, which is why it goes up across the free, hobby, 1X and 2X dynos. The resize time also doesn't really seem to depend on the size of the image; that seems to matter less, so you tend to get more predictable average performance.

So does the slower performance matter, if you're getting the image sizes down? I guess the answer is yes and no. No, because we're using a CDN in the final solution, which gives us protection against this, and the file size benefits are then available to everybody, and they're brilliant. Also no, because you can always prime your CDN: you can run a script that requests all of these things and caches lots of the images before users eventually hit them, so you don't accidentally start a denial of service on your own image server.
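As a sketch of what priming the CDN could look like, assuming a URL scheme like the toy service above (the CDN host, the sizes and the file of original image paths are all hypothetical): walk your known originals and sizes and request each one through the CDN, so the cache is warm before real users arrive.

```ruby
require "net/http"
require "uri"

CDN_HOST  = "images.example-cdn.com"   # hypothetical CDN in front of the app
SIZES     = %w[400x x300 400x300]      # the sizes the site actually uses
ORIGINALS = File.readlines("originals.txt").map(&:chomp)

ORIGINALS.each do |original|
  SIZES.each do |size|
    uri = URI("https://#{CDN_HOST}/#{size}/#{original}")
    # A GET through the CDN both checks the resize works and leaves the
    # result cached at the edge for real users.
    response = Net::HTTP.get_response(uri)
    puts "#{response.code} #{uri}"
    sleep 0.1 # be gentle: the point is not to DoS your own image server
  end
end
```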
However, when I passed another completely random image from the internet, which I'm not at all excited about, to the image server, put it on S3 and tried to run it, it made everything fall over. This image is 6 MB, 3,500 pixels wide by 1,400 or so, which is getting towards where we are with some of our photography at HouseTrip. It just made everything fall over, even the powerful performance dynos. At this point I'm thinking I could throw more powerful CPUs at the problem; I could set up some kind of graphics-oriented EC2 instance on AWS. But whilst that's an interesting exploration for a guy preparing his talk, in real life there are already solutions to some of this.

Refile is from Jonas Nicklas, the author of CarrierWave. It's pretty similar in concept; it's just got the benefit of five years of learnings from writing CarrierWave and building that out. It's broadly very similar when you throw it into your Rails app or your Ruby app, and it operates in much the same way. It kind of winds its tendrils deep into your ActiveRecord models, which I think is a bad idea in general. Attache is from a buddy of mine in Singapore, Choon Keat, and behaves a lot more like the solution that I built, very simply. It's a lot more fully featured: it does uploads, which I completely threw away for this particular experiment, and it's got a really nice API. He's super keen to see if he can build an open-source alternative to the third-party services.

So all of these are pretty decent, and they're relatively inexpensive. If you're starting a greenfield project and you're thinking about images, use one of these. It will take a huge amount of pain out of your life in three years' time. They're pretty reasonably priced for anyone who's actually trying to run a business, and Cloudinary in particular has a good free plan as well.

So, in summary: be fast. Limit your assets, deliver what is needed to the browser, and make it as small as it can be. And think about what your app actually needs to know about. This is one approach to microservices that doesn't involve queues and stuff like that; it's just utilising the internet for things the internet is actually quite good at. If I were to speak more broadly than just images: speed is a feature of any software you build, and we're talking speed over the network as much as speed of your actual code on any machine. And you need to think about your users' devices. And with that, thank you very much. Any questions, which I must remember to repeat when you ask them?

Am I using the code live? No, is the answer. I think the benchmarking I did as part of this proved we'd need a slightly more robust solution than my initial hundred lines of code. It's an interesting thought experiment, and certainly something we're thinking about in terms of maybe breaking out into a slightly more sophisticated app rather than the very simple thing that I built.

My recommendation for caching and CDNs: do it. Don't serve assets from your Rails app; that's crazy talk. Particularly on a platform-as-a-service like Heroku, it's not so great. Most decent caching services, say CloudFlare or even CloudFront from Amazon, let you expire things on a fairly granular basis, so that's not the problem it once was. So basically my suggestion is just: cache for as long as you possibly can.
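On the "cache as long as you can" point: in a Sinatra app like the sketch earlier, that's just a matter of sending long-lived cache headers so both the CDN and the browser hold on to the result. A minimal illustration, with the one-year figure picked arbitrarily:

```ruby
get "/:geometry/*" do |geometry, source|
  # A given URL always produces the same bytes, so let the CDN and the
  # browser cache the result for a long time.
  cache_control :public, max_age: 31_536_000 # one year, in seconds
  # ... fetch, resize and serve the image as before ...
end
```

Note that if the route returns a raw Rack response (as the Dragonfly sketch does), you'd set the Cache-Control header on that response instead; the helper shown assumes Sinatra is building the response itself.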
Most of the CDNs are pretty good at providing the correct headers to the browser so that the local browser caches them as well. But yeah, basically do it. Don't serve images from your app. Cheers. Thank you very much.