I run a startup called Dexsecure. We're from Singapore; we started it around two or three years ago, and we specialize in web performance — that's how I got into this area. We build tools that automate the process of speeding up your website, and image optimization is one of the things we do for you, which is how I got into optimizing images and websites in general. To set the context for the workshop: I know image optimization can be an arcane subject, with a lot of terms you need to understand before you start working with different tools. So the whole point of this workshop is to give you that context, explain what the different terms mean, and play around with a few tools. This is my first time giving this workshop, so I'm actually not sure how long it's going to take, but it's pretty free-flowing — if you want me to focus on something else, or play around with something else, just let me know and we can continue from there. The general idea is to give you the basics; you won't be able to implement your own image encoder at the end of the workshop. I'm coming at it from a web-developer standpoint: if you're building a website and want to optimize your images, you can do better than just running everything through an encoder that compresses all your images to a fixed quality. Most companies do exactly that, even the big ones — they have a build pipeline that runs every image through it, and out comes an optimized image compressed to the same quality. It's a start, and it's good enough for some cases, but you can go a step further. So in this talk I'll cover the different terms, the different image formats, and the trade-offs between them, and I'll go a bit into what each image format looks like from the inside.
That way you'll understand which compression techniques each format uses and why they matter so much for each format. As I said before, these are the different tools you'll need — if you're having trouble installing something, just let me know and I'll help you set it up, or if there's some version issue I can probably help with that too. If you have any questions, just raise your hand; feel free to stop me if I'm not making sense or if I'm talking too fast. So, why image optimization? I've been focusing on web optimization as a whole and image optimization in particular, and we've spent quite a bit of time building tools around this. If you were at Ketina's talk yesterday, I think she did a good job of explaining why you need to optimize the different parts of your website. If you look at web optimization purely in terms of the number of bytes sent — which isn't a great metric on its own, but you should at least check that all your assets are properly optimized — images and video contribute the most to how big your website is. Again, a big website isn't automatically a slow one, but trimming it is one of the basic things you can do to speed it up, and it's especially important in countries with slower internet connections: every byte you save means your site loads faster and people can at least start using your experience. There are tools that actually show this — I'll put up the link later, but basically there's a website that shows how much it costs to download a megabyte of JavaScript in different countries. In Singapore, at least, we don't really care what it costs to download an extra MB.
But in other places, it can cost as much as 20% of a daily-wage worker's hourly rate just to browse your experience. If accessing your website is that costly, people aren't going to be able to use it at all. That's why image optimization is so important. On the size of images: if you look at something like the HTTP Archive, which monitors different trends on the web, you can see that images take a big chunk of the data a typical website downloads — not just JavaScript. What they do is run the top 100,000 websites through a tool called WebPageTest, which captures different performance metrics, and they show how much JavaScript and how much image data is being downloaded over time. It's a good way to see how the web as a whole is performing, and why you need to start optimizing your images as well. So first, let's start with the theory part. This might get a bit dry — just let me know if you want me to talk about something else. I'll give you an overview of the different image formats and the trade-offs each one is making. Believe it or not, the oldest image format still in wide use is the GIF. It's been around for 20 or 30 years now, and it's a pretty old format — the encoding mechanisms and compression techniques used in a GIF are dated, which is why web performance people do not usually like it. By the way, the repo is over here, so you can follow along on your own computer as well. So yeah, this is one of the oldest image formats — it predates the web itself — and it supports animation and transparency.
But the actual algorithm used to compress the data is pretty bad. That's why web performance people tend to push you toward other formats — I'll talk towards the end of the workshop about how Safari is nudging you to use videos instead of GIFs, again for performance reasons. You can follow along at this particular link as well. This is how a standard GIF looks if you actually open one up — this is from the GIF specification. It starts with a header, and then the file is a series of blocks, each representing some part of how the GIF is supposed to behave. The header, represented in ASCII, reads "GIF": the first three bytes of any GIF are 47, 49, and 46. There are actually two versions, 89a and 87a, but 87a isn't used much anymore — these days it's pretty much only 89a. The next part is the logical screen descriptor. It contains a lot of different things, but the most important one is the global color table. Why does that matter? As you might guess, a GIF is made of different frames — in that sense it's sort of like a video — and the colors used in each frame can be defined in the global color table. The key limitation is that this table can hold only 256 entries, so a GIF cannot have more than 256 colors in it. Back then, 256 colors was a big deal — why would anyone need more? But these days even the sRGB range isn't enough.
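To make those first blocks concrete, here's a rough Python sketch that parses just the header and logical screen descriptor. The hand-built bytes at the end are a made-up minimal example for illustration, not a real GIF file.

```python
import struct

def parse_gif_header(data: bytes):
    """Parse the first two blocks of a GIF: the header and the
    logical screen descriptor (a sketch; real GIFs have more blocks)."""
    signature = data[0:3]          # always b"GIF" (bytes 0x47 0x49 0x46)
    version = data[3:6]            # b"89a" or the older b"87a"
    if signature != b"GIF":
        raise ValueError("not a GIF file")
    # Logical screen descriptor: width, height (little-endian), packed flags
    width, height = struct.unpack("<HH", data[6:10])
    packed = data[10]
    has_global_color_table = bool(packed & 0b10000000)
    # Table size is stored as an exponent: 2^(n+1) entries, capped at 256
    table_size = 2 ** ((packed & 0b00000111) + 1)
    return signature + version, width, height, has_global_color_table, table_size

# A hand-built 2x2 GIF89a header with a 4-entry global color table
header = b"GIF89a" + struct.pack("<HH", 2, 2) + bytes([0b10000001, 0, 0])
print(parse_gif_header(header))   # (b'GIF89a', 2, 2, True, 4)
```

Note how the 3-bit size field tops out at 2^8 = 256 — the 256-color limit is baked right into the file format.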
People are coming up with wider color gamuts, like Display P3, that can represent a lot more colors. This actually matters: suppose you're shopping online and you want to buy a shoe, and the shoe displayed on the webpage looks a different color from the one that arrives — your customers are going to be pissed off. Accurately representing color on the web is genuinely hard, and that's one of the reasons you need to be aware of which format you choose. If you pick the wrong format — a GIF, for example — you can't use more than 256 colors. So that's the global color table: each time the encoder encounters a new color in the GIF, it adds it to this table. There are also extensions where you can declare things like: is this GIF going to use transparency? Is it going to be animated? (You can technically have a single-frame GIF.) Those are the flags that represent that. I won't go too deep here — I'll stick to the parts of the specification that are actually going to impact you as a developer. One is the 256-color limit. The second is simply that GIF is bad for performance: a GIF is literally represented as a series of frames, with each image stored separately. Videos don't work that way — if there's one video frame and then another, the second frame just encodes the difference from the first, because video is usually smooth: each frame mostly has the same pixels as the previous one. GIFs don't do that. If you have a 30-frame GIF, each frame is encoded separately, which is one of the reasons GIFs can get quite large compared to video files.
The local color table is pretty much the same idea: instead of using the global table, each frame can define its own color table and reference colors from it. After all of that, you still have a stream of bytes to compress, and GIF uses a compression technique called LZW, which is a pretty old algorithm. There are much better algorithms these days, like DEFLATE, which is what the more modern image formats use. The other interesting thing about LZW is that it used to be encumbered by a patent — a company called Unisys held it. So technically, if you were using a GIF during the period the patent was active, you owed them royalty fees, which of course nobody wants. That caused a lot of anger; people even declared a day to "burn all the GIFs", trying to get everyone to stop using GIFs on the internet. To solve the problem, they started working on a different image format, and that's how PNG came about. PNG is not encumbered by any patents — it's totally free. Imagine having to pay someone every time you encode an image as a PNG; that was essentially the situation for GIFs back then. The GIF patents have since expired, so it's no longer an issue, but PNG came out of that episode. There's also a plain text extension — you can technically put text inside GIFs — but no browser or image decoder really supports it, so I won't go into it. And there are comments, where you can put arbitrary data like "generated by Photoshop" or your copyright information. So that's basically GIFs. The problems with GIF are what led to the creation of PNG.
People wanted an open format that anyone could use without paying anyone. One of the things the creators of PNG decided was that they didn't want animation inside PNG itself, so they created separate formats: MNG, which is now defunct, and APNG, animated PNG. They were sort of purists that way — they didn't want to mix static and animated images — which in hindsight probably wasn't the best call. PNG is a lossless image format. There are two kinds of image formats: lossy and lossless. Lossless means that when you take your raw image and encode it, no information is lost — you can recreate every pixel exactly. For example, suppose the bytes are 0, 0, 0, 0: you can represent that as "four zeros", and you can always get back exactly the original from that representation. Those are the kinds of techniques lossless compression algorithms use, so no color or pixel information is lost during encoding. Since PNG is lossless, you can re-encode a PNG as many times as you like and never lose quality. PNG is also good for cases where image fidelity matters a lot more. That doesn't mean JPEGs can't look good, but certain kinds of images are better suited to PNG — drawings, for instance: if you took a photograph of whatever I drew over there, it would probably be better represented as a PNG than as the other formats. PNG supports transparency but, for the legacy reasons above, not animation. The APNG format has recently started getting traction again — a few browsers support it now; these things keep changing, but I believe both Firefox and Chrome support it.
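That "four zeros" idea is essentially run-length encoding, about the simplest lossless technique there is. A toy sketch showing the exact round trip:

```python
def rle_encode(pixels):
    """Run-length encode a list of values: [0,0,0,0] -> [(4, 0)]."""
    runs = []
    for p in pixels:
        if runs and runs[-1][1] == p:
            runs[-1] = (runs[-1][0] + 1, p)   # extend the current run
        else:
            runs.append((1, p))               # start a new run
    return runs

def rle_decode(runs):
    """Expand runs back into the original values — no information lost."""
    return [value for count, value in runs for _ in range(count)]

data = [0, 0, 0, 0, 7, 7, 3]
encoded = rle_encode(data)
print(encoded)                       # [(4, 0), (2, 7), (1, 3)]
assert rle_decode(encoded) == data   # lossless: the round trip is exact
```

Real lossless codecs (DEFLATE in PNG, LZW in GIF) are far more sophisticated, but the defining property is the same: decode(encode(x)) == x, bit for bit.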
So for animated content, the lesson is basically: don't use GIFs — see if you can use animated PNGs, animated WebPs, or some other format. Now, the way PNGs are encoded is based on chunks, and every chunk has the same layout regardless of its type. That makes writing a PNG decoder relatively easy, because you can keep adding chunk types. Each chunk has a length, a type, the actual data, and a CRC — a redundancy code that just makes sure the chunk hasn't been corrupted. There are two main categories of chunks in PNG: critical and non-critical (ancillary). A chunk whose type starts with a capital letter is critical, which means the decoder must fail if it sees a critical chunk it doesn't understand. Say you write a PNG decoder and don't implement one particular critical chunk — if you get a PNG containing it, your decoder has to give up. So there's a header chunk; PLTE holds the palette, which is similar to the global color table in GIFs — it lists all the colors in the PNG, so every time there's a green pixel, the data can just point into the palette and say "I want the second color". It's simply a more compact way to encode colors. IDAT holds the actual image data — the image data of a PNG can be split across multiple IDAT chunks — and IEND marks that the PNG is finished. There are a lot of ancillary chunks as well, which I won't get into. For performance, the main thing to think about when you encode a PNG is what type of PNG it is. One type is the indexed PNG, where each color is stored in the color palette.
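The chunk layout is regular enough that a chunk walker is only a few lines. This sketch builds a minimal fake PNG by hand (the `make_chunk` helper is mine, for illustration) and then walks its chunks, verifying each CRC along the way:

```python
import struct
import zlib

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def make_chunk(ctype: bytes, data: bytes) -> bytes:
    """Assemble one chunk: 4-byte length, 4-byte type, data, 4-byte CRC."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def iter_png_chunks(data: bytes):
    """Walk every chunk in a PNG byte string, yielding (type, length, critical)."""
    assert data[:8] == PNG_SIGNATURE, "not a PNG"
    offset = 8
    while offset < len(data):
        (length,) = struct.unpack(">I", data[offset:offset + 4])
        ctype = data[offset + 4:offset + 8].decode("ascii")
        chunk_data = data[offset + 8:offset + 8 + length]
        (crc,) = struct.unpack(">I", data[offset + 8 + length:offset + 12 + length])
        # The CRC covers type + data, so a decoder can detect corruption
        assert crc == zlib.crc32(ctype.encode() + chunk_data)
        # A critical chunk's first letter is uppercase
        yield ctype, length, ctype[0].isupper()
        offset += 12 + length

# Signature, a dummy 13-byte IHDR, and the empty IEND terminator
png = PNG_SIGNATURE + make_chunk(b"IHDR", bytes(13)) + make_chunk(b"IEND", b"")
for ctype, length, critical in iter_png_chunks(png):
    print(ctype, length, critical)   # IHDR 13 True / IEND 0 True
```

The fixed per-chunk framing is exactly why PNG is so extensible: a decoder can skip any ancillary chunk it doesn't recognize just by reading the length field.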
So every time there's a green pixel, as I mentioned, the data just points into the table: "the green is stored here." A PNG can also use different bit depths per channel. For example, suppose you want to represent black and white: you can encode black as 0 and white as 1, using a single bit to say which it is. That gives you very few color variations. With two bits you could add in-between shades — a light black, a slightly darker one — giving you intermediate steps between black and white. Color on the web is usually the familiar RGB with values from 0 to 255: 256 shades of red, 256 of green, 256 of blue, which is 8 bits per channel. For each of these PNG types you can choose different bit depths, and you can compress a PNG by converting between them. Say you have a purely black-and-white PNG that happens to be stored as truecolor, or truecolor-with-alpha: by converting it to a grayscale PNG you lose no information — again, PNG is lossless — and you encode it in fewer bytes, because you're no longer storing separate red, green, and blue channels. The same goes for alpha. You can have a transparent black-and-white image; truecolor means having all 256 × 256 × 256 colors available, which is quite a number of colors, in three channels (RGB), and truecolor-with-alpha adds a fourth channel for transparency. If your PNG isn't transparent, you don't need to save it with the alpha channel.
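To see how much the color type and bit depth matter, here's a quick back-of-the-envelope calculator for the raw (pre-DEFLATE) pixel data size. The formula is just scanline math — bits per pixel packed into bytes, plus PNG's one filter byte per row — so it's an estimate of what the compressor has to chew through, not the final file size:

```python
def raw_bytes(width: int, height: int, channels: int, bit_depth: int) -> int:
    """Uncompressed pixel-data size for a given PNG color type."""
    bits_per_pixel = channels * bit_depth
    # Each scanline is packed into whole bytes, plus one filter byte per row
    bytes_per_row = (width * bits_per_pixel + 7) // 8 + 1
    return height * bytes_per_row

w, h = 800, 600
print(raw_bytes(w, h, 4, 8))   # truecolor + alpha: 1,920,600 bytes
print(raw_bytes(w, h, 3, 8))   # truecolor (drop an unused alpha): 1,440,600
print(raw_bytes(w, h, 1, 8))   # grayscale: 480,600
print(raw_bytes(w, h, 1, 1))   # 1-bit black and white: 60,600
```

Dropping an unused alpha channel alone cuts the raw data by a quarter, and a genuinely black-and-white image at 1 bit per pixel is roughly 32× smaller before compression even starts — all with zero quality loss.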
Saving it as plain truecolor means fewer bytes to encode. There are a few other things PNG does internally that I find interesting. As I mentioned before, video tries to predict each frame from the previous frame; PNG does something similar within a single image. In most images, a pixel's color depends heavily on the pixels surrounding it — take a photo of this room, for example: the white wall is one region of similar pixels, the glass behind it another, the marker a cluster of black pixels. So PNG tries to predict each pixel from its surrounding pixels, using a set of filters. This is another way to compress PNGs: an image encoded with one particular filter might come out smaller when re-encoded with another. Take the Average filter, for instance: it predicts a pixel from the average of the pixel to its left and the pixel above. So with 63 and 55, the average is 59 — and since this pixel was exactly 59, the stored value becomes zero. What gets written is the difference between the pixel and the average of its left and top neighbours. You might ask: I'm storing the same data, just as different numbers — what's the point? The point is that for data compression you generally try to make the numbers smaller, and to make them repeat. Like those four zeros: it's easy to represent them as "zero repeated four times". The same logic applies to PNGs.
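Here's a simplified sketch of that Average filter on a grayscale grid. (The real PNG filter works byte-by-byte along each scanline, offsetting by the bytes-per-pixel; this per-pixel version just shows the idea.)

```python
def average_filter(rows):
    """PNG-style 'Average' filter: each value stores the difference
    from the mean of the pixel to its left and the pixel above it."""
    filtered = []
    for y, row in enumerate(rows):
        out = []
        for x, value in enumerate(row):
            left = row[x - 1] if x > 0 else 0     # missing neighbours count as 0
            up = rows[y - 1][x] if y > 0 else 0
            out.append((value - (left + up) // 2) % 256)
        filtered.append(out)
    return filtered

# A smooth gradient filters down to mostly tiny, repeated numbers,
# which the DEFLATE stage then compresses very well.
rows = [[10, 11, 12, 13],
        [11, 12, 13, 14]]
print(average_filter(rows))   # [[10, 6, 7, 7], [6, 1, 1, 1]]
```

Notice the output: the raw values were all different, but after filtering most of them collapse into the same small numbers — exactly the "smaller and more repetitive" property the next compression stage wants.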
Instead of storing raw pixel values, it tries to predict each pixel and get as many zeros — or at least as many repeated small numbers — as possible. Once you have that, you can apply encoding mechanisms like Huffman coding to compress it further. I've been talking a lot of theory, so let's work on something practical; we'll come back to the rest a bit later. Let's start with some exercises that illustrate a few more concepts, around JPEG in particular. As I said, JPEG is a lossy format: every time you encode a JPEG, you lose some information. This leads to what's called generational loss. What that means: say you upload an image to Facebook — Facebook re-encodes it as a JPEG and compresses it — then you download it and upload the same photo somewhere else, say Twitter. There's some loss at each step, and as you keep doing this, the image looks worse and worse. So ideally your workflow should apply any lossy operation only once. Converting to JPEG is lossy, but so is, for example, resizing. You might think: first let me convert everything to JPEG, then resize, then maybe rotate by 30 degrees — but each of those steps is a lossy operation, so every step loses information. That's why you should see if you can combine operations; most encoders let you do this, so you encode once and save once. We'll be trying this out, but I wanted to show you the end result first. What's happening here is that the author is encoding the same image repeatedly, in several different image formats.
You've probably heard of WebP and JPEG; FLIF and BPG are newer image formats that aren't supported by any browser, so you probably haven't heard of those two, but WebP is supported by a lot of modern browsers, and JPEG obviously everywhere. What's happening here is that you can see the quality for each algorithm as the same image goes through successive generations — this is the same image encoded 88 times — and I don't know how pixelated it looks on the projector, but you can see it on your own computer: the JPEG becomes so blocky you can barely recognize it. The newer formats are supposed to handle this kind of generational loss better, which is why the creator of this format is essentially showing off that his format holds up. You can see the JPEG becoming more and more grainy — I'll explain why that happens a bit later. So here's what we'll do: I have the instructions here; you can take any image you want, encode it multiple times, and create a video out of it to see how this actually plays out. Has everyone installed the tools? Any problems with them? For the first exercise you'll mostly just need ImageMagick, plus ffmpeg if you want to generate the video. If you just want to look at the code, it's in the exercises folder, inside generational-loss. This is the script I used to encode the same image again and again — I'll show you how it works, and you can follow along: just go to the exercise folder and run the script; it should work. If you have any problems, I'll be doing it up here as well.
So you can watch on your own computer and follow along. I'll be using the Docker image — anyone have problems installing ImageMagick or ffmpeg? In the script, you just need to change this path to wherever your image actually is. What it also does is randomize the quality between iterations: instead of encoding the image at quality 80 every time, it picks a random number between 80 and 90 and encodes with that. You can see all the images have been encoded now. I took a random image off the internet — this is the original, and this is how it looks after 100 encodings. I'm switching between the two images; you can already start seeing the grain appearing. This looks really bad. Once you have these images, you can make a video out of them, which is what this part does — I copied this command out of some Stack Overflow answer, but basically it takes all the frames and stitches them together into a video; you can choose the frame rate to say how many frames go into each second. Let me install ffmpeg... looks like the Docker image has it. So it's creating the video — oh, it's done. While setting up the Docker image, I'm also mounting a volume so I can move files from the container onto my local machine and view the videos: I'm mapping the /volume folder to one of my local folders. Just change that to wherever you want the shared folder to be, and anything you put in /volume will be copied there.
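The workshop script itself is bash + ImageMagick, but the same loop is easy to sketch in Python with Pillow (assuming you have Pillow installed — this is a rough equivalent, not the actual exercise script). The synthetic gradient image just stands in for whatever photo you download:

```python
import io
import random
from PIL import Image

def reencode_generations(img, generations=100, qmin=80, qmax=90, seed=42):
    """Re-encode the same image as JPEG over and over, with a random
    quality each round, and record the file size at every generation."""
    rng = random.Random(seed)
    sizes = []
    for _ in range(generations):
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=rng.randint(qmin, qmax))
        sizes.append(buf.tell())          # bytes written this generation
        buf.seek(0)
        img = Image.open(buf)
        img.load()                        # decode before the next round
    return img, sizes

# A small synthetic gradient stands in for a real photo
src = Image.new("RGB", (64, 64))
src.putdata([(x * 4, y * 4, 128) for y in range(64) for x in range(64)])

final, sizes = reencode_generations(src, generations=50)
print(sizes[0], sizes[-1])   # note: size does not shrink monotonically
```

If you plot `sizes`, you'll see the same behaviour as in the talk: the file size wanders around rather than steadily decreasing, and with a fixed quality (set `qmin == qmax`) it locks onto an equilibrium after the first few generations.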
So here's the video it generated. I don't think it's very clear on the projector, but you get the idea. Try this out with different images and play with the actual number of iterations — here I'm just going from one to a thousand; see how it behaves when you increase it (the earlier video you saw had 9,000 generations). I'm choosing a random quality between 80 and 89 — see what changes if you widen that to, say, 50 to 100. There are a few interesting things I found while doing this. You'd think that compressing a JPEG again and again would not only degrade the quality but also shrink the file each time, right? Quality is going down, so surely it's at least getting smaller — but that's not actually the case. The initial file was around 141 KB. It does go down for a while, then sits around 79 KB, 83 KB, and hovers there. By generation 1000 it's 85 KB, but at generation 18 it was 74 KB — so in some cases the image is actually getting bigger again. That's why it's bad to encode a JPEG again and again: the quality keeps dropping, but the file size isn't reliably shrinking, and it's a lossy format, so it's just not recommended. What you'll also notice is that after some time it reaches an equilibrium: if you encode the same image again with the same quality number, nothing further happens to it — the image comes out identical; the encoding process changes nothing. That's why I'm randomizing the quality here. You can try it without the randomization.
Just run it with quality 80 every time and see what happens — instead of passing the quality variable, hard-code it to 80 or something. You'll see the image doesn't change at all; it stays 74 KB throughout. It's worth working out why exactly that happens — to understand it, you unfortunately have to dig into how JPEG works. You can see the other videos he's made as well. I'll explain what PSNR is later — it's basically a measure of how different two images are. You can see the WebP is barely recognizable, the JPEG becomes super grainy, and FLIF of course does well; BPG also seems to be doing well here. Notice the file size, too — it jumps all over the place rather than shrinking with each generation. Okay. I've talked about PNGs; now let me talk a bit about JPEGs. There are several components to how a JPEG works, and a lot of them matter for understanding why, for example, re-encoding a JPEG at the same quality changes nothing, or why it reaches an equilibrium at the end. One important concept, which you may or may not have heard of, is chroma subsampling. I'll try to explain this — if it doesn't make sense, stop me at some point. Chroma subsampling is a way to reduce the number of bytes stored in an image. Most modern lossy formats use some form of it — JPEG, WebP; PNG is lossless, so it doesn't. The basic idea is that the human eye is more sensitive to changes in brightness than to changes in color. If you remember your biology from a long time ago, the human eye has rods and cones: cones are sensitive to color, and rods are sensitive to brightness.
In our eyes there are a lot more rods than cones, which is why we're far more sensitive to changes in brightness than to changes in color. Modern algorithms exploit this to get away with storing less information: since we're not that sensitive to color, why store all of it? The idea is to remove some of the color portion of the picture and see if we can get away with it. That's what JPEG does. You've probably heard of the RGB color space — color spaces are basically how computers represent color. You can represent any color as a mixture of the primaries: red, green, and blue. Spaces like that are called additive color spaces, because you add multiple channels together to produce a new color. Then there's YCbCr: Y is the luma (brightness) channel, Cb is the blue-difference chroma channel, and Cr is the red-difference chroma channel. JPEG doesn't use RGB internally — it uses YCbCr, precisely because it wants to separate the brightness information from the color information. That's what enables chroma subsampling: keep the brightness channel intact, and compress the two color channels more aggressively. It's leveraging exactly the fact from before — we're more sensitive to brightness, not color — so we retain the brightness channel as-is and compress the other two. (Audience question: how do you get the other colors out of this — there's only blue and red? That's the thing — all possible colors can be reached by mixing these three channels; it's a simple mathematical formula to convert.)
Suppose you have an image where each pixel is some RGB value. To convert it to YCbCr, you just follow the formula — a matrix multiplication with some fixed constants — and it maps pixels from one space to the other. Using these three channels, anything you can get in RGB you can get in this color space as well. For example, look at this: this is the original image, this has only the Y channel, this has only the Cb channel, and this has only the Cr channel. You can see that most of the detail is actually captured in the Y channel itself; the other two add very little to the final picture. So what JPEG does is keep as much of the Y channel intact as possible while removing samples wherever it can from the two chroma channels — and how that removal happens is exactly what chroma subsampling defines. There are different modes of chroma subsampling, and I'll talk about a few of them; I've posted a link to an article that does a decent job of explaining this. So these are the different modes. Say you have a block of eight pixels — four across, two down. In the notation, the first number is always 4: it's just the width of the reference block. The second number is the number of chroma samples in the top row, and the last number is the number of new chroma samples in the bottom row. Subsampling is applied chunk by chunk: a JPEG image is first split into these blocks, and the chosen mode is applied to each one. And remember, this is applied only to the color channels — the Y channel isn't touched at all.
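The RGB-to-YCbCr conversion JPEG uses is just a weighted sum per pixel. This sketch uses the full-range BT.601 coefficients (the ones specified for JFIF/JPEG):

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 conversion from RGB to YCbCr, as used by JPEG."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return round(y), round(cb), round(cr)

# Pure white carries all its information in the Y (brightness) channel;
# the chroma channels sit at their neutral midpoint, 128.
print(rgb_to_ycbcr(255, 255, 255))   # (255, 128, 128)
print(rgb_to_ycbcr(0, 0, 0))         # (0, 128, 128)
```

Notice that green gets the biggest weight in Y (0.587) — the eye is most sensitive to green — and that for any gray, both chroma channels collapse to the constant 128. That's why a grayscale-looking image costs JPEG almost nothing in the chroma planes.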
For the color channel, if the chunk has this many distinct colors, we cheat by using just two of them: take the first color and use it for all four pixels of the top row, and take the second color and use it across the whole bottom row. That's what 4:1:1 does. With 4:2:0, this chunk becomes this: you take this color and apply it here, and this color and apply it here. The two is the number of chroma samples in the top row, and the zero means no new samples in the bottom row; it reuses the top row's samples. The notation is a bit weird, but that's how it is. 4:2:2 keeps two samples in the top row and two new ones in the bottom row. So if your original chunk looks like this, this is what it becomes after chroma subsampling. And the cool thing is that after all these changes, the image doesn't actually look that different to the human eye, which is where the savings come from. The most common mode is 4:2:0. That's what this study is about: this person looked at, I think, the top one million sites, again using the HTTP Archive project I mentioned before, downloaded all the JPEG images, and checked how many of them use chroma subsampling, how many don't, and which modes are used. It's the same thing I've been explaining, just in more formal terms. This bucket is not using any subsampling at all, so people are leaving extra bytes in their images just like that: 60% of JPEGs on the web don't use any chroma subsampling. This one is 4:2:0, which I showed you before; that's the most common one. And there are a few weird ones: he found 4:1:1 in only 163 out of the million images.
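The 4:2:0 scheme described above can be simulated on one chunk of a chroma channel. In this sketch each 2x2 group of pixels is replaced by its top-left sample; real encoders often average the group instead, but picking one sample keeps the illustration simple:

```javascript
// Simulate 4:2:0 chroma subsampling on one 4-wide, 2-tall chunk of a
// chroma channel: every 2x2 group collapses to a single sample.
function subsample420(chunk) {
  const out = chunk.map(row => row.slice());
  for (let r = 0; r < chunk.length; r += 2) {
    for (let c = 0; c < chunk[r].length; c += 2) {
      const sample = chunk[r][c];      // keep the top-left sample of the group
      out[r][c] = out[r][c + 1] = sample;
      out[r + 1][c] = out[r + 1][c + 1] = sample;
    }
  }
  return out;
}

console.log(subsample420([[1, 2, 3, 4],
                          [5, 6, 7, 8]]));
// -> [[1, 1, 3, 3], [1, 1, 3, 3]]  (eight chroma values reduced to two)
```

Eight distinct chroma values come in, two go out: two samples in the top row, zero new ones in the bottom row, which is exactly what the "2" and "0" in 4:2:0 mean.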
He even found modes like 2:3:1, which you can't even represent in the standard A:B:C notation. So the question is: why doesn't everyone use chroma subsampling? One reason is that people may not know about it, but the main one is that chroma subsampling can introduce artifacts in a few cases, dithering effects or the image getting a bit blurry. As I mentioned before, JPEG itself is not great where your images contain text: a photo of a handwritten drawing, medical images, anything with sharp contrast edges. The key thing here is that, okay, both versions look pixelated, but this is supposed to be a straight line. If you save it as a PNG, you'll probably get a straight line. But because chroma subsampling throws out color samples and copies the same color over neighboring pixels, each chunk gets that jagged effect. So chroma subsampling is not good in all cases. For photographic images, say a picture of a mountain or anything you take on your phone, you mostly won't notice the difference, but in cases like this it can be a problem. This is one of the challenges we ran into when building our own optimization pipeline. We try to do all of this automatically, so developers don't need to think about whether to use 4:2:0 or 4:4:4 and so on. What we do now is look at the image and categorize it into buckets: if this image has text, first try not to use JPEG at all, use a PNG. And if that's not possible, we automatically fall back to a safer subsampling mode like 4:4:4.
4:4:4 essentially disables subsampling: it keeps all the chroma samples, unlike 4:1:1, which throws away most of the chroma data. So that's one of the things we do to counteract artifacts like this. Using a lot of statistics, he found that on average your images can become about 17% smaller if you start using chroma subsampling. 4:2:0 should be good enough in most cases, but in some it won't be, so you need to be a bit careful there. If your use case doesn't involve images with text and the like, you can usually get away with it. Okay, so that's one feature of JPEG. I'll talk about another feature, but let's do an activity first. It's called progressive JPEGs. Have you heard of the term before? Let's first see it in action. You don't need anything to get started: just go to the exercise folder. This is an image of a lizard I got from somewhere. It's pretty big on purpose, because I wanted to show exactly what happens. Let me load this image on a throttled fast-3G connection. You can see how the image loads top to bottom, each row of pixels showing up as it's downloaded. This is a non-progressive JPEG: as the browser gets more and more data, it shows the image bit by bit. But there's a different way to encode your JPEGs that's much more user friendly, called progressive JPEGs. Let me show you how it works. When I load this image, notice how it loads compared to before: the image is still loading, but you get a much lower quality version of the entire image first, and it slowly upgrades to a higher quality version.
I'm not using any fancy JavaScript, CSS, placeholders, nothing. It's purely the way the image is encoded. This is a really big image and I'm on a slow throttled connection, so you can actually see that the first layer is essentially just a black-and-white image: the brightness layer comes in first and then the color starts loading. This is of course going to be much better, especially for people with slower internet connections. You just want to let them know what's there, at least. Once they get an idea, they can either wait for the full image or go interact with the rest of your website. So that's the thing with progressive JPEGs. Generating them is very simple. I have the command somewhere; I think it's in the README. If you have ImageMagick installed, you can try it on some images you have locally: you just need to add the option -interlace Plane. What this does is change the way the JPEG is encoded: instead of top to bottom, it's encoded as a series of passes, roughly the first scan line, then the fourth, then the eighth, and so on. It skips the data in between so that users get a holistic view of the image, and as more and more data loads, the rest of the image gets progressively refined. The other cool thing is that in most cases a progressive JPEG actually turns out smaller than the normal one, except for very small JPEGs, where you're better off encoding non-progressively.
So this is why image optimization is hard, right? There are a lot of these trade-offs that only apply in certain scenarios. I'm telling you all of this so you can go back and see which one applies to your specific use case and work with that. You might figure out that your website doesn't have any images smaller than, say, 30 pixels wide, so you can just encode everything progressively. That's something you can play around with and check. Mostly everyone agrees this is a better user experience than not encoding progressively. How many of you think the older version was better, where it just loads line by line? How many think the second one was better? Okay. There is one fairly controversial piece of research published by a company called Radware that reached the reverse conclusion, which I find very hard to believe. I don't know why anyone would say the first one is better, and at this point pretty much everyone discounts that particular research. The only serious consideration is decode time: progressive JPEGs take longer to decode on the client side, because every time a new scan arrives, the browser decodes the image again. So you'll see multiple decode operations as the image loads onto the page. Again, this is something to weigh: what percentage of my users are on really slow hardware? Maybe I won't send them progressive JPEGs. But those may also be the people on slower internet connections, for whom the smaller file matters. These are the trade-offs you can start thinking about: should I encode my images progressively or not?
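That decision can be encoded as a rule in your build pipeline. The cutoff below is purely an assumption for illustration; the point from the talk is only that very small JPEGs are often better off baseline, so measure your own images before picking a number:

```javascript
// Heuristic: encode progressively unless the file is tiny.
// TINY_JPEG_BYTES is a hypothetical threshold, not a standard value;
// benchmark your own image set to find where progressive stops paying off.
const TINY_JPEG_BYTES = 10 * 1024;

function shouldEncodeProgressive(fileSizeBytes) {
  return fileSizeBytes >= TINY_JPEG_BYTES;
}

console.log(shouldEncodeProgressive(500 * 1024)); // true: large photo
console.log(shouldEncodeProgressive(2 * 1024));   // false: tiny icon or sprite
```

A rule like this is easy to drop into whatever step of your pipeline invokes the encoder, switching the progressive flag on or off per file.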
In most cases, yes, you would, because the trade-off between the slightly higher decode time and the smaller images plus the better user experience makes it worth encoding progressively. Do you guys want a break, to go to the restroom or something? Let's take five minutes. One more thing first: to convert your images to progressive, don't use the ImageMagick command I just gave. It uses a library called libjpeg, which is probably what's installed on your system by default, and it's a pretty old library that's not that well optimized. What Mozilla has done is build a new encoder, compatible with the old one, called mozjpeg, and it's a much better encoder for your images. It's still a JPEG at the end of the day, so it's not a new format. All these browsers have been coming up with new formats: Chrome has WebP, Safari has JPEG 2000, Edge has JPEG XR. Everyone has their own format, and the browsers are fighting with each other in some sense. What Mozilla did instead is say: rather than inventing our own format or picking one of these (it's somewhat political as well, but that's how it is), let's invest the resources into making JPEG itself better. That's where this encoder came from. It ships a command called cjpeg, which you can use to encode your images progressively, and it does a whole lot of useful things behind the scenes. So use this for creating your progressive images instead. Next I'll talk about EXIF data, before we move on to something a bit more JavaScript-y. Most image formats let you store some sort of comments or metadata inside the image itself. A JPEG has two parts to it.
One is the container, and one is the actual image byte stream. The metadata container mostly used is called EXIF. EXIF provides a way to store information about the image inside the image itself: where the photo was taken, what date, what camera settings were used. Depending on your phone or device, whoever encodes the image can use it to store information. But when it comes to serving images on the web, none of this matters. It doesn't matter where you took your photo; I just want the image to be small, right? So most of the EXIF data can be removed safely. I actually ran an experiment where I looked at about 1.6 million images and found that around 16% of the bytes in an average image are effectively useless: EXIF-style metadata you can mostly just strip. Only 84% of the file is actually required to show the image on screen. As I said, EXIF is one format for storing metadata in your picture; there are others like XMP, ICC and so on. I also plotted what exactly people are storing in this metadata. It's mostly things like the software used; if you export from Photoshop, it adds this by default unless you use the option called Save for Web, which strips all of it out. There's date, resolution, image height, image width, color space, camera model, flash (literally whether you used the flash to take the picture), all of it totally useless when you're serving images on the web. When you're storing photos locally, yes, you might want information like this, but for serving on the web it's dead weight. There are also a few interesting cases where the metadata stores an entire thumbnail as a comment.
So you can store a smaller image within the bigger image. Let me show you how that works. If you go into the metadata folder, this is an actual image I found while doing this analysis. I don't know which website it's from; maybe some of you recognize it. The actual file is 18 MB, and one of the reasons it's so bad is that it stores a thumbnail of the image within the image itself. This is the thumbnail I extracted: a smaller version of the same picture, of course. There are better ways to show a thumbnail, like the progressive JPEGs I mentioned, and later I'll talk about another mechanism called LQIP, low quality image placeholders. This just seems like a waste of space. If you want to check whether your image has metadata, this is how you do it. ImageMagick comes with a command called identify, and you run it with the -verbose flag. It's taking a while to run on this image... You'll see a lot of information: the different RGB channels, the quality, and then everything that's stored in the EXIF data. It shows the date, the resolution, that it was created by Adobe Photoshop 2015. You can also see the subsampling factor being used; that one doesn't actually need to be stored as a comment, since the JPEG itself records which subsampling factor it uses. And you can see there's an APP1 marker, which is probably the biggest one I've seen. So you can either look at this, or there's a website where you can just upload an image and it will show you all the EXIF data it has.
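If you'd rather inspect this programmatically, here is a minimal sketch that walks a JPEG's marker segments and totals the bytes spent on APPn metadata (EXIF lives in the APP1 segment). It assumes a well-formed file and stops at the start-of-scan marker, where the entropy-coded image data begins, so it's an illustration of the structure rather than a robust parser:

```javascript
// Total the bytes a JPEG spends on APPn metadata segments (APP0..APP15).
// Marker layout: FF <marker>, then a big-endian 2-byte length that counts
// itself but not the marker bytes.
function metadataBytes(buf) {
  let total = 0;
  let i = 2; // skip the SOI marker (FF D8)
  while (i + 1 < buf.length && buf[i] === 0xff) {
    const marker = buf[i + 1];
    if (marker === 0xda) break;     // SOS: entropy-coded image data follows
    if (i + 3 >= buf.length) break; // truncated file, stop
    const segLen = (buf[i + 2] << 8) | buf[i + 3];
    if (marker >= 0xe0 && marker <= 0xef) total += segLen + 2; // APPn + marker bytes
    i += 2 + segLen;
  }
  return total;
}

// SOI, a 4-byte APP1 segment (2 length bytes + 2 payload bytes), then SOS.
const fakeJpeg = Uint8Array.from([
  0xff, 0xd8,             // SOI
  0xff, 0xe1, 0x00, 0x04, // APP1, length 4
  0xaa, 0xbb,             // payload
  0xff, 0xda,             // SOS
]);
console.log(metadataBytes(fakeJpeg)); // -> 6 (marker + length + payload)
```

Running something like this over your own images gives you the same kind of "how much of this file is metadata" number as the 16% figure above, without shelling out to identify.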
The link is in my GitHub repo; let me add it. If you want to look at it in the terminal, you can. This is one way to check whether, say, an image you download from Facebook has had all this metadata stripped, or whether the information is still stored in it. It's the same data we saw with ImageMagick, just with a UI in front of it; you can see everything that's being stored. Yeah, please. Right, so beyond performance, this can actually be damaging for privacy reasons as well. Before we get into that: some people store their copyright information here, the copyright, the photographer who took the photo, and so on. That might be a valid use case, because if you don't want to burn that information into the image itself, you can put it in an EXIF comment. That's legitimate. Then there's orientation: some encoders record whether the photo was taken in landscape or portrait mode as an EXIF parameter. It sounds like a good thing to store, because if it was shot in landscape and you're viewing it in a different orientation, some browsers can automatically rotate it based on that. The problem is that it's very inconsistent. Some browsers honor it, some don't; some do it on desktop, some on mobile. It's messy everywhere. The best thing is to bake the correct rotation into the image once, and not rely on this parameter to help you out. So what we do is: before stripping the EXIF information, we re-rotate the image to the right orientation and then throw the parameter away completely. The other important thing in this metadata is called color profiles.
Color profiles you should not throw out indiscriminately. A lot of optimization software does, but you shouldn't. A color profile essentially tells the monitor how to display the image. Each monitor has different settings, different color spaces and so on, so the profile says: this is how this color should look. If you strip it, many monitors will assume the original image is an sRGB image and interpret the channels accordingly, which may or may not be the case. Most images are sRGB, but then there are wider-gamut monitors. You might have seen Apple keynotes mention wide gamut displays and all of that; what it means is that the screen is not sRGB and can show a much wider range of colors. If your image was shot on a device that uses such a profile and you display it on a normal screen without the profile, the colors will look different. So: if the profile is sRGB you can throw it away, since most devices assume sRGB anyway, but for other profiles you're not supposed to take it off. Again, this is one of the things we handle: if there's a color profile that can't be safely dropped, we keep it. All the other information, orientation, whether you used flash, what software encoded it, we just throw away, because it doesn't matter. And there's another disadvantage beyond file size: the way JPEGs are encoded, the height and width of the image are stored within the JPEG itself, but they come after all this EXIF data.
As the browser downloads the image, it needs to lay out the page: it sees, okay, the width of the image is 300 pixels, the height is 300 pixels, so let me arrange the page so the image fits. The earlier the browser sees this width and height information, the better, so that layout can happen sooner. But if your image has a lot of EXIF data up front, then when the first bytes arrive, the browser can't do the layout because it doesn't yet know how big or how wide your image actually is. So that's another reason to remove EXIF data: not just the size, but it also delays layout and rendering. Security is a problem too if you don't handle this properly. There was one hilarious incident with John McAfee; I can see some people laughing. He's notorious for a lot of reasons, the founder of the McAfee software. He posted a picture of himself online claiming to be in some particular place, but the photo had the GPS location where it was taken stored in its EXIF data. So even though he was lying about where he was, the photo gave away his actual location. People downloaded the photo, looked at the metadata with ImageMagick or whatever tool they liked, and narrowed it down to the exact GPS location where the picture was taken, and the authorities were able to follow up on that. And until around four or five years ago, most social networks didn't handle this: when you uploaded a photo, all this information your photo carried just got uploaded and stored as-is.
These days I've checked most of the major social networks, and they do remove this information before storing the image on their servers, which is good. But if you're uploading photos to some lesser-known site, you might want to download them back and check whether all of this is actually being stripped, or whether it's still there. Oh, this was another hilarious one. Apple: you've probably seen those "Shot on iPhone" campaigns. People found out that one of the photos they published had actually been edited in Photoshop, because that information was sitting right there in the software attribute of the EXIF data. So even though it was shot on iPhone, it had been edited before going out, and of course people on the internet were able to find out. There were even entire editing comments embedded in there, like "darken this" and "stars can be sharper": someone giving instructions to an editor about how the perfect photo should look. This was one of the default wallpapers on the Mac; they've probably fixed it by now. So that's EXIF. Now, how many of you have used service workers before? You know what service workers are? We're moving from images to JavaScript now. Okay. One of the things we also do is try to figure out how to optimize images depending on the speed of the connection the user is on. If someone is on a 2G connection, they probably don't need the pixel-perfect 1 MB image, right? And if they're on a fast connection, on good Wi-Fi with a nice monitor, you probably want them to get the best possible image. There's a way to do this, and it's one of the exercises as well, so you can try it out.
I'll give you an overview and then you can see how it works. There's this thing called the Network Information API. Unfortunately, a lot of what I'm about to talk about is Chrome-only, but that's how it is. The Network Information API exposes to JavaScript what kind of connection the user is on: whether it's 2G or 3G, the estimated bandwidth, and so on. Using this, we try to serve different resources to people on different connections. We also have this concept of optimization modes. This is something we've implemented, but you can build something like it on your side as well. You append a parameter like opt=mild to the URL, and then we only do lossless operations. Then there's a default mode, and there's aggressive, where we do a lot more to get you a smaller file, on the understanding that you're saying "I know what I'm doing." Something like that. As for service workers, I'll just give you a brief intro; it's not too complicated. A service worker is basically a piece of JavaScript that acts like a proxy on the client: it can intercept any outgoing request from your browser and do something with it. The primary use case you might have heard of is making your website work offline. If the browser tries to make a request and there's no internet connection, you can intercept that request in the browser itself and do something with it: serve a cached page, for example. I think some newspaper, I forget which one, actually served a crossword puzzle for people to play when they detected there was no internet and a clicked link didn't work.
That way people stay on the site longer, and once the internet comes back, the page refreshes and shows the actual article. You can do things like that, but what we're using service workers for is reading the connection information through the Network Information API and calling our image server with a different optimization mode. On a slower connection we call it with aggressive; on a fast connection we probably just want lossless optimizations and call it with mild. Here's the script; it's pretty simple. The service worker has this thing called the fetch event, which gets triggered on every network request, and that's where we append the opt query parameter. This is not production-ready (a real service worker needs to handle a lot more cases), but I just wanted to show you what's possible so we can try it out. If you go to the workshop page, you can code this yourself or just run it on a local server. The HTML itself has almost nothing in it: it registers the service worker and has an image tag. The interesting part is the service worker, which looks at what connection type the user is on. Let's look at that: navigator.connection.effectiveType. Apparently I'm on a "3g" connection. These are not very precise numbers; for privacy reasons, Chrome doesn't tell you exactly what bucket you're in, just a rough estimate of your connection. You can also see the round-trip time, say 600 milliseconds. And there's Save-Data mode, which users can enable (I think mainly on Android phones), so you can tell whether the user is probably on a metered connection.
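The decision part of that script can be sketched as a pure function. The mode names match the opt parameter described above; the exact mapping from connection type to mode is a judgment call, not a spec, so treat this as a sketch:

```javascript
// Map the Network Information API's effectiveType (plus the Save-Data hint)
// to one of the optimization modes described above.
function chooseOptMode(effectiveType, saveData) {
  if (saveData) return 'aggressive'; // the user explicitly asked to save data
  switch (effectiveType) {
    case 'slow-2g':
    case '2g': return 'aggressive';
    case '3g': return 'default';
    default:   return 'mild';        // '4g' or unknown: lossless only
  }
}

// Inside a service worker this would be wired up roughly like:
//   self.addEventListener('fetch', e => {
//     const { effectiveType, saveData } = navigator.connection || {};
//     const url = e.request.url + '?opt=' + chooseOptMode(effectiveType, saveData);
//     e.respondWith(fetch(url));
//   });
console.log(chooseOptMode('slow-2g', false)); // 'aggressive'
console.log(chooseOptMode('4g', true));       // 'aggressive'
console.log(chooseOptMode('4g', false));      // 'mild'
```

Keeping the mapping in a small pure function like this also makes it trivial to unit-test outside the browser, which service worker code otherwise isn't.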
A lot of people in developing countries have this enabled to save data, so you can make smarter decisions about which image to serve based on information like this. I'm only looking at effectiveType in my service worker, but you could use a combination: if he's on a 3G connection, and his downlink is more than 1.5 Mbps, and he doesn't have Save-Data on, then send him the high-quality image. You can build whatever heuristics work for your own use case. And of course, you need to implement support for this on your own image servers or image pipelines, so they can respond with a more aggressively optimized image, or a less aggressive one, when asked. Let me show you how it works. With no throttling, the request goes out with opt=default. Now if I switch to slow 3G... there, it switches to opt=aggressive, because it knows I'm on a really slow connection, so the image coming back from the server will be more aggressively optimized. You can play around with this; there's a lot more you can do. For example, a person being on 3G now doesn't mean he'll always be on 3G. Say I'm here, I walk outside, I lose the Wi-Fi and switch to a different network, which happens all the time. So you can listen for network change events and react accordingly. This is also where the service worker cache comes into play. I'm not storing anything in the service worker cache here, but it's a separate cache layer that sits on top of your browser cache, and you get programmatic access to it. With the browser cache, you have no control over how it works.
It may be there, it may not, but the service worker cache gives you guarantees: you can say, put this image in this particular cache, and pull the image (or any asset, really) out of the cache whenever you want. So you can do things like keep separate caches: one for 3G and one for 2G, or simply "highly optimized" and "not highly optimized". Then, if a user is on a low-bandwidth connection, check the caches: if the high-quality version of the image is already there, you may as well use it. If it's not, fall back to the cached low-quality version, and if that's not there either, fall back to the network. These are the kinds of rules you can write in service workers, because they give you full control over how assets are fetched and served. You can try that out; I didn't have time to fully finish the version of the service worker I just described, but it shouldn't be too hard to implement. There's also this other idea called connection-aware components. I showed you how to do this at the asset level, but what if you built it into your React components themselves? Each component becomes responsible for how it behaves depending on which connection the user is on. This is the function that gets called; you can register a callback for whenever the connection changes. The browser support is pretty bad, basically Chrome only. But again, all of this can be progressively enhanced: if the feature is there, you may as well use it; if not, just fall back to your default experience.
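The cache fallback rule just described (prefer a cached high-quality copy, then a cached low-quality one, then the network) can be sketched as pure decision logic. In a real service worker the two booleans would come from caches.match() lookups; here they're plain parameters so the logic is easy to follow and test:

```javascript
// Decide where to serve an image from, given the connection and what the
// two hypothetical caches ('high' and 'low' quality) already contain.
function pickImageSource(isSlowConnection, hasHighQualityCached, hasLowQualityCached) {
  if (hasHighQualityCached) return 'cache:high';               // best copy already local
  if (isSlowConnection && hasLowQualityCached) return 'cache:low';
  return isSlowConnection ? 'network:low' : 'network:high';    // fetch the right quality
}

console.log(pickImageSource(true, true, false));  // 'cache:high'
console.log(pickImageSource(true, false, true));  // 'cache:low'
console.log(pickImageSource(false, false, true)); // 'network:high'
```

The 'cache:high' / 'network:low' strings stand in for the actual Cache API and fetch calls; the point is that the branching itself stays small and auditable.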
So this is a case where you build a React image component: earlier I was handling this at the service worker level, but here the component itself handles it. The component looks at what connection it's on and automatically makes the appropriate request. Handling it at the component level might be fun to play around with as well. Let me show you the next thing: low quality image placeholders. This is about showing something to the user before the full image loads. You can show a very generic placeholder, but there's this technique that generates a similar-looking stand-in instead. Actually, let me show you. I think Google does this too: it's not very clear here, but what happens is that they load a low-quality version of the image first, request the higher-quality version in the background, and once that's fetched, they swap it in. Oh, actually I have a demo for this as well. If you go into the LQIP folder, you can run it. We've implemented it on our servers too: take any URL coming from our servers and append ?format=lqip, and behind the scenes we run the algorithm and give you a placeholder for that particular image. So this is the image, and the placeholder is this. You can see it looks roughly the same, right? But the thing is, the entire placeholder is just one kilobyte. (Oh, and my service worker is registered as well, so.) It's basically a base64-encoded JPEG that we generate on our side and send to the user. You can also install the module and try it yourself; the repo is there, LQIP. You just give it a path to an image and it generates the placeholder for it.
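The swap pattern itself is simple to sketch: inline the tiny base64 placeholder in src, keep the real URL in data-src, and let a small script swap them once the full image has loaded. The data-src attribute name is a common convention, not a standard, so treat this as an assumption:

```javascript
// Build the markup for the LQIP pattern: tiny placeholder inline,
// full-size URL stashed in a data attribute for later.
function lqipMarkup(placeholderDataUri, fullUrl, alt) {
  return `<img src="${placeholderDataUri}" data-src="${fullUrl}" alt="${alt}">`;
}

// The matching client-side swap would look roughly like:
//   document.querySelectorAll('img[data-src]').forEach(img => {
//     const full = new Image();
//     full.onload = () => { img.src = img.dataset.src; }; // swap when ready
//     full.src = img.dataset.src;
//   });
console.log(lqipMarkup('data:image/jpeg;base64,/9j...', '/photos/lizard.jpg', 'a lizard'));
```

Because the placeholder is a data URI, the user sees something on the very first paint, with no extra request, and the real image replaces it whenever it arrives.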
So what you can do is use this as the placeholder if the internet speed isn't fast enough; show it first, since the generated file is pretty small, and replace it with the real image later on. This uses the technique I was talking about; I should probably add the links, so I'll add the link later, but basically go to playground.dexsecure.com/image-decode-sync-example. So this is how we've implemented it: you take the URL of the image and add `format=lqip`. The original image doesn't have that parameter; you add it to get the corresponding LQIP version. If you want to do this for a different image, I'm just dynamically proxying it through our servers. Hopefully this works... okay, it's generated, but it's too small. Basically you'd scale this up in your HTML or CSS; I'm not sure why the resize isn't working here. So you can dynamically generate low-quality versions of your images this way, and you can also implement it as part of your build pipeline. Do you want to play around with the service worker and try serving different images to different connection types? There are also a lot of other tools to generate these low-quality versions of your images. This one is really cool, but it's really compute-intensive, so if you're pre-generating your placeholders it's probably fine, but not on demand. It's a tool called primitive, and the placeholders it produces are completely SVG-based. This is how it generates the low-quality version of the Mona Lisa here. It's pretty cool, but super compute-expensive, so you can't generate it on demand like what I showed you just now. And the output is just an SVG.
So it's going to be super small compared to the actual image. There's a lot of fancy stuff you can do here; you can see how, using just some primitive shapes, it's able to approximate the image. Again, it's just a node module; you can pass it images you like and see how it works out. I'm not going to run it right now, but you can play around with it; it takes like 15 to 30 seconds per image. So the point is, you definitely can't do this inline, as in on demand: you can't make a user wait 30 seconds when the whole point of generating LQIPs is to show something fast. So let me talk about another thing you can use; we use this internally as well to help with the optimization of your images. Does this make sense to any of you? Do you know what exactly is happening here, or does someone want to try explaining what this does? Sorry, yeah, breakpoints, yeah, okay. So this is basically implementing responsive images, and it's probably artificially over-engineered, but I want to show you how complicated it can get. I'll roughly go through what it means. You can use this approach as well, but I'll show you a better way further down the line; that one is Chrome-only, though, so you'd probably need to fall back to this for other browsers. So, trade-offs. Okay, this is a picture tag. The picture tag is used for a few different things. If this line wasn't there, if you don't want to dynamically switch formats, you don't need a picture tag at all; you can do this with a normal image tag and srcset. You need the picture tag if, say, you want a completely different image shown on mobile devices versus desktop. That's what these two do: this one shows a different image. Forget all this other stuff for now.
So just look at the source and the media query, or the sizes query. This says: if the media query matches, min-width 50em, and the browser supports WebP, look at this set of images. And if only the media query matches, look at this other set. So basically what you're telling the browser is: these are all the images that I have; you decide which one to fetch. You're saying: I have a 200-wide image, an 800-wide image, a 1000-wide, a 1600-wide WebP image. This is all I have; you decide what to call. Now, sizes. Sizes basically tells the browser what percentage of the viewport the image is going to occupy. So 50vw means that for bigger displays it's only going to occupy 50% of the viewport. The browser knows your viewport width and your screen resolution, so it can work out: okay, the viewport is 1000 pixels wide, the DPR, the device pixel ratio, is two, so the effective viewport is 2000 pixels wide; the image occupies 50% of that, so 1000 pixels. Then it picks the version of the image closest to 1000 pixels wide. And they do the same thing for the other image formats. This set was just for WebP, which is supported in fewer browsers, though Firefox and Edge also recently started supporting it, so this article is a bit outdated. JPEG XR, again, they do the same thing; JPEG for everyone else; and for browsers that don't support the picture tag itself, they have a fallback, right?
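The selection arithmetic just walked through fits in a few lines. Real browsers apply extra heuristics on top (cache contents, network conditions), so treat this as the core math only:

```javascript
// target width = viewport CSS px * device pixel ratio * slot fraction (from sizes),
// then pick the srcset candidate closest to that target.
function pickCandidate(viewportCssPx, dpr, slotFraction, candidateWidths) {
  const target = viewportCssPx * dpr * slotFraction;
  return candidateWidths.reduce((best, w) =>
    Math.abs(w - target) < Math.abs(best - target) ? w : best);
}

// The talk's example: 1000px viewport, DPR 2, sizes 50vw -> target 1000,
// so the 1000w candidate wins out of [200, 800, 1000, 1600].
```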
So unless you want to do this for every single image on your website, which is messy, there's a new spec that Chrome is pioneering called Client Hints. What this does is push all this complexity to the server side. To enable it, you add this meta tag to your page, and it tells the browser to send the DPR, viewport width, and width as headers to the server. The server can then use that information to generate the right asset and send it to the client. Again, at Dexsecure what we do is take these images, ping the origin server for the actual image, and based on this information generate the right size, optimize it, and put it on a CDN. You can implement something similar on your site as well. The nice thing is that once it's implemented, your front-end developers have an easier time, because they just need to do this; it can potentially do the same thing as the code over here, if your server is intelligent enough. You're just pushing the complexity back to the server, and there are tools, like ours, which can help you with stuff like this. So this is basically what it sends: your DPR; the viewport width, which is how big your device viewport is; and the width, so if you're only going to render the image at 50% of the viewport width, it sends that width to the server. The server goes: okay, I have all this information, let me send back an image that's sized precisely for it. I implemented a basic version of this with Client Hints as well; I think it works, let's check. You can just run the server, I'll show you. I'll walk through the code; you can try it at home if you want. Okay, so here I'm just creating and starting the server, but the main logic is here: I serve the file directly if it's an HTML file, but if it's a JPEG file, I want to do something else.
I want to serve an image that's tailored to that particular device. Sharp is a module that helps with image resizing; you could also use ImageMagick, but Sharp was just easier in this case, so I ended up using it. What I do is: if there's a width header sent by the browser because Client Hints are enabled, I resize the image to that width. This code is not production-ready; I can talk about some of the challenges you might face deploying this to production, but the general concept is: you take an image, and based on whatever hint you got, the width or the viewport width, you resize it and send the ideally optimized image to the user. You can also check if the client supports WebP and convert to that, or if the request is coming from a high-density screen, handle that; all of these things you can implement on your server. There are a few gotchas. First, this is Chrome-only, or Chrome-like browsers, so Opera, Brave, and other Chromium-based browsers should work as well. Second, this works only on sites served over HTTPS, because these attributes are potentially fingerprintable, so they're only sent over HTTPS; localhost is an exception, so if you're running it on localhost, it's fine. It's also not sent for cross-origin requests on desktop. Chrome implemented it on all their platforms, and there was a huge backlash from other companies like Apple, who said this was a huge privacy violation, you're not supposed to be doing this. If you want to watch some of the drama that's been going on, follow this GitHub issue.
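The header-to-width logic of that demo server can be isolated from the resizing itself. The header names (`Width`, `Viewport-Width`, `DPR`) are the Client Hints ones, lower-cased the way Node presents them; the fallback value is an arbitrary assumption of mine:

```javascript
// Work out the resize target from Client Hints headers. The Width hint is
// already in physical pixels (the browser applies DPR before sending it);
// Viewport-Width is in CSS pixels, so we multiply by DPR ourselves.
function targetWidth(headers, fallback = 1024) {
  if (headers['width']) return parseInt(headers['width'], 10);
  const dpr = parseFloat(headers['dpr'] || '1');
  if (headers['viewport-width']) {
    return Math.round(parseInt(headers['viewport-width'], 10) * dpr);
  }
  return fallback; // no hints sent: serve a default size
}

// In the demo this would feed the resizer, roughly:
//   sharp(imagePath).resize(targetWidth(req.headers)).pipe(res);
```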
People are basically arguing about what constitutes a third party, so you'll see a lot of philosophical questions in that GitHub thread. Anyway, if you just want to play around with it locally, you can try it out and follow along; I think you just need to run node server.js, that's it. So here's the markup I'm using for the Client Hints demo: I'm saying this is the image, it's going to be rendered at half the screen width. The actual image, if you look at it, is about 4,000 pixels wide, obviously exaggerated just to show what can be done with this. If you look at the request, the headers are supposed to show up here, let me check. Yeah, so with every image request, these are the new headers being sent to the server: you can see the DPR is one, my viewport width is apparently 1920, and since I set the image to half the viewport width, it automatically calculates a width of 960, which is half of 1920. If my DPR were two, for example, it would multiply 960 by two and send that, which is why the server resizes to that particular size. Let's see what happens if I emulate a mobile device. So mobile is getting a different pixel ratio, and it's actually requesting a wider image: if you look at this, the DPR is 2.625. It's just mobile emulation, so it's not accurate, but Chrome is sending a DPR value of 2.625 to the server, and because of that the width is 127, so my server resizes to the closest value to that and sends it back to the browser.
So you get all of this without worrying about breakpoints; I mean, as a front-end developer you don't need to worry about it. It still needs to be handled, but probably by a back-end person on your team. Or, if you're using Dexsecure, you just add the meta tag and change the URLs to point to us, and we do this processing for you; we have breakpoints and heuristics on our side, and we do this using Client Hints. You can also try it out on your phone: connect it and inspect, though again only for Android devices; Chrome on iOS doesn't support this either. So if you have an Android device, connect it to your computer and go to chrome://inspect; I think you need to enable a few options on your mobile device to allow remote debugging, but once you do, you'll see your phone over here. Then go to localhost:8080 on the phone, and it should show you this particular page, and you can see what DPR is being sent to your server. I print out what I'm getting from the client: the width and the viewport width. I've implemented this as a Node.js server, but you can use your framework or language of choice; the code is pretty simple. One thing to look out for: you wouldn't want to generate a separate image for every single device. There are going to be thousands of viewport widths and thousands of widths, so if you resize for every particular size, it's not going to be efficient, because you're probably going to cache these images on a CDN, and every request will be a cache miss, which defeats the purpose.
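A common fix for the cache-miss problem just described is to snap the hinted width up to a small set of breakpoints on the server, so the CDN only ever stores a handful of variants per image. The breakpoint values here are arbitrary examples:

```javascript
// Round a hinted width up to the next breakpoint; anything wider than the
// largest breakpoint just gets the largest variant.
function snapToBreakpoint(width, breakpoints = [200, 400, 800, 1200, 1600]) {
  for (const bp of breakpoints) {
    if (width <= bp) return bp;
  }
  return breakpoints[breakpoints.length - 1];
}
```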
So on your server side you'll probably implement some breakpoint logic: if the hinted width is up to 200 pixels, resize to 200; up to 400, resize to 400; and so on. Stuff like that you can play around with and see how it goes. Also check out this article. This entire thing is a React component someone has built where you can change the numbers over here and see how it actually affects your image, so it's just cool to play around with. It also explains, I didn't have time to go through exactly how a JPEG works, but if you're interested, just let me know and I can go through it. It goes through the different stages: we talked about chroma subsampling; we didn't talk about these two, but it covers DCT, the discrete cosine transform, and finally Huffman encoding. I just thought this part was especially cool: you change the data and see how it ends up rendering the image differently. What else did I want to show? I also wrote an article about image decoding; again, I can go into this if you're interested. The main thing is that image decoding blocks the main thread in most browsers. Besides the main thread, there's also a separate thread called the rasterization thread; some browsers do rasterization on the main thread, some on a separate thread. What rasterization means is generating the final raster image for the GPU or the browser to render: after laying everything out, after generating the different layers, after compositing, the browser produces one image, converts it into pixels, and sends that to be rendered.
So if your image decode takes too long, and here, for example, it's happening on the rasterization thread, your frame rate can drop, which leads to jankiness. This is one thing you need to be aware of. The article goes a bit more in depth into how to measure it: you can measure it on WebPageTest, locally in Chrome DevTools, and in Chrome tracing. This is what I meant: you can see the frame rate drops completely here because the thread is stuck decoding, and until that decode is done, nothing can happen. To solve this, there's a new attribute called decoding, which gives hints to the browser. You can say decoding="async", as in: hey, this decode can happen entirely on a separate thread, which means it doesn't block your main thread or your raster thread. Again, it's just a hint; the browser might still do something else if it knows better. You're telling the browser: I don't mind if this image decodes asynchronously. The default is auto, and sync tells the browser: if possible, decode this synchronously. Again, this is about decoding, not downloading. I've set up different demo pages you can try; just go to dexsecure.com/blog for this one. So this is what I was explaining before: when you set the decoding attribute, there's a separate thread, at least in Chrome. Chrome exposes all this information; this also happens in Safari, you just can't see it in the DevTools, but it happens behind the scenes. I'm not sure about Firefox yet. So there's a separate thread they call the thread pool foreground worker, where you can see that every 16 milliseconds a frame is painted without stopping for the image decode. That's the advantage of using async here.
So you might be thinking: why don't browsers just do this async all the time? Why do developers need to say, hey, do this asynchronously? Well, there are cases where you might want a synchronous decode. In this case, if you actually throttle things properly, you'll see the image flicker before the high-quality version loads. Let me see if I can replicate that... once the higher-quality image is loaded... it's still too fast, I guess. If you try this on a mobile phone, you will see a flicker. I'll show you the code of what's going on here; it's pretty simple. In this one, I'm just incrementing random numbers to show you the frame rate. And this is how you record a performance timeline in Chrome: you click record, do a reload, wait for things to settle, and then stop. It gives you quite a lot of information, and you can see what's happening on different threads. This is the raster thread I was talking about, and this shows the frame rate. Each frame has at most 16 milliseconds to render on screen if you want your site to run at 60 frames per second, and 16 milliseconds is really the upper bound, because the browser also has housekeeping to do in each frame, things like garbage collection and actually painting to the screen. So your JavaScript probably has only around 10 milliseconds per frame. And if you do something like image decoding, which can take more than 10 milliseconds in a frame, you'll see the jankiness. Here I'm using async, I think, yeah, which is why, even though this decode takes 23 milliseconds, nothing is lost; if it had happened on the main or raster thread, this one frame would have been dropped.
Again, this is just one image, but imagine your website has a shopping cart or something like that with 30 or 40 images, each taking 20 milliseconds to decode; a lot of frames will be missed, and that's what causes the jankiness on your website. If you run it on a slower device or on your mobile phone, you'll actually see things stutter whenever it's painting or whenever work lands on the main thread. So there's another API that helps with this. To see why, here's what you'd usually do with image onload today; I'll show you the code again. You set the src to the low-quality image, I'm using the LQIP here, and on window load or something like that, you set it to the high-quality version; after the big image has loaded, you swap the low-quality version for the high-quality one. The problem is that when onload fires, it means the image is downloaded, but not decoded. So when the bigger image has downloaded, you immediately swap it in, but the browser still has to decode it before it can paint, and that's the flicker, that white flash you see in between if you run it on a slower device: the browser is blocked on decode. Once the decode is done, it can paint the image to the screen. To solve this, there's a new API called decode. Instead of attaching to onload, you call .decode(), which returns a promise, and once the promise resolves, then you go ahead and do the replacement.
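The onload-versus-decode() swap just described might look like this. `decode()` is the standard `HTMLImageElement` promise API; the element names and URL are placeholders of mine:

```javascript
// Swap a low-quality placeholder for the full image only after the full
// image is downloaded AND decoded, so the swap can't flash a blank frame.
function swapWhenDecoded(placeholderImg, fullImg, highQualityUrl) {
  fullImg.src = highQualityUrl;          // kicks off the download
  return fullImg.decode().then(() => {
    placeholderImg.src = highQualityUrl; // safe: pixels are already decoded
  });
}

// In a page (sketch):
//   swapWhenDecoded(document.querySelector('img.lqip'), new Image(), 'hero-full.jpg');
```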
With this, hopefully you shouldn't see the flash; I mean, you couldn't see it that clearly before either, but in this example the swap happens only after the decode is completely done, so you don't see that flash anymore. That's another thing you can use. Other stuff: lazy loading images. If possible, try to lazy load. There's now an actual loading attribute, behind a flag in Chrome; try it out. You don't need any JavaScript to lazy load your images. I have an article on that as well: the first one goes through the different techniques you can use to lazy load images, nothing too fancy, so I won't go through it. This one is new, just released in April, where you just need to say loading="lazy", and it's coming natively to browsers; hopefully other browsers support it too. Otherwise, to support other browsers, you can use something called Intersection Observer. Have you heard of that before? Intersection Observer lets you know when a particular element enters a region you define. For lazy loading images, you'd define that region as the entire window and say: if any of these elements comes inside this window, notify me. So when an image gets scrolled into the viewport, you get a callback, and inside that callback you load the image, because you know it's about to be shown to the user. That's a much more performant way of lazy loading than what libraries usually do, which, to see if an element is on screen, is to check its offset from the top of the page against the top of the browser screen and do some math to see if it's actually on screen.
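The Intersection Observer pattern just described, with the callback pulled out as a plain function so the logic is visible. The `data-src` convention and the 20px buffer are common practice rather than anything the API requires:

```javascript
// Load each image as it approaches the viewport, then stop watching it.
function onIntersect(entries, observer) {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target;
    img.src = img.dataset.src; // start the real download
    observer.unobserve(img);   // one load per image is enough
  }
}

// Browser wiring (sketch); rootMargin is the "notify me a bit early" buffer:
//   const io = new IntersectionObserver(onIntersect, { rootMargin: '20px' });
//   document.querySelectorAll('img[data-src]').forEach(img => io.observe(img));
```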
The thing is, every time you read an offset in JavaScript, you force the browser to do a synchronous layout, which is expensive. You're basically saying: lay everything out immediately and tell me whether this element is within the screen. And this needs to run constantly, because the user keeps scrolling, so you're forcing layout over and over, which is bad for the user experience. That's why Intersection Observer exists, and it's cross-browser; I think pretty much everyone supports it now. This is about as good as support gets: IE is not going to get it, Opera Mini is not going to get it, but you can use it to check whether your images have come into the viewport, and you can give it a buffer: you can say, notify me when the image is 20 pixels away from the viewport, and lazy load then. So that's a more performant way of lazy loading images. And hopefully other browsers implement loading="lazy", because then you don't need to download a separate library for this; a lot of the big companies use lazy loading, it's necessary for the web, I feel, and if it's native to the platform, it's guaranteed by the browser, so it just works. The other thing, and you can see a demo here, is what I mentioned when I was talking about how important it is to optimize your EXIF data. The start of a JPEG file usually contains the dimensions, the height and width, of the image. So what Chrome does, as a heuristic, is request just the first two KB, if your server supports range requests: it says, give me the first two KB of this image file, and hopefully the width and height of the image are within that. Once the browser has this information, it can lay out the page properly, so you don't see that reflow, right?
Like when you're about to click on something, some other image loads, and everything on the page shifts; it's a very bad experience. That's why browsers try to lay out the page as soon as possible. So if you're using native lazy loading in the browser, there's this heuristic where Chrome fetches the first two KB; hopefully your EXIF data and metadata are properly optimized, so your actual dimensions are within that first two KB and the page can be laid out. You can see that none of these have loaded yet, but it's downloading just the first two KB of each, not the entire image. When you scroll, then the full images get loaded; you can see them loading here. And when this is implemented natively, the browser has way more information than you usually have in JavaScript: the browser knows, for instance, that if the user is on a slow connection, it should probably start lazy loading images that are much further down as well, because if the user starts scrolling and only then do you start loading the images, that's a bad user experience. Building it natively, the heuristics can be better fine-tuned at the browser level; I don't know if Chrome does this now, but it can. So I think this is a good thing overall. These are just artificial numbers, but yeah, good idea. I can talk a bit about how we built our image optimization pipeline. We optimize a lot of different things, not just images, but the pipeline is roughly the same. These are some of the image optimization things we do. We talked about formats and quality; don't use one fixed quality for all your images. Some images look very good at quality 30, and some look bad even at quality 95. The quality number basically doesn't mean anything; it doesn't correspond to how a human perceives the difference between two images. That's why you need to use metrics like PSNR or DSSIM.
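One way to use such a metric is to search for the lowest quality whose similarity score still clears a threshold. In this sketch `scoreFn` stands in for "encode at this quality and compare to the original with SSIM/DSSIM"; the quality ladder and threshold are arbitrary examples:

```javascript
// Per-image quality selection: try qualities from lowest to highest and
// keep the first one whose perceptual score is still acceptable.
function pickQuality(scoreFn, threshold, qualities = [30, 50, 60, 75, 85, 95]) {
  for (const q of qualities) {
    if (scoreFn(q) >= threshold) return q;
  }
  return qualities[qualities.length - 1]; // nothing passed: least-lossy option
}
```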
This one, which I probably don't have time to cover fully, is called the structural similarity index. What it means is that it doesn't do a per-pixel comparison; two images can have wildly different pixels but look exactly the same when you look at them. Instead it looks at the actual structure of the image to judge how different two images are. Using this, we can figure out that, say, if I compress to quality 60 and the structure of the image doesn't change much, a human isn't going to see much difference, so I can choose 60. For another image, I might find that even at quality 95 the structure changes a lot, even under very mild compression, so I shouldn't compress it aggressively. We also talked briefly about different networks, progressive JPEGs, metadata; for SVGs there's a lot of stuff you can do to optimize them. Dynamic resizing: if you have Client Hints, you use them, or we fall back to the type of device if Client Hints aren't available. Different optimization modes, EXIF data, optimizing based on network speed... and GIFs. Yeah, please don't use GIFs. I can touch on that: what Safari has done is let you put a video in an image tag now, so you don't need a separate video tag. Let me see if I can demo that; if you have Safari, you can play around with this. It's a very simple HTML file, but the main thing is that I'm loading a video in an image tag. This is just some random video, but you can see that it's an image tag and it's playing. The main thing is that Safari also does some extra work here, because videos behave differently from GIFs.
So Safari sees that you're loading a video in an image tag, and it automatically loops the video, automatically mutes it, because GIFs don't have sound, and automatically starts playing it, with no user interaction required. This works only in Safari; the other browsers have decided not to implement it, for a few reasons. Again, you can see the video has finished but it's looping again, like a GIF would; they're trying to mirror what a GIF does by default. One main reason the other browsers won't do this is that videos load differently. Browsers have something called the lookahead scanner, which starts even before the browser has properly parsed your DOM into elements: it just spots things that look like URLs and starts loading them, so that when the browser actually constructs the DOM, the resources it needs are already on the way. Browsers don't do that for video, because you may never play that video, and they don't want to waste bandwidth downloading videos you don't watch. So video in an image tag doesn't play well with the lookahead scanner, whereas a standard image starts loading before the DOM is even constructed and the actual network request goes out early. That's one trade-off you need to think about. The second reason is that there are newer image formats coming out which are basically frames of a video: if you look at an individual frame of a video, it's an image, and the same compression techniques that have been used for video are being exposed as image formats that browsers can use.
So Apple already has its HEIC/HEIF formats, which it's promoting, so any photo you take on your iPhone is that by default, but I think those have some patent issues. Anyway, the industry at large is moving to this new format called AV1, which wouldn't suffer from the issues GIFs have. So people feel that sending a video through an image tag is a hack, and the other browsers are not going to implement it. But again, what we do is: if the request is coming from Safari, we automatically send a video; if it's coming from Chrome, we send an animated WebP; if it's coming from Firefox, we send an animated PNG; and if it's coming from an ancient browser like IE10 or something, we send a GIF. These are things we can do automatically on our side, but I've given you sort of the building blocks of how each of these things works, so you can implement whatever is interesting for your site, and you can now actually look at different encoders, hopefully understand some of the terms that are there, evaluate what's going on, and see what's best for your company. I think this workshop ended up with a lot more talking than I expected, but I realized this stuff just requires a lot of theory behind it. I've only scratched the surface, but hopefully you can go home and play around with a lot more, now that you have the basics of what it takes to optimize images on your site.
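The per-browser animated-format negotiation described above can be keyed on a pre-classified browser family. Real servers would classify via the User-Agent or Accept header; that part is omitted here, and the format labels are my own:

```javascript
// Map a browser family to the best animated format it supports,
// following the fallback chain described above.
function animatedFormatFor(browser) {
  switch (browser) {
    case 'safari':  return 'video';         // video in an <img> tag
    case 'chrome':  return 'animated-webp';
    case 'firefox': return 'animated-png';
    default:        return 'gif';           // ancient browsers, e.g. IE10
  }
}
```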
I think that's about it. If you want me to touch on anything else, just let me know; otherwise, yeah, thanks. The slides and everything are mostly on GitHub. There are a few bugs in the code that I just found, so I'll update it as we go.

I've got a question from the audience: comparing progressive JPEG with loading, say, a one-kilobyte placeholder first and then loading the full image later, which would be the better option?

That's a good question. There are different things you can do: you can use LQIP directly, you can use progressive JPEG, or there are other image formats like WebP and so on. Oh, before I forget, just a second. There are these other modern formats like WebP, JPEG XR, and so on, and the reason I talked about decoding time and how to measure it is that just because an image is smaller doesn't mean it's better. For example, one big company, Trivago, you might have heard of them, found that the JPEG XR images they were sending to their IE users took much longer to decode, even though they were smaller. That's because JPEG has been around for, what, over 20 years, so decoding is heavily optimized and can be hardware accelerated, the GPU can accelerate it, so it decodes much faster. These newer ones like WebP and JPEG XR, even WebP, right, they're smaller, but the decoding is not as fast as JPEG. So look at that too before you choose which format to send to users.

With respect to LQIP versus progressive JPEG, the main difference I see is that progressive JPEG takes more CPU time to decode, but again, that's going to be negligible. The thing is, with LQIP you also need some JavaScript to do the switching around and so on; with progressive JPEG you don't need to change your code at all, you just need to change the way you encode
images. So that might be less intrusive to start with. And with progressive JPEG there's also a lot you can do. What I showed you was the default behavior you get with most JPEGs, where you see the rough black-and-white version first, but you can even control that loading. You can send a very, very low-quality version of the image first, almost like an LQIP, using what are called custom scan scripts; just Google for that later on. So with progressive JPEGs you can mimic almost LQIP-like behavior with custom scan scripts: you get a very low-quality placeholder first, and the rest of the image is encoded on top of that. You sort of get the best of both worlds: you don't need to change your HTML, you don't need to add JavaScript to switch over, it's just an image, it works everywhere, and you also get a placeholder much faster.

Mozjpeg versus libjpeg? So mozjpeg is an encoder that generates JPEG files, and it generates progressive JPEGs by default. If in some case you don't want a progressive JPEG, because of some trade-off that I mentioned, then you probably need to use libjpeg, but for any other case mozjpeg is definitely way better.

I've also talked about this elsewhere; there's a YouTube video where I mentioned how we built this. It's more DevOps than image optimization or anything to do with JavaScript, but if you're interested I can go through it as well. It just shows how we built the image optimization pipeline. I'm saying image, but basically JavaScript, CSS, fonts, SVGs, everything uses the same pipeline. The basic idea is this: you notice some of these things are actually compute intensive, right? We want to be on demand; we don't want users to upload assets to us. We can sit in front of your S3 bucket or your own server, but what we want to do is, any time we want to generate a new
version, or if it's not in our cache, we ping the origin server and directly stream the unoptimized, unmodified image to the client, and we set a very low cache time on the CDN. In the background, either with Lambda or spot instances, we do all the heavy lifting, optimize it, and then update the CDN with the optimized asset. That's roughly how we get past the latency problem. If you're able to pre-process everything, go ahead, but the thing is, you can see how even one image can turn into as many as 20 versions, depending on format, quality, resolution, progressive or not, subsampling or not, all these different things, right? The advantage of doing it on demand is that you only generate what you need, but you have to look at the latency, because you don't want the first user to keep waiting while you don't have that asset yet. We got past that by streaming the original response first; by the next time someone comes from the same device or browser, the optimization is complete and we pipe that back.

If you're doing it on demand, you also need to cater for traffic spikes and things like that. A website easily has 50 or 60 asset requests per page, which is very normal, so you'll instantly get 60 requests to our system, and if you launch, say, 100 new pages on your site, that can lead to thousands of new requests in just one second, right? So you need to figure out how to handle that. There's a lot of messy stuff when you do things like this: you might find improperly formatted pictures, you might find PNGs which say they are JPEGs, you might find HTML files which say they are JPEGs. And after you do all this, you probably want to put it behind a CDN, and how you update the CDN is interesting as well. The thing with us is that the URL stays the same, so if you have something like *.dexsecure.com/cat.png, so
if you use this on a mobile versus a desktop, like what I just showed you with client hints and so on, you will get different images. That was an easy case, because it was directly hitting my server, but what happens if you put a CDN in between? If the mobile visitor comes first, the CDN will cache the mobile version, and then when a desktop visitor comes, you need to tell the CDN: hey, don't serve the mobile version. So what other tools usually do is ask you to change the URL, adding something like w=200, putting the parameters in the URL itself. What we want to do is make as much of this automatic as possible, so when you add the CDN layer you need to worry about caching properly and things like that, which, if you're interested, come talk to me about later. That's about it, I think.
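The on-demand flow described above, stream the unmodified original on a cache miss, queue the heavy optimization work, then serve the optimized copy on later requests, can be sketched as a toy model. All class and function names here are invented for illustration; in the real setup the "background job" would be a Lambda or spot instance and the cache would be the CDN:

```javascript
// Toy model of an on-demand image optimization pipeline.
class OnDemandOptimizer {
  constructor(fetchOriginal, optimize) {
    this.fetchOriginal = fetchOriginal; // url -> original bytes
    this.optimize = optimize;           // bytes -> optimized bytes
    this.cache = new Map();             // url -> optimized bytes
    this.queue = [];                    // pending background jobs
  }
  request(url) {
    if (this.cache.has(url)) {
      return { body: this.cache.get(url), optimized: true };
    }
    // Cache miss: stream the original immediately so the first
    // user never waits, and optimize in the background.
    const original = this.fetchOriginal(url);
    this.queue.push(() => this.cache.set(url, this.optimize(original)));
    return { body: original, optimized: false };
  }
  runBackgroundJobs() {
    while (this.queue.length) this.queue.shift()();
  }
}
```

The first request for a URL returns `optimized: false` with the original bytes; once the queued job has run, subsequent requests get the optimized version from the cache.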
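The caching problem just mentioned, same URL but different variants per device, can be handled at the edge by widening the cache key instead of the URL. A minimal sketch: the header names follow the Client Hints and content-negotiation specs (DPR, Viewport-Width, Accept), but the key layout itself is invented for illustration:

```javascript
// Build a per-variant cache key so a mobile response is never
// served to a desktop visitor, while the public URL stays stable.
// `headers` is assumed to be a plain object with lowercase keys.
function variantCacheKey(url, headers) {
  const dpr = headers["dpr"] || "1";
  const viewport = headers["viewport-width"] || "unknown";
  const fmt = (headers["accept"] || "").includes("image/webp") ? "webp" : "legacy";
  return `${url}|dpr=${dpr}|vw=${viewport}|${fmt}`;
}
```

On a real CDN the equivalent mechanism is a `Vary` header or custom cache-key configuration; whatever signals you fold into the key, the origin must respond consistently for each combination.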