This is Yoav, I'll try to keep everything very brief as we get established. So Yoav's been doing quite a lot of work with responsive images of late; he's heavily involved in the Responsive Images Community Group. Then we have Peter Miller. Peter Miller's a developer, works on a lot of heavy, content-heavy, image-heavy websites, great experience. John Robson also works, works for... Yammer? Sorry for that. Has written fantastic stuff about web performance, has some great ideas about how we can address some of the responsive images problems using progressive JPEGs. Estelle Weyl. Has everyone heard of the clown car technique? The creator? So we're going to hopefully talk a little bit about that. John Mellor works for Google, and John Mellor is probably the only person on Earth who can tell you the difference between a device pixel, a real pixel, a CSS pixel, and all sorts of other pixels. And he can do it all in his head; it's pretty crazy. So first up we have Yoav, who's going to give a 10-minute talk, basically outlining the current solutions that we've been discussing for the last two years, trying to come up with a solution for responsive images. Take it away. So we'll swap seats. So hi. As Marc has said, I'm Yoav Weiss, and I'm here to talk about responsive images. I'll try to sum up two years of discussions into a 10-minute talk, so bear with me. So first, in the mid-2000s, all we had was mobile-only sites. They were kind of lame, with very slim content and highly optimized images. It got a little better with the iPhone, but it wasn't that hot. Then Responsive Web Design became a thing, which was very cool: one code base to rule them all. We can serve all the devices through a single site. But the problem is, it was kind of slow. It became a synonym for slow mobile websites, which is a problem. The reason is that most sites serve the same resources to both mobile and desktop, and most of that data is images.
So there are a lot of savings to be made. I wrote a utility which Tim Kadlec ran — Tim Kadlec is a developer, sorry for the name dropping. But basically we saw that up to 72% of the image data can be saved for some of the viewports in some of the cases, so there are a lot of savings to be made. And Retina only makes things worse, because the gap between the smallest images you want to send and the highest-resolution images you want to send is getting bigger. And with most devs owning Retina devices, most devs are sending high-resolution images to all devices. So this is the responsive images problem, which we like to divide into two major use cases. The first one is resolution switching: serving different images to different devices — different-dimension images for different devices. The images are the same images, same proportions, they're not cropped, but basically the quality is different. This is one example of that. It can be further divided into DPR switching, which is serving Retina images only to Retina devices, and viewport switching, which is adapting the image dimensions to the size at which they will actually be displayed. The other major use case is art direction. Basically, it's content optimization without wasting too many bytes: matching the images to the layout in a way that makes sense. So either a crop, or different proportions, or something that works according to the actual responsive breakpoints. In a survey we ran, a lot of developers are already doing that using hacks, so this is a major use case. So we've talked about the problem; let's talk about the solutions. There are several proposed standard solutions: the srcset attribute, the picture element, the Client-Hints headers, and the responsive image container — I put a question mark on that last one because it's not really a standard solution, just a proposal at the moment. I'll talk about each one in detail.
So srcset: basically it's the same old image tag, now with a new attribute that can include multiple resources according to the DPR and the viewport. This is a slightly controversial statement, but it addresses mainly the resolution-switching case and much less the art-direction case; some people disagree. srcset is currently implemented in WebKit and in Blink — it's behind a flag in Blink and not yet shipped in WebKit, but it's there in the codebase. Firefox will soon follow. And basically it looks something like this: you specify the 1x or 2x or 1.5x or 3x qualifiers for each image resource you add to the page. The entire spec also includes viewport switching, which looks something like this: for each resource, you specify the max viewport for which it applies and the x factor that's adapted to it. The problem with that is that it gives you a lot of expressive power, but in some cases, like in the example I put up, you have to define a single URL several times because it fits several DPR and viewport combinations. Then we have picture, which is mainly targeted at art direction. It's an element with multiple source children, each one specifying an image resource based on media queries and possibly type. The first matching resource is downloaded and displayed. It looks something like this. And as you can see, it can mix srcset into the source elements, so you can define an art-directed image with multiple DPR versions of it. And the media attributes you use here are most probably the same media values you use for your layout breakpoints, unlike viewport resolution switching, which can be independent of the layout viewport. Then, as a third proposal, we have Client Hints, which, unlike the other two, is not a markup-based solution; it's an HTTP-based solution. Basically, the client sends out its capabilities.
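The slides being referred to ("it looks something like this") aren't reproduced in this transcript. As a rough sketch of the two markup shapes under discussion — file names and breakpoints are made up, and this reflects the early draft syntax of the time, which later changed:

```html
<!-- srcset, DPR switching: the browser picks a resource by device pixel ratio -->
<img src="photo.jpg" srcset="photo.jpg 1x, photo@2x.jpg 2x" alt="A photo">

<!-- srcset with viewport switching (early draft syntax): a max viewport width
     plus an x factor per resource. Note how the same kind of URL has to be
     repeated for each DPR/viewport combination. -->
<img src="small.jpg"
     srcset="small.jpg 320w 1x, small@2x.jpg 320w 2x,
             large.jpg 1x, large@2x.jpg 2x"
     alt="A photo">

<!-- picture, art direction: the first <source> whose media query matches wins,
     and each source can still carry its own srcset for DPR switching -->
<picture>
  <source media="(min-width: 45em)" srcset="wide.jpg 1x, wide@2x.jpg 2x">
  <source media="(min-width: 18em)" srcset="crop.jpg 1x, crop@2x.jpg 2x">
  <img src="crop.jpg" alt="A photo">
</picture>
```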
It sends out hints to the server saying: this is my DPR, this is my viewport width or height — the actual values are still debated, but this is the general spirit. And everything is done on the server side; it's server logic that serves one resource or the other. One recent change to that proposal is that it's opt-in only. The hints are not sent on the first HTML request; on the other hand, it saves us from adding data to requests where the server is not going to do anything with it. So that's a recent compromise that may be able to push the spec forward. Then I have something I'm proposing as more of a long-term solution. This is not something that's on anyone's immediate radar, but it's a long-term solution I'm proposing. That's a file-format-based approach, where basically each target resolution that we want to serve is represented as a layer in some sort of container. These layers build up one on top of the other, so that the browser can download a certain number of layers and then add more layers on top, enhancing the quality of the image. It can address both resolution switching and art direction. I'll just show you a bunch of examples. So basically, for resolution switching, we have this photo: if we look into the layers that compose it, it's a thumbnail, then an enhancement layer, which is basically a diff between the upscaled thumbnail and the downscaled original image, then another enhancement layer, which is used to recreate the original image without adding many bytes to the process. The overhead is very small. And for art direction, the same can be applied: this bigger image that's used everywhere can be split, for art direction, into a crop, then an enhancement layer, and another one. The advantages are that markup is not touched, and you have a single file per image, so it's easy to maintain.
And the best one for me is that the browser can just download diffs: if it had downloaded one version of an image and then something in the browser's environment changed, it can download the diff. The disadvantages are that it's complicated to implement, and basically the decoding performance and the network performance without HTTP/2 are currently a mystery. We need to investigate further in order to know whether it's feasible without HTTP/2, and what the decoding performance is. And there are last-minute slides added by John Mellor here — maybe you want to talk about that? So, a possible way of making this responsive image container load more efficiently, when you're making several range requests, is to not leave gaps where you wait for a round trip each time: we can load all the images on the page in parallel, progressively. So I've taken a website — I've stolen this. On the left you see the images being loaded one by one, sequentially. On the right you see the images being loaded all in parallel. It's the same progressive JPEGs on both sides; just, on the left I'm truncating the stream of images so that you get the first image and then I truncate at some point, and on the right I truncate all the images at the same percentage. So this is 5% of the image bytes, and you can see that on both sides the pages look great. But by the time you've got 10% of the image bytes, on the right you can already kind of see what the page looks like. All the images are really blurry and so on, but they at least sort of fill their space well. Whereas on the left you can see the start of the top-left image, but all the other images haven't even started loading yet. As you gradually load more — say you get to 25%, for example — the page on the right, where we've only loaded a quarter of the image bytes, already starts to look okay. The images aren't crisp, but they're still perfectly usable.
Whereas on the left we have this super-crisp, lovely image on the top left, but the other images haven't even started loading. By the time you get to around 50%, the page on the right now looks perfect — you can almost not tell that it hasn't finished loading the images — whereas on the left, again, we've only got half the images. And then, as you load more bytes, the page on the right gradually becomes a super-crisp, Retina-beautiful page, but it's a very subtle difference between that and the 1x image. Whereas on the left, only now, at 100%, do we actually have all the images visible at all. This is great. I love that you're giving something to the users while they're waiting. That's brilliant. That's what I love about progressive JPEGs: it will download the first scan as soon as possible. So I think that's kind of a good lead-in to the first question that we have, which is from Jake Archibald — hope I pronounced that right. Jake, do you have your question with you? Yes. Mic runner, please. There goes Pete; he's going to be really slim by the end of the day. Okay. So, I currently send a 2x or 2.5x image and just compress the hell out of it, and the file size is roughly the same as a 1x image, and that seems to do the job. Why do we need all these extra markup examples? And if, in the future, John's solution can come in and stop the download at some point, surely that's all we need? So, good question. Shall we start with the difference between fixed-width and flexible-width images? So this ties into what Yoav was saying earlier about resolution switching. There are two kinds. There's DPR switching, where your images are fixed size, like a logo or an icon: it's going to be the same width, say 32 pixels, on all devices, and all you need to do is switch it out based on the device's screen pixel density. So you might need a 2x image on a Retina screen, a 3x image on, like, a Samsung Galaxy S4, that kind of thing.
But all you're doing is changing it based on the device pixel ratio. With a flexible-width image, say you've got width: 100%, then suddenly the width of the device's viewport matters. And so your phones, your tablets, your laptops are all different widths, and they all need different images. And here, a simple technique which gets you up to double the resolution of an image by compressing it more heavily isn't going to scale to, say, an 8x bigger image. Cool. One more thing, regarding the compressive images. Explain what compressive images are — what Jake was talking about. Basically, taking the high-res images and extremely compressing them, so that when they're downsized to 1x display dimensions, they still look good. And on Retina they also look quite fine, because the actual display dimensions are smaller. I'm seeing a few people looking at you, so let's see if we can explain that a little better. Does anyone want to have a go at explaining it a little better? I mean, I can try. So, somebody discovered that if they keep the resolution really high in an image, but they put the compression down to, I think, zero — is that correct? That's correct. — then they'll actually get a smaller file size that is higher resolution, and it looks good on Retina screens, and you have a smaller file size for speed. So it's actually the best of both worlds, and it's kind of an amazing discovery. Yeah. I'm sure if we go back to 1996 we might find the same thing, because we were doing it for modems and stuff anyway. So, I guess what I'm wondering is: what might be some of the side effects? Because we're sending four times the data, right? Basically sending images two times or even more bigger than their display size. Yeah, it might be like three times. Yeah. So, can the browser really handle that? Like, do we know if there are side effects within the browser? Yeah, good question.
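For readers unfamiliar with the "compressive images" trick being discussed, a minimal sketch (the file name, dimensions, and quality setting are made up for illustration): export the image at roughly twice its largest display size, crank the JPEG quality way down, and let the browser's downscaling hide the artifacts.

```html
<!-- Hypothetical example: an 800px-wide JPEG exported at very low quality
     (say, quality 25), displayed at 400 CSS pixels. Downscaling hides the
     compression artifacts, a 2x screen uses the extra pixels for sharpness,
     and the file is often smaller in bytes than a clean 400px 1x JPEG. -->
<img src="photo-800w-q25.jpg" width="400" alt="Product photo">
```

As the panel goes on to discuss, the byte savings come at a cost: the browser still has to decode and hold the full 800px bitmap in memory.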
Maybe John's in a good position, because he's closer to Chrome. We don't really have the research yet. So, yeah, it's going to take four times the memory. On an image-heavy page that might blow your memory budget, and suddenly things like painting and scrolling might get slower. It's hard to know; we need more research, basically. Right. Yeah, we need to experiment with that. But it's a super interesting idea. Yeah. Basically, it's on the agenda to get some real hard data on the implications of that. On the agenda of who? On the Responsive Images Community Group's agenda: to get some hard data on that for the DPR-switching case. But, exactly as John said, it doesn't cover all the cases, and it can cause decoding performance and memory issues. So if you're using it, you need to test it well to make sure there are no problems, because we don't have general data. So, I'm going to bounce a question to Peter Miller, because he's kind of... he's working on a lot of... like, you work on some fashion websites, and obviously you'd have access to the very high-resolution pictures that are coming straight from the photographers. What would this mean for you guys? Would it mean anything? Well, first of all, I've looked at the solution, and I've got a Retina MacBook, and I think it's still a matter of subjective opinion, maybe, whether this does look as good as real Retina images or 2x images. But possibly the bigger problem, even today, is that, okay, we can send these kind of pseudo-2x images down to be shrunk down to 50%. But what happens when we maybe want to expand those images, or have them be more of a percentage width? And then suddenly we're actually showing those highly compressed 2x images at their physical 1x size, and then maybe they look even worse. Okay.
I think whenever we have this debate, we also still have to remember people who have metered bandwidth, because, yeah, on your MacBook Pro on your Wi-Fi it's going to be great, but someone doesn't want to download that when they're paying $19 for... Yeah, absolutely. But let's hold that one, because it takes us off to the following questions. So I don't want to go into those exact concerns, because what Anne originally said was that you can make a larger image that's actually smaller in kilobytes — so in a sense it doesn't really apply, but it does, and we're getting exactly to the point where it makes... We also have to think about Android 2.3 and older Androids, which are still being sold today, because they don't have the memory capacity. Right. Even Firefox OS has, like, 256 megabytes of RAM, and it struggles, you know? So, absolutely. So, anyone want to... I think we should probably move on to the next topic. So, the second question: is there a George Crawford? Hi, this is one of the anonymous questions from Google Moderator. Is DPI negotiation only a stopgap? Bandwidth keeps growing exponentially, LCD prices are dropping, GPUs benefit from Moore's Law. In several years, will it make sense to just send high-resolution images to all users? So, I mean, it just goes back — if we look at all of computing history, right? You know, 640K being enough for everyone, and Moore's Law, the computers just get faster, and yet we kind of seem to find ourselves in the same situation over and over again. Like, back in 1996, like I was saying before, we had modems, and we had to compress the crap out of images then as well, for users that were just getting cable, and those who didn't have cable and were still on 56k modems, blah, blah. So, thoughts? Should we really worry that much? Should we stop caring? I know everybody's fully excited, so... Let's do a round, but...
and bounce back if we can, but let's keep it short. I mean, I think it would be nice if we could just send super-large images, but I don't think we can do that quite yet. And I think responsive images is such a topic because we're getting smaller and slower, and we're getting bigger and faster, at the same time, right? So this is spreading out. So, yeah, I obviously have my opinions, but I think that a responsive images container could possibly be a progressive JPEG that has really small scans and really large scans, and possibly the browser only downloads what it needs. We have a delegate in the queue. I'll just get Mike Petridge ready — if you can jump to Mike. So, Pete, do you want to quickly jump in on this? Well, staying in a hotel room in New York City, it actually feels like we're quite a few years away from having very fast connections everywhere. But also, you know, you've got to think of data plans, roaming. You want to give the users maybe an option to have a low-resolution mode, but not if we're just always sending the high resolution. And it also doesn't allow for the art-direction case; I don't think this caters for that. I think that, as Anne started to say, Moore's Law is not giving us expanded bandwidth. I mean, we get a larger range of devices; we have capable smaller devices; and the bandwidth coverage is not ubiquitous. So basically, it's a question that asks us whether we can look into the future, and unfortunately, we can't. In 20 years, it may be relevant. I mean, we can't look into the future, but we can see what the trends are, right? We know that we now have Google Glass and Samsung's watch and things like that, so in a sense, we know a little bit of where machines are going. And again, we're going to see the same cycle. So basically, the problem is getting worse. So, we found Mike. Do you want to jump in? Yeah, just a quick question about...
So, one of you mentioned: what if you have a small image — I mean, using high compression at a large size — that was initially shown small but then expanded? It seems to be a consequence of the JPEG format itself. What about new formats such as WebP, and using things like that? So, maybe I'll get John to respond to that one. So, I'm not an expert on WebP, but there seem to be two things that are nice. It gives slightly better compression performance — ranging from 25 to 60 percent, depending on what you're doing with it. But also, when very highly compressed, you get less of the sort of blocking artifact you get with JPEG, so you can actually afford a greater compression ratio than you would use with JPEG. So that can help, but there are issues with browser support and so on, of course. Also, with WebP, perceived performance is slower than progressive JPEG, right? Because you're actually getting a scan earlier than with WebP. Even if a WebP is a smaller file size, a progressive JPEG is going to beat it every time — am I right? Somebody can correct me here, but it doesn't support progressive loading. You know, that's a good question. I think, at least from Mozilla's perspective, following the bug about WebP, that's a real showstopper for us. Correct. So, I'm not sure whether a future progressive WebP can answer that, or a progressive JPEG, but basically, I think it's not a question of image format, because we have progressive image formats, or we can easily come up with them — like the responsive image container stuff. It's a prototype, but it's not complicated to get this done. The problem is that currently there is no fetching mechanism in place that can download only the start of the image for low-resolution devices, and download the entire image for high-resolution devices that have the bandwidth and the capability to decode it.
And I think that getting the fetching mechanism in place would be an enabler for such optimizations, for such formats. So, I want to quickly get Estelle's thoughts on all this. So, going back to the original question, I think we're never going to be able to have just one image solution, because you don't want to have the same image if your device is this big versus if your image is this big. For those listening on audio: you don't want the same image on a 20-inch display versus a half-inch display. But we actually could do that, right? Because with a progressive JPEG, you can have a variable number of scans, and the scans are of increasing quality. So you could have, say, 40 — like, let's expand our minds about this. Correct? Like, we could. If you're getting into the 40-times-bigger zone, you will get color distortion. I'm talking about the art direction. If you're going to... Putting the art-direction topic aside for a second, you actually can have a progressive JPEG that has a very small scan and then the very high resolution. So you can have a tiny image that downloads very quickly, and then a super-large HD image for... Yeah, but I think we can't put art direction aside, because we are serving so many different devices. So, to answer the question: I don't think serving one image will be the solution in the end, because we are reaching such a... I mean, it's... We're going to kind of continue to... The next question is very much related to this. So I'm going to... Before I do that — Kyle, do you have a mic already? Where's Kyle? Sorry, Kyle Simpson. He's over there. So I'm going to queue up Kyle; he's right there. So, Kyle. Just a quick question. A lot of these solutions seem to be sort of art-direction-centric — like, I want the best possible images that can be there.
But responsive seems to respond to maybe the screen size, maybe to the bandwidth, but it doesn't seem to react to the environment. Say I start out loading a Flickr page when the battery power's at, you know, 50% — I've got plenty of processing power — but if I'm now at 2%, maybe the device should start choosing not to render these higher-quality things. So can't we have solutions that allow apps to respond to more complex situations than just the screen size? So, a quick... We're going to cover that as well, but somebody wants to make a quick comment. Just a quick comment: that's something that certainly should be possible when we're talking about resolution switching — when we're talking about things that won't break the layout, but would just give the user a lower-quality image when it can't download the high-res one. And this is something that should be heuristically possible with srcset. The srcset spec basically contains an asterisk saying that eventually the browser can do whatever the hell it wants. So the browser can decide — based on user preference, based on environment, based on battery — to not download the high-res image, but the lower-res one. So it's kind of a good thing and a bad thing, but it does mean it's a declarative model. You're handing over control. You're saying to the browser: this is what I've got, deal with it, do what's best for the user. Yeah, I don't think this should be something decided by the web developer, because I don't think they have this kind of information — we cannot have this kind of information available to the web developer. This is something that should be done by the browser, with the user's preference. I'm going to be a moderator here and jump over to the next question, because they're all related anyway. Calvin Spielman, your question. So — mic runner?
Yeah, I was just wondering whether we are fighting a losing battle by continually generating all these different image sizes and different resolutions up front, when there are constantly new devices, constantly different sizes. We're always going to be constantly catching up and generating more and more images, as opposed to having some server-side solution that does it dynamically and optimizes the set of images we have. So, I think this ties in kind of beautifully to the stuff that John was presenting before, and also to the progressive JPEG with those multiple scans. So, John, with the kind of work you've been doing, what are you seeing — what are the numbers basically telling you? So, I guess there are two aspects to this question. On the server side, sure, you don't want the artist to be manually saving out, like, 20 different versions of every image — it's not scalable. So on the server side, at the moment, you kind of have to be dynamically creating these images by automatically resizing them. But then — and I guess I like this as well — it would be nice if there were only one image you had to save. Like, you just told Photoshop or whatever to save one ultra-high-resolution image, and the browsers download just the beginning of it — however much they need. Yeah, that's perfect; I think that's the most elegant solution. I think the second most elegant solution is, like, with Client Hints, having the server serve up different resolutions, different versions of the image — but it has to be automatic. Like, we should not be creating X number of images to serve to different devices. We are fighting a losing battle in that case. I think... quick audience question — did you have a question? Yeah.
But if you have that situation, aren't you going to end up with the thing all the developers were complaining about when the operators started to compress your images on mobile networks, and they said: just get out of my way, I don't want you to touch my stuff? If you've got some automated system that sits in between, you're creating the same problem that, three years ago, you were all whinging about. My thought was that it should be server-side, so that the developer is actually deciding what you're sending over. The browser should not be altering images, is my thought. But the developer doesn't know the bandwidth of the client? No, I mean, when all is said and done, when we have Client Hints and everything else, it should be the server that's making the 20 images and serving the correct image based on the client hints or the srcset — not the browser taking an image and deciding that the upper-left-hand corner should be shown instead of the middle of it. It seems a lot easier for the browser to take the user's preference into account, though. If the user decides that they're roaming or something and they only want the very low-resolution image — then do you want an extra client hint saying "I'm roaming", an extra client hint saying these kinds of things? No, the browser can... you can change... like, when you have client hints — I guess, should we ask the question about client hints that we're going to ask later? No, because I don't think we need to move to that question yet, because we are dealing with, or discussing, a very serious problem, which is: A, how can computers really decide this? How do we set the breakpoints — or not even the breakpoints, but, you know, "this looks good here, so send this", and so on. So having that level of control taken away — which, as Steve was saying before, was pissing developers off, because it didn't look good.
And to some degree, that's what we really need to look at: can we do that computationally? And John's research suggests that we can, because the user at least gets the initial layout with the beginnings of nice images, and then from there, you start progressively improving them. How much you need to push it, and what that means on the server, is kind of hard to know. My opinion on this is that basically you need all solutions to be automatable, so you can do either dynamic generation or a build step or something on the server side that does all the grunt work for you as a developer. But we must also have solutions that don't require it. So I've got two questions from the floor. I got one from — sorry, your name? Oh, sorry, okay. I thought you... okay, there, go ahead. So if we start generating 20, 30 different versions of files on the server side, and start doing Vary on client-hint headers or whatever, are we going to have problems where the edge caches just can't keep all of these? I mean, we're going to be blowing out 20 times the number of files on all the edge caches. Are CDNs going to be useful anymore at that point? How's it going to scale? I think — since client hints is one hint per header, Vary would work — but I think the edge caches will have to adapt to the new reality of many more images than before. It may be exponential, but there will be time to adapt, as far as the edge caches go. That's my opinion. Hang on, don't speak without the mic. So I was going to jump to... So I'll go to Matt — I think you had a question. No? You're good? Okay, cool. So, Peter, you're looking like you want to say something.
Well, I mean, when the original question was first asked, the case that came to my mind was still the art-direction case, because it is very important for the publications that I work on, and the crop has to be right. And that's why we do do up-front generation: we have picture editors, and they are in charge of saying, when this image is displayed at this size, here's the crop I want. But I've worked with content management systems in the past that will define that with given coordinates. So yes, we can automatically resize images dynamically, but maybe we could allow picture editors to come in and draw some coordinates for different use cases in different contexts. And on the question about whether it's the browser that decides or the developer that decides, I think it has to be the browser. It's not just the resolution of the screen; it's not just the size of the screen; it's not just the battery power. It's actually everything else to do with what's rendering that image element, and it's not just the HTML — it's the style sheet as well. So, this leads beautifully to the next question, which is by Jeffrey Zeldman. Is he here? There he is. So, this is an annoying theoretical-purity question supplied by the moderator, and I have a complicated relationship with it, because since 1998 I've been beating the drum for separation of presentation and structure, but I'm also a big supporter of Mat Marquis and picture. Hey, buddy. Is it problematic that we describe the presentation of images in markup, against our typical mantra to separate presentation from content? And if so, does the specification of a myriad of sizes make this worse? So, we saw this. Hopefully you had the same kind of gut reaction when you saw both srcset and picture. Like, when you saw the code up on the screen, you kind of went: uh, really? We have to type all those 1x's and 2x's? And you saw picture, like, bloating all over the place.
And it has media queries in it as well. So, this goes straight to — I know Jeffrey didn't ask the question specifically, but, like he said, it's bad because we are putting media queries into our markup. What can we do there? Are there possible solutions? So, the reason that there's a difference between images and background images is that the image tag, IMG, is a foreground image. It's content, versus all the design that we have on the web page. So, yes, we do have to keep it; the image is actually content. Right. But what about the media query component? The media query should be... I mean, that's why we're trying to come up with all these different solutions, and why the picture element and srcset look so ugly, and actually why I like the clown car technique: because it actually separates out the content. So, just for people who don't know the clown car technique, super, super Twitter-sized. Okay. Three tweets. Basically, instead of pulling in an image, it pulls in an SVG, and inside the SVG, that's where all the media queries are. So, it pulls in the correct image based on the container of the SVG. And it works fairly well. It's basically a stop-gap solution while we're trying to figure out the correct solution. But the reason that I liked the clown car technique is that it actually separated out content from presentation, from behavior, from images. I think it absolutely is a problem that we're defining the media queries in the HTML. I don't think it's a problem of bloat; that's fine. I think we can give the HTML all the sources of the images, as I was saying before. But I think the problem is illustrated in the case where, let's say, on a 500-pixel-wide screen — virtual pixels — you have an image at 100% width. On a 700-pixel screen, you might actually have that image at 50% width, because you've got a second column coming in. And, okay, fine: when I'm writing the HTML, maybe I'll take that into account.
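The clown car technique just described can be sketched like this. This is an illustrative reconstruction, not Estelle's exact code, and the image file names are hypothetical:

```html
<!-- In the page: embed via <object> so the SVG's media queries are
     evaluated against the object's size, not the page viewport. -->
<object data="clown-car.svg" type="image/svg+xml" width="100%"></object>

<!-- clown-car.svg (separate file): -->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 300 200">
  <style>
    svg { background-size: 100% 100%; }
    @media (max-width: 400px) { svg { background-image: url(photo-small.jpg); } }
    @media (min-width: 401px) { svg { background-image: url(photo-large.jpg); } }
  </style>
</svg>
```

Because the queries see the SVG's own viewport, this behaves like the element queries discussed later, at the cost of the extra SVG fetch.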
But what if it's not an HTML generation problem? What if it's a render problem, where when a user is logged in, you have a sidebar, but when they're not, you don't? And in CSS, I can have the column at 50%, or 100% if it's got a sibling with the logged-in sidebar. But I'm going to want a different image source to apply to that element. And that's why a technique that takes into account the actual width of the image element, rather than just the width of the screen, I think, is absolutely essential. And that's why I do still have a problem with the picture element. So I think Yoav will want to come in. Yeah. So this is alluding to what some people might have heard of as element queries, which are kind of like CSS applied to a particular container element. It's pretty cool, but it has its problems. So actually, there are several questions here. First, on the separation of content and presentation: I think it's a problem, and I think it's something that can be resolved by DRYing the media queries out of the HTML and into some sort of media query variable. So DRYing being... Don't Repeat Yourself. Thank you. Basically, creating variables that say "mobile" or whatever that means — basically, create named media queries and use them instead of the actual media queries, wherever you have media queries: in your markup, in picture, in external style sheets, or in the CSS as well. There is work in the CSS working group regarding that. I have no idea when it will go in, but people are working on it. And I think that will resolve most of the issues from this separation-of-concerns point of view. Regarding... Hang on. So the question I have is: okay, picture is probably the main offender here. Should we even bother continuing to work on it in that sense? I don't think picture is the only offender, but again, there are style sheets.
If I have my way, media attributes will be everywhere — I probably won't have my way, but I think there are a lot of resources that can be downloaded based on media. And I think we need some shortcut for media queries so that we don't have to repeat them everywhere, including in CSS, because in CSS we repeat them as well. And regarding the element queries stuff, first of all, I'd like to say that the main advantage of the clown car technique versus basically anything else is that it basically emulates element queries. The media queries there refer to the viewport of the SVG, not the viewport of the document. So while it creates some delay in download, there are cases where it's useful, extremely useful. But the problem with element queries is that you cannot start downloading the resource before you have layout, which means you add a significant delay to the entire page load. So, Anne, do you have any comments? I mean, yeah, I'm still a fan of progressive JPEGs, and I think that... The art direction case it doesn't really handle, obviously. But I don't think we should forget about it as a file format that we might explore. And we often do forget about progressive JPEGs. I think we forgot about progressive JPEGs for, like, 10 years. What is the reason we forgot about them? My browser doesn't really support them. Oh. I mean, I think that, in general, there's pretty good browser support. I think that the reason browsers don't support them well is because we stopped using them. And I think we stopped using them because things changed: we had faster connections. But then things kind of reverted with mobile, and we went back to where we were, where speed turned out to be an issue again. So, yeah. Cool. That's it. So there's one question from the audience. Go ahead. Sounds like the best solution for the picture element is to move the...
Basically, to move the rules to CSS. You could go back to a regular IMG and have something in CSS that says: for all images whose path looks like this, apply these rules to add .2x to the path. This allows you to move the more presentational parts to the CSS while still keeping the content — namely, the fallback image — in the HTML. And you could do something like a regex, so you don't have to write this-2x, that-2x, or something-else-2x for the 2x versions of every single image you have. So... And you can sort of do this right now using attribute selectors on the IMG tag. Using what selectors? Attribute selectors. Yeah. But once per src, which will be annoying. Right. So it's kind of a mix of things. I'm sure one of these other guys will be able to talk about the concerns there. I can give a little bit. Some of the main problems we're trying to solve with responsive images as a whole involve integrating nicely into how browsers load images, performance-wise. To block and wait for a style sheet to download, which will give you the instructions to then be able to fetch the files that you need, will probably cause issues. Part of the stuff that Yoav was talking about before, about these CSS-based variables, is that you would need to insert them at the top of the document, inline, so that they would actually parse before anything else. So there are big performance issues with all this. It's a cool solution — I'm not saying it wouldn't work — but working out how all the dynamics play out within the browser is pretty crazy. Does anyone want to add...? I just want to add that it's basically violating the separation of concerns from the other side of the spectrum. Your content is now part of your presentation, in a way, because the content URLs rely on the CSS.
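The attribute-selector idea can be partially sketched in today's CSS, with the caveats the panel raises: `content: url()` on an `img` is non-standard (WebKit-only at best), and each src needs its own rule:

```css
/* Hedged sketch: swap in a 2x asset on high-density screens.
   content: url() on img is non-standard, file names are hypothetical,
   and the rule has to be repeated once per src. */
@media (-webkit-min-device-pixel-ratio: 2), (min-resolution: 2dppx) {
  img[src="photo.jpg"] { content: url("photo.2x.jpg"); }
}
```

The selection part (matching images by source) works everywhere; it is the URL rewriting that CSS cannot express generically, which is the "once per src" annoyance.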
So you would have to either... I mean, for CSS caching: content images tend to change often, while everything that's in CSS usually changes less often than that, so it can be cached for a long while. So it violates the layers, as far as I'm concerned, in a way. So let's take that one up. So I've got another question from the audience and another one from... Oh, I've got a couple here. I'm going to go there first and then... Sorry, you already have the mic, so go ahead, and then I'll bounce over. Just to counteract that: you say that if you need to download the CSS first in order to display the image, that's a problem — but if the CSS modifies the size of the image, which can modify the art direction and the actual image that you want to use, isn't that important? It is. Like I said, it's all a trade-offs-and-balances kind of thing, because we are trying to keep performance high. So there is going to be a penalty for everything. Where you defer things, like you've already said, there's the separation of concerns; it's going to have issues with how the images are loaded and when. So if things have to be deferred, then you might defer layout, and that's going to impact the user's perception of whatever application you're trying to run. So again, we need to test a lot of this stuff. We don't know. Even though we've been talking about it, now it's really time to start testing some of it. I guess I just had a direct question for Peter — sorry to put you on the spot. Given the art direction perspective, I was wondering if you could talk a little bit about, from your standpoint, what best practice is right now. Given that it's not all just programmers trying to do stuff in an automated fashion — because every time I've tried to do that, I fail, and the art director looks at me and says: no, we need to crop it this way instead. So I'm going to... I want you to answer that.
But I think this is a great question, because it's really about what developers should be doing today. And I think, like, John has some ideas. Anne's already talked about trying out progressive JPEGs. Yoav and I have kind of been working on the standards, trying to look forward. Estelle's been experimenting with the clown car technique. So we have five minutes. So, one minute each: what can developers do today, starting with Peter? Okay, well, here's what we're doing. We're sending lots of the JPEG sources into the HTML as a JSON string on a data attribute. We've got JavaScript running; after the CSS has evaluated the layout, we'll pick the right source and apply it, and on resize, change it. Yeah, the performance isn't great for that. But to answer a little of what you were saying: I think that having a default source there — low-res, kind of your best guess — is an okay way to go for what we have now. Okay. Anne, what do you recommend for developers? I mean, I think that what we have now is a bunch of hacks, right? So that's what we have. I think that we should try and focus... I love that we're doing this, and I think what we're talking about are big wins and solutions for responsive images. I think we should really be forward-thinking and not forget all the different options and all of our different paths, and explore them all. That's great. I think that all current hacks have performance trade-offs. With all current hacks, basically, you're deferring the loading of the image to a later time in order to download the appropriate one. I think that things will look up soon. Things will get better soon: there's currently work in the Responsive Images Community Group on an x-picture polyfill that uses web components in order to emulate picture.
It won't work with the preloader, but assuming you don't have any blocking scripts at the top of the page, it should have similar performance characteristics to image. So current hacks all have problems; future hacks will get better. And then there's srcset — hopefully it will ship in a release build soon enough. I guess the question is what developers should be doing now, and what developers should be doing now is being concerned about what they're sending over the wire and making sure that they're not sending huge assets to limited-bandwidth devices. And in terms of what we, or the spec authors, should be doing — I haven't heard this, and I just thought of it while on stage, so maybe it has been discussed — there's some CSS, on the image element, where you can clip and pick certain areas. So maybe figure out a way to do that on the back end, or through client hints, so that you're actually just downloading that area instead of downloading the whole image. Right, so just expanding on that one: you have your normal image, and then you select the area that you want to crop out, and basically just crop it out with CSS. It's a good way of doing art direction, and it fits quite well with compressive images as well. So I'll give several answers. For CSS, you can just use media queries to switch out the right image; it's in a better state these days. For HTML, there are three different things. For fixed-size images, where you're just switching based on device pixel ratio, compressive images — serving a double-size but highly compressed image — are reasonable. srcset will be nice, but only once browsers support it. For viewport-based switching, where you need to take into account flexible images, I think the best solution these days is to load a very low-quality placeholder, which is directly referenced from your HTML.
And then later on, using JavaScript, you swap it out for an appropriate-resolution image based on the actual image size. I would use the classic — I don't know if this works here, but there was a lowsrc attribute in HTML. I just use src: put the low-quality one in src, and use JavaScript to replace it with a better-quality one once you've loaded that in the background. So you get kind of a progressive thing, right? The page loads in low quality quickly and gradually becomes higher quality. So I actually released a library for this yesterday; it's at github.com/johnmela/respswap.js, very early stages. Finally, for art direction, where you actually need a different image rather than just a different resolution of the same image, you can't load a low-quality placeholder, because you don't know what image it's going to be. And so for that, you can use something like Picturefill or whatever — but don't use Picturefill for viewport switching. So, Picturefill, just for people who don't know what that is: you can Google it, you'll find it pretty quick. It's basically a similar syntax to the picture element, but done with divs and spans or something. And it basically works. But the downside is that the images don't load until the page has finished loading and you've reached DOMContentLoaded, so your images will start loading much later than if you had a placeholder or something. Just to emphasize that: Picturefill should be used for art direction and not for resolution switching. I'd like to add one comment, because the reason Picturefill is an issue is that it runs on DOMContentLoaded. We should make DOMContentLoaded much faster. It shouldn't be taking 10 seconds to download your page, and that's one thing we should definitely work on.
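The two JavaScript techniques described in these answers — picking an appropriate source from a candidate list based on displayed size, and swapping a low-quality placeholder for the full image — can be sketched roughly as follows. This is a hedged sketch, not the actual respswap.js code; the `{width, url}` candidate shape and the injected `createImage` factory are assumptions made for illustration:

```javascript
// Sketch 1: pick the smallest candidate wide enough for the displayed
// size at the device's pixel ratio, falling back to the largest one.
function pickSource(candidates, displayedWidth, dpr) {
  var needed = displayedWidth * (dpr || 1);
  var best = null;
  var largest = null;
  candidates.forEach(function (c) {
    if (!largest || c.width > largest.width) largest = c;
    if (c.width >= needed && (!best || c.width < best.width)) best = c;
  });
  var chosen = best || largest;
  return chosen && chosen.url;
}

// Sketch 2: load the chosen full-quality source in the background and
// swap it in once it has arrived, so the low-quality placeholder in
// src stays visible in the meantime. createImage is injected so the
// logic is testable; in a browser, pass function () { return new Image(); }.
function upgradeImage(img, fullUrl, createImage) {
  if (!fullUrl) return;
  var loader = createImage();
  loader.onload = function () { img.src = fullUrl; };
  loader.src = fullUrl; // kicks off the background download
}
```

In a page, you would run `pickSource` for each image after layout (and again on resize), reading the candidate list from a data attribute as Peter described, then hand the result to `upgradeImage`.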
I think we'll hopefully cover that in one of the other sessions, because that's part of the performance topic. There are ways to work around it where, as an author, you basically say: forget DOMContentLoaded, my page is ready now — and you send a kind of synthetic DOMContentLoaded event that indicates to the browser: now I'm ready to do other stuff. So please join me in thanking the panel.