This talk is based on a series of articles I wrote for A List Apart under the same name. It's a collection of ideas and techniques for getting JavaScript performance under control, but it's wrapped in a philosophy, and the philosophy is: let's try to use JavaScript a little more responsibly, a little more thoughtfully. So if you like this talk, you might like these articles. There are two parts so far, and I want to get two more done this year. If you want to find them, links will be in the slide deck, which I'll post to Notist — an awesome speaker tool — when I'm done, and I'll share it on Twitter.

Okay, so to start this off, I want to talk about a word I stumbled on years ago: sphexishness. It's an unusual word that has some relevance to our work. To be sphexish means to exhibit deterministic, pre-programmed behaviors — things that aren't willful, just automatic things that get done. The root word, Sphex, is the name of a genus of solitary wasps. And I promise the organizers didn't stick an entomology talk in the middle of a JavaScript conference — we'll get to the JavaScript, don't worry, it's a whole thing. These wasps don't just act in a pre-programmed fashion; they're also really easily manipulated, and that's part of what makes them sphexish. Here's how: when these wasps provision their larvae — usually with crickets — they bring the prey back to the nest and begin a routine. Before dragging the cricket into the nest, the wasp leaves it outside and goes back in to inspect the nest. That behavior seems thoughtful, but it really isn't, because if an observer moves the cricket while the wasp is inside, the wasp will reemerge, set the cricket back to where it was, and go back in to inspect the nest again. You can literally do this endlessly, over and over and over, and the wasp will never catch on.

Now, I didn't come to Omaha to be a big jerk and imply that you're all mindless. Web developers are thoughtful people. But there are some decisions involved in our work — and I'm guilty of this too — that we make without question. For example, when we begin a new project, we open a terminal window and install a familiar framework, then possibly a client-side router for the framework, then possibly a state management library for the framework, of course. And all the while we're unaware of, or have even made peace with, the overhead these conveniences bring. They're conveniences that help us do our work, but they have a direct and felt impact. This matters because the amount of JavaScript we serve has steadily increased over the years, to the point where it has become, in my opinion, probably the top performance concern on most websites — certainly something my clients run into. Half the sites you visit will send 375 KB or less of JavaScript, the 75th percentile will send at least 650 KB, and the 90th percentile — meaning 10% of websites out there — will send at least one megabyte of JavaScript. These graphs are generated from the HTTP Archive, which among other things tracks the transfer size of JavaScript, which is often compressed.
While compression is essential to loading performance, it doesn't change the fact that when a megabyte of compressed JavaScript is sent over the wire, the browser decompresses it to probably three times as much, depending on your compression ratio. That's a significantly large bundle that browsers must parse, compile and execute, and that takes time. If you're using a high-end device on a fast network, you probably won't feel how slow this can be, but on less capable hardware — such as this affordable but much, much slower Moto G4 Android phone — chewing through tons of JavaScript is a real slog. And that's worth paying attention to, because when devices or networks or both are slow, using the web becomes more difficult. At the bottom of this WebPageTest timeline is a main thread activity indicator. When it's red, the browser just can't do anything else; when it's green, it can take on other tasks. You can see here that for two, four, sometimes six seconds at a time, the main thread is blocked with all this scripting activity, indicated in yellow. There's some render work in there, but it's mostly scripts. Pair this with a slow network on a slow device and you can imagine how tiresome the web can really be for a lot of people.

So understanding constraints is key to writing good software. The best video games ever made were a megabyte or under, usually — sometimes far less. Some games fit within a megabit, 128 KB. Game developers of the time not only had vision, they also understood the constraints on their work. But their constraints were fixed to the hardware they built for: if you made a Super Nintendo game, your constraints were static — your audience was anyone who had a Super Nintendo, and that was it. Our constraints are far from fixed. They're very different from person to person, and in some ways that makes our job much more difficult than theirs. But that doesn't mean we can't make great experiences on the web that work for everyone, everywhere — or even make them without JavaScript. We can use JavaScript; we just have to think a little more about what we can do to get a little faster. So let's talk about how we can turn that sphexishness into anti-sphexishness, for the good of the web and for all who use it.

There's this cool phrase I came across recently: "paint the picture, not the frame." It comes from an article by Eric Bailey on A List Apart about accessibility in UX, and it's a clever way of saying that we really shouldn't reinvent things the browser already does well — buttons and forms and all this other stuff we rely on, which we often paper over with JavaScript. Eric, in his article, advises us not to subvert a person's expectations by changing externally consistent behaviors. Examples of external consistency might be the default behaviors of HTML elements, or even the appearance of a scroll bar, because when we disrupt external consistency, we may impede people in ways we — or they — didn't expect. One way we do this is when we fail to use semantic HTML and instead rely on JavaScript to re-implement or approximate those behaviors, and that can result in websites which are harder to use for those who rely on assistive technology. Let's take this example React component, which is a newsletter subscription form.
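(The component from the slides isn't reproduced in this transcript, so here's a rough before-and-after sketch of the kind of component being described. The names — `SubscribeFormBefore`, `SubscribeFormAfter`, the `/subscribe` endpoint — are illustrative, not from the talk.)

```jsx
import React from 'react';

// "Before": everything is a div, so nothing carries semantic meaning for
// assistive technology, and we hand-roll validation the browser could do.
function SubscribeFormBefore({ onSubscribe }) {
  const inputRef = React.useRef(null);

  function handleClick() {
    const email = inputRef.current.value;
    if (/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)) {
      onSubscribe(email);
    }
  }

  return (
    <div className="subscribe">
      <div className="subscribe__label">Enter your email address</div>
      <input type="text" ref={inputRef} />
      <div className="subscribe__button" onClick={handleClick}>Subscribe</div>
    </div>
  );
}

// "After": a real form, label and button. Validation moves to the browser,
// and the handler moves to the form's onSubmit so it can be enhanced later.
function SubscribeFormAfter({ onSubscribe }) {
  function handleSubmit(event) {
    event.preventDefault();
    onSubscribe(new FormData(event.target).get('email'));
  }

  return (
    <form method="post" action="/subscribe" onSubmit={handleSubmit}>
      <label htmlFor="email">Enter your email address</label>
      <input type="email" id="email" name="email" required />
      <button type="submit">Subscribe</button>
    </form>
  );
}
```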
This component has an input field, a corresponding label and a submit button, and it's all in a single div. You may have some opinions on what's wrong here, but the solution doesn't require more JavaScript — it actually requires less. So let's dive into this JSX real quick. There are three things wrong here. One: a form isn't a form unless it's in a form tag. Divs aren't intrinsically flawed — they lack semantic meaning by design — but this is a form, and a form should always use a form tag, because that has meaning for people using assistive technology. Two: when we label inputs, a label element should be used with a for attribute that corresponds to an id on the input. That lets assistive technology know that a given input has an associated label. And that's not just for people who are less able — it actually makes forms easier to interact with for everybody, because you can click on the label and it will focus the field. And three: while divs can be coded to behave and look like buttons, doing so robs the button of any semantic meaning it would otherwise have if it were just a button element. Think of that, huh? Plus, a button element's default behavior within a form is to submit that form, which makes it more resilient for when — not if — JavaScript fails, because your JavaScript is going to fail to run somewhere.

So here's the refactored markup with all of that advice applied; every part of it now has semantic meaning that assistive technologies can use, and assuming the component is server rendered, it will also continue to work if the scripts fail to run. Note that the submit handler has moved from the button's onClick event to the form's onSubmit event. This is helpful when we want to intercept the form's submit event and enhance the form's behavior with client-side scripts later on. Now here's the final component code. Additionally, because the email validation is now handled through HTML — we're using an email input type with a required attribute — we can remove the email validation script entirely and rely on the browser to validate those things for us. Of course, you should always sanitize your inputs on the server, but this is an opportunity to send less client-side code. And any opportunity to remove some client-side script and get things a little lighter is one you should jump on — you know, if you don't need it, throw it away.

Now, external consistency isn't limited to HTML and CSS and JavaScript. We expect browsers themselves to behave in a predictable way, and one of the most common subversions of this predictability is the SPA, or single page application. Don't throw things at me yet — I don't hate SPAs — but the navigation behavior they replace is one that browsers already do very well, even if it's synchronous. When we embrace client-side routing, we take on a whole host of new responsibilities that we didn't have to think about before, because the browser managed them for us: history must be managed, you have to account for scroll position and tab index, navigation cancelation can fail — there's just a whole bunch of specified behavior that you now have to think about. And even if we get client-side routing perfect, performance suffers if that content isn't server rendered. Furthermore, when we fail to send contentful markup from the server, the page's contents will be inaccessible if JavaScript fails somehow.
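(To make that list of responsibilities concrete, here's a deliberately minimal sketch of the bookkeeping a client-side router takes on — fetching and swapping content, managing history, and restoring scroll and focus. It assumes the server can return markup for the requested URL and that a focusable `<main tabindex="-1">` exists; it's an illustration of the burden, not a recommendation or anyone's actual router.)

```js
// Fetch the new page's markup and swap it in, then do by hand what the
// browser would otherwise have done for us: reset scroll and move focus.
async function render(url) {
  const response = await fetch(url);
  const html = await response.text();
  const main = document.querySelector('main');
  main.innerHTML = html;           // assumes the server returns a partial
  window.scrollTo(0, 0);           // scroll position is now our problem
  main.focus();                    // so is focus, for keyboard and AT users
}

// Intercept same-origin link clicks instead of letting the browser navigate.
document.addEventListener('click', (event) => {
  const link = event.target.closest('a');
  if (!link || link.origin !== location.origin) return;
  event.preventDefault();
  history.pushState(null, '', link.href); // history must be managed by hand
  render(link.href);
});

// The back and forward buttons have to be wired up too.
window.addEventListener('popstate', () => render(location.href));
```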
So when we rely on standard synchronous navigation behavior, we do lose a degree of snappiness, but we retain that external consistency people have come to expect. That's not to say that client-side routers are always bad, dirty, filthy things, but using them requires extra care on your part. For example, you'll need to provide server-side equivalents to all of your client-side routes so that people have a way to reliably access that content from any context — from within your site, from Google, from a link somewhere else. And if components are attached to server-side markup through client-side hydration, people get a progressively enhanced experience, and that's where the really special shit is. Sorry.

If you want to avoid SPAs but still make navigation snappier, and you want a platform-provided solution, link prefetching may fit the bill. It can seriously boost loading performance by fetching page HTML in advance of the user requesting it. It's not perfect — you could potentially waste data if it's not done carefully — but to address those shortcomings, the Google Chrome team offers a very small link prefetching script. Teeny tiny. It only prefetches links as they appear in the viewport, using an Intersection Observer, when the main thread is idle and the network isn't slow. I know I keep prattling on about all the free stuff the browser gives us, but the point remains: the browser gives us a lot for free, so let's use that stuff whenever possible and save our energy for the harder problems of web development.

Another tenet of my responsible JavaScript philosophy is what I consider a fundamental truth we need to acknowledge: the tools are not infallible. A hammer can help you build something, or it can break all of your fingers. You have to understand how the tools work, and knowing that is part of creating fast and accessible websites. You can't just assume the tools do the hard work for you, or do everything optimally for you. One tool many of us reach for when we need the JavaScript we write to work everywhere is Babel. Babel is valuable, but we tend not to see how it can harm performance. We would all benefit if we could transpile less, because the way Babel transforms our code can add a lot to our production payloads, and it helps to know how Babel transforms the code we write so we can compensate for its inefficiencies. Here's a very simple example: a console logging wrapper function which accepts message and level parameters. The second parameter is the log level, which has a default of "log" — that's the console method: log, warn, error. Default parameters are nice, but Babel transforms them very inefficiently, and worse yet, it repeats that inefficient transform every time a default parameter is used. A convenience meant for the developer gets passed on to the user as a cost. So if we can't avoid Babel altogether, we should try to compensate for this stuff. In this particular case we can avoid the transform by replacing the default parameter with an OR check: when we want to assign a default to an optional parameter, we can perform a check where the left side of the OR is the parameter itself — the value provided by the caller — and the right side is the default.
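(The exact function from the slides isn't in this transcript, but a minimal sketch of the pattern being described would be something like this.)

```js
// A console logging wrapper. Instead of a default parameter —
//   function log(message, level = 'log') { ... }
// which Babel expands into a verbose helper, assign the default with an OR check.
function log(message, level) {
  // If `level` is omitted (or otherwise falsy), fall back to 'log'.
  level = level || 'log';
  console[level](message);
}

log('Saved your changes.');            // -> console.log
log('That endpoint is gone.', 'warn'); // -> console.warn
```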
If the level parameter is omitted, the right side of the OR condition is used, and Babel won't touch this — it won't transform it. But default parameters aren't the only thing Babel transforms; they're just one such feature. Let's take ES6 classes as another example. I like them — they're nice, they resemble classes we've seen in other languages — but the way Babel transforms them, just to get them to work everywhere, is really expensive. And if you can't read this slide, that's fine; that's kind of the point — it's a lot of extra code. You can mitigate this cost in one of a few ways. One, you could use the prototype pattern and avoid ES6 classes altogether for the platforms that need this transform. Two, you could use @babel/plugin-transform-runtime to deduplicate the helpers Babel injects, reducing the impact across an entire project. Or three, if you only need to support modern browsers, you could drop Babel altogether — if you can do that, it's your best bet. But if you're using something like JSX, you're probably never going to be able to drop it.

How we write JavaScript isn't the only thing to consider when we're using Babel, though; we also need to configure Babel itself. Here's an example webpack bundle analysis for an example app using a Babel configuration that isn't finely tuned, and it sits at about 117 kilobytes. You'll notice most of it is made up of polyfills. Polyfilling is something we use Babel for a lot — it's useful, it helps us fill in the gaps for older browsers. If you're familiar with @babel/preset-env, this configuration code may look a little familiar, but it's worth taking a second look at that useBuiltIns option, which uses core-js to polyfill features. When useBuiltIns is set to "entry", core-js itself must be added as an entry point to the project, which adds more polyfills than we might need. But if we change the value of useBuiltIns from "entry" to "usage", we can remove core-js as an entry point in the app, and Babel will only polyfill the features that are actually used in our code and appropriate for the specified platforms. This can seriously reduce how many polyfills get shipped.

And while we're here, there's another option I think deserves attention, which toggles something called loose mode. When Babel transforms your code loosely, its output adheres less strictly to the ECMAScript standard. Loose transforms are a bit smaller — sometimes quite a bit; I've seen projects reduced by 10% simply by enabling this — and they can be enabled by setting the loose option to true in @babel/preset-env. It's not bulletproof: you could have issues if you move from transpiled ES6 to untranspiled ES6 later on, if you remove Babel. But if the savings are worth it, you can always address any problems in a sprint if they come up, so you can retain your gains. After making these two quick configuration changes, we reduce the size of this bundle by 52%, and that's a big deal. With half as much code, this app will be faster, especially on devices with limited processing power and less memory.
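(For reference, the two changes just described would look roughly like this in a @babel/preset-env config. This is a sketch, not the exact configuration from the slides, and the core-js version is an assumption.)

```js
// babel.config.js — a sketch of the tuned configuration.
module.exports = {
  presets: [
    ["@babel/preset-env", {
      // "usage" injects only the core-js polyfills our code actually needs,
      // so core-js no longer has to be a webpack entry point.
      useBuiltIns: "usage",
      corejs: 3,
      // Loose mode emits smaller output that adheres less strictly to the spec.
      loose: true
    }]
  ]
};
```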
In addition to all these little configuration tips and hacks, a novel way of serving less JavaScript — a hobby horse of mine, and it's been a thing for about the last year — is called differential serving. It involves serving one of two bundles based on the capability of the user's browser: legacy browsers get bundles like we generate them today, with more transforms and polyfills, while modern browsers get smaller bundles with little to none of those things. The outcome is that the app functions identically in either case, but for modern browsers it involves substantially less code.

Of course, we need a way to load these bundles properly. What you see here first is how we've always loaded JavaScript — pretty straightforward. The pattern shown next is how we can differentially serve scripts: the first script tag loads a bundle for modern browsers by adding a type="module" attribute, which ensures that script gets picked up by those browsers. The second script element loads a bundle for legacy browsers; its nomodule attribute ensures that modern browsers decline to download the legacy script — they just go, "no, I'm not gonna touch that" — while legacy browsers don't understand nomodule, so they download that script anyway.

Configuring your toolchain to do this is involved, but it's doable. You essentially have to fork your configurations from the same entry point, and that starts with creating two different Babel configurations. The first configuration is typical of what you'd see in a lot of projects: it transforms code to be compatible everywhere, including old browsers. The second is a configuration for generating bundles for modern browsers. You'll notice that useBuiltIns is gone, and that's because in this particular example app we didn't need any of the core-js polyfills — we're only using stuff the modern platform provides. Depending on the language features or other features you use, you may need to retain useBuiltIns. Instead of a Browserslist query, we've supplied an option named esmodules set to true, which under the hood translates to a Browserslist query for browsers that support ES6 modules. And what's great about that is that if a browser supports ES6 modules, we can infer a lot of other things it supports: async/await, let and const, arrow functions, and so forth. We can group these configs together under an env object in our Babel config — "client-legacy" is the config for legacy browsers, while "client-modern" is the config for modern ones — and then in our bundler config we can point to these separate configurations. In webpack, this is a typical example of how babel-loader ensures that scripts get processed by Babel; note the envName option, which points to the Babel configs we specified on the previous slide. By creating a separate webpack config and pointing to the client-modern Babel config, you can generate a smaller bundle of your code for modern browsers with identical functionality. And that's important, because you don't sacrifice any features for this — it's all gains from here. The size reduction between these bundles really depends: sometimes you might only get five to ten percent, but some projects could see more. I've seen components for clients reduced by as much as 40% in some cases.
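(To make the pattern concrete, here's a sketch of the loading markup and the forked configurations being described. File names, entry points and the core-js version are illustrative; they're not the slides' actual code.)

```html
<!-- Modern browsers pick up the module bundle and skip the nomodule one;
     legacy browsers ignore type="module" and download the nomodule bundle. -->
<script type="module" src="/js/app.modern.js"></script>
<script nomodule src="/js/app.legacy.js" defer></script>
```

```js
// babel.config.js — two configurations forked under `env`.
module.exports = {
  env: {
    "client-legacy": {
      presets: [
        ["@babel/preset-env", { useBuiltIns: "usage", corejs: 3 }]
      ]
    },
    "client-modern": {
      presets: [
        // Targets browsers that support ES modules, so far fewer transforms.
        ["@babel/preset-env", { targets: { esmodules: true } }]
      ]
    }
  }
};
```

```js
// webpack.modern.config.js — a sketch; pair it with a near-identical
// legacy config that sets envName: "client-legacy" instead.
module.exports = {
  entry: "./src/index.js",
  output: { filename: "app.modern.js" },
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: {
          loader: "babel-loader",
          options: { envName: "client-modern" }
        }
      }
    ]
  }
};
```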
So this is a bundle analysis of the same example app's legacy bundle, which is pretty small — around 68 KB. But with differential serving, we can go from small to nano and deliver this app to modern browsers at 40% of the size of its legacy counterpart without sacrificing anything; it works identically. But beware: some browsers may have issues with this platform-provided pattern — remember, type="module" and nomodule. There's a lot to it and I can't cover it in this talk, but I have an article on my website at jeremy.codes that explains it and gives a workaround I've been able to use in production for a fairly prominent electronics retailer. So it's been battle tested, it's working, and it doesn't have any of the issues described in the article.

Finally, this leads us into a discussion about what it means to be accommodating. This is my favorite part of the talk, because it gets away from the really techie parts of JavaScript, which can be fatiguing, and talks more about adapting to people, which is really a hobby horse of mine. I feel that when we deploy something to the web, we have to be a steward of that thing, right? We have to be the one who says, "this is how this should be maintained," and we should keep our users top of mind.

This is relevant because in the US many people live in large cities that are typically well served by fast broadband and mobile internet connections. Yet this article by the MIT Technology Review revealed that 58% of households in the Cleveland metropolitan area with yearly incomes under $20,000 had no broadband internet access. These are people who rely on mobile internet connections to get online, and those usually come with data caps — some come with overages, not all of them are unlimited, it depends on your provider. More striking to me is this passage, in which Pew Research found that one third of Americans do not have an internet connection in their homes that's faster than dial-up. And I sincerely doubt, since this article was written in 2016, that the picture has gotten much better, because the economic and infrastructure challenges that people face — that we all face — have not been sufficiently addressed to broaden broadband access for everyone. So if you're serving a lot of assets, high latency or low bandwidth can make your site functionally inaccessible to a large group of people.

Thankfully, a technology called Client Hints can help us bridge this divide. It's supported in Chrome and Chromium-derived browsers — Edge is Chromium now — and basically anything Blink-derived, which is a huge chunk of the web. Client hints help developers understand the characteristics, or at least the approximate characteristics, of a person's device and the network it's connected to. There are a lot of client hints — probably at least ten of them — but I don't have time for all that. I did a talk about them last year and wrote a big article, which I'll link here. But here are the three I feel are the most useful. The first is RTT, or round trip time, which is the approximate latency of the user's connection in milliseconds. Downlink is the approximate downstream bandwidth, in megabits per second.
The next is ECT, or effective connection type, an enumerated string that categorizes the user's connection based on both the RTT and Downlink hints — it coarsely buckets the connection into categories like 4G, 3G, 2G, that sort of thing. These hints really help us understand people and what their situation is like, and that lets us tailor experiences so we send less stuff to those on slow connections. We opt into these hints with the Accept-CH response header — you could set that in a meta http-equiv tag, or on the server with whatever function your language gives you for setting response headers. We can also tell the client how long those hints should persist with the Accept-CH-Lifetime header. It's sort of like a caching directive for the hints we've opted into — it doesn't cache the value of those hints, just the set of hints we want. So in the example above, the RTT, Downlink and ECT hints will persist on the client for one day.

Then you can access these hints as request headers via a server-side language. Here, for example, is PHP — don't throw things at me, that's the one I use. We initialize a variable with a default effective connection type of "4g", because not all browsers support client hints, so we want to assume the ideal experience — a fast connection. But if client hints are available, we check for that ECT header, and if it's been set and we can read it, we overwrite the variable with its value, so we get an idea of what the user's connection is like. With that information, we can create lighter experiences for those who need them most. For example, we can decide that a person will only see a carousel if they're on a fast connection; otherwise we compensate by sending only what they really need — the critical core content. I call this adaptive performance, and it's a way to create experiences that I feel are not just more adaptable but more inclusive — it broadens access a little by being aware of the shifting network conditions people run into. And it's not just network conditions; it could be device conditions. There's a hint, for example, for how much memory a device has, which we could use to decide that on a low-powered device we should cut some script, because script is memory intensive.

And it works. Here are two versions of the same site on a slow connection: the left is the version without client hints, and the right is with client hints and this philosophy applied. The left has a lot of web fonts, a carousel, accordions, and the JavaScript to run it all, which is functionally inaccessible on 2G, taking 90 seconds to load roughly 740 KB. But with client hints, we can boil this experience down to its core when networks or devices are slow, and for our trouble, affected users will have something they can access more quickly than the ideal experience we had in mind. It's a compromise: there's a chance you'll still get the ideal experience, but if your situation can't accommodate that, we'll boil it down for you. If you want to learn more about client hints, you can check out this guide I wrote for Google Web Fundamentals called "Adapting to Users with Client Hints."
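(Putting the opt-in and the server-side check together: the talk's actual example is in PHP, which isn't reproduced in this transcript, so here's the same idea sketched as a Node/Express handler. Express and the two render helpers are assumptions for illustration, not part of the talk.)

```js
const express = require("express");
const app = express();

app.use((req, res, next) => {
  // Opt in to the hints we want, and ask the client to keep sending them
  // for one day (86,400 seconds).
  res.set("Accept-CH", "RTT, Downlink, ECT");
  res.set("Accept-CH-Lifetime", "86400");
  next();
});

app.get("/", (req, res) => {
  // Assume the ideal experience; not every browser sends client hints.
  const ect = req.get("ECT") || "4g";
  const isSlow = ect === "2g" || ect === "slow-2g";

  // On slow connections, skip the carousel and its script entirely.
  // renderCorePage/renderFullPage are hypothetical templating helpers.
  res.send(isSlow ? renderCorePage() : renderFullPage());
});

app.listen(3000);
```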
I also did a talk version of this last year at Fullstack Fest that's basically the play-by-play of that article, so you can check it out if it's something you're into. The links will be embedded in the slides when I post them.

I'd like to close on what I think is a very important point, which is that we first need to figure out what people want from what we build for the web. By that I mean: what purpose are we trying to serve? What task are we trying to enable people to accomplish? Then we need to work backward from there and build something which serves that purpose with precision and care. That's really important. Don't get in people's way when they need to do something simple. Maybe they need to wire money to a friend. Maybe they need to apply for public assistance. Or maybe it's somebody who has been thrown out of their home by an abusive partner and needs to turn to the web to find shelter. Those are the moments when we need to get out of our own way, stop prioritizing our developer experience, and help people — because that's partly what the web is for. To me, the web is for more than just buying shit; it's for helping people too.

Regardless of profession, craftspeople love their tools, and as developers we're really no different — we take pride in building great things with the tools we have. But unlike, say, the mechanic who fixes your car, the tools we use can have a direct and felt impact on the people we build for. When a mechanic fixes your car, he doesn't throw all of his tools in your trunk and make you carry that weight around. But when we do our work, the tools we use often end up as baggage on the user experience — something users have to carry around. We don't need to burden people with the entire toolbox, or even the entire tool shed in some cases. Sometimes it makes more sense to use smaller tools which are more focused on the actual work: think alternatives like Preact instead of React, which is a tenth of the size but does pretty much the same thing with the same API, or alternatives to Moment.js — really, anything popular enough probably has a lighter alternative. Your experience as a developer is important, I'll acknowledge that, but it is never more important than the user's experience. And if your excitement for a certain set of tools causes you to build things that no longer efficiently serve the purpose you set out to achieve, it's time to reevaluate them. It's my hope that eventually we can all find our own ways of serving our collective purpose with utilitarian precision, for the benefit of all who use the web — even if, and this might be a little unpopular, getting there sometimes means we don't need JavaScript at all. Thank you.