That was worth the hour it took to render. So in many ways, Polymer has been a sort of Tesla vehicle for the Chrome team, highlighting one path for how you can ship fast, high-performance progressive web apps that work really, really well on mobile. But we work in a really diverse community, right? Everyone is using different tech stacks. And today, we wanted to talk a little bit about how you can use other libraries and frameworks, like React, to build fast progressive web apps, and look at what you need to do in order to make things like React qualify for building instant experiences on real devices. Flipkart are going to get up right after me to talk a little bit about their experience shipping React PWAs at scale and all the lessons that they learned. And we have a little surprise for you at the tail end of this talk that you'll see in a short while. So let's start off with a statement: frameworks can be fast if we put the work in. I firmly believe this. I think that we're at a point where fast is not the default for a lot of libraries and frameworks. I think that a lot of framework authors acknowledge that we can do better when it comes to performance on real-world devices. But let's take a look at what's possible today. So this is Flipkart on a real device. They're doing really, really well. They're interactive in just a few seconds. They're shipping just the minimal functional code to get a route interactive very, very quickly. They're deferring a lot of the work that's not needed for this route to a future point in time. And they're taking advantage of techniques like code splitting and PRPL in order to accomplish this. Housing.com are similarly doing really great work in this area. Again, they're interactive in just a few seconds. But we talk a lot about speed and what it means to be fast at CDS. What do we actually mean by fast? So there are a few key moments when it comes to modern loading performance.
And some of these metrics are things you might be familiar with, like the idea of first paint and first meaningful paint. But really, there are three phases here: the is-it-happening moment, is it useful, and is it usable? Now, we're increasingly trying to focus on the is-it-usable phase, so time to interactive: at what point during loading is your app actually engageable by the user? If they tap on different things inside the app, can they actually accomplish things that are useful to them? Time to interactive is really that point where I can tap and I can get something useful. Now, we're saying that ideally, regardless of what it is that you're using to build these apps, it'd be great to be interactive in under five seconds on a real device under real, representative network conditions, so 3G. If you happen to be using service worker caching, you're going to benefit from ideally hitting an instant repeat load, and your time to interactive is going to be even better in those cases. So service worker caching is definitely worth looking at. In this case, there's actually nothing on this person's phone screen, and I think they're going through withdrawal of some sort here. So Lighthouse has been mentioned; Darin mentioned it in his keynote. Lighthouse is currently one of the best ways to easily track things like time to interactive. It includes a number of different performance metrics. This is the Lighthouse extension; it's also available as a CLI. Time to interactive is included inside the performance audits. And if you want to take a look at how well you're doing, what I recommend is trying out Lighthouse over remote debugging, testing it with a real phone. It'll give you an eye-opening look at your performance on real-world devices. So that's definitely worth spending some time on. So recently, I was very curious about how the React community were shipping down code, how they were tackling things like module bundling.
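If you want to try the CLI route yourself, a quick sketch, assuming Node and npm are installed (the URL here is just a placeholder):

```shell
# Install the Lighthouse CLI globally.
npm install -g lighthouse

# Audit a page and open the generated HTML report when it finishes.
lighthouse https://example.com --view
```

Running this over remote debugging against a real phone, rather than your desktop browser, is what gives you the representative numbers.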
So I put out this call on Twitter asking people: have you shipped React in production, and what were your experiences doing that? And I've published a little bit of the data on that, but let's dive into it. So, what JavaScript module bundler do you use? The majority of people are using Webpack. This breaks down into about 65% of people using Webpack 1, and a smaller number using Webpack 2, but those numbers are increasing. The rest of these numbers are Browserify and other bits and pieces. So Webpack is kind of a big deal; let's take that from this particular slide. I then asked people if they were using code splitting to chunk up their JavaScript, and I got a very surprising answer. I saw that 58% of people thought that they were. Now, this surprised me, because when I talked to the Webpack community, when I talked to the Webpack authors, they were like, we don't think that any more than 10% of people are really using code splitting. And there's something interesting there. Maybe there's a breakdown in terminology. Maybe people are using code splitting, but not necessarily doing it the right way. And I don't really blame them, because configuring Webpack is so fun. It's the best. But I think that we have an opportunity to improve that. Other concepts that people were looking at: 11% of respondents said that they were exploring service worker support, so that's good; love to see more people doing that. 14% were looking at HTTP/2 and what would be involved in more granularly shipping stuff down. And 19% were looking at tree shaking. So, interesting stats. Now, we've mentioned the Polymer Shop demo quite a lot, and the reason for that is it's using PRPL. It does really, really well on real-world devices under 3G. On throttled 3G, this app is interactive in about 4.3 seconds. If you're looking at it with a really, really bad 3G network or something with more packet loss, it's still doing pretty well: 5.8 seconds.
If we take a look at Flipkart and Housing.com next, between these two apps, I did the averages, and they're getting interactive in about 4.5 seconds. That's still fairly fast, fairly good. About 6.9 with packet loss, but they're still doing pretty well. So these guys are using basically all the tooling, all the performance best practices that we're encouraging folks to take a look at, to ship these experiences down in ways that are going to ideally benefit their users at the end of the day. So here's the crux of the study. I ended up profiling over 150 React apps that people submitted over the last couple of weeks, and re-ran the numbers quite a lot of times. It was fun, so fun, on real devices. And what I discovered was that the average React app in that survey was interactive in about 11 seconds. So there's quite a gap there between what's possible and where the average app is right now. With packet loss, we're looking at 12 seconds. Probably the worst app in that particular study was interactive in 24 seconds. So the user is going to be in uncanny valley, just tapping around the screen and not really seeing anything happen. So this is a timeline trace of what the average React app built with Webpack looks like. In this case, I saw hundreds of kilobytes of script being shipped down just for a single route. A lot of it wasn't being used. They are using code splitting, but it's taking eight seconds before all of the script and their common chunks are shipped down. Thousands of milliseconds are being spent in parse and eval time. And for anyone that's followed Paul Lewis and Paul Irish's guidance over the last couple of years about trying to ship a frame in 16.6 milliseconds, well, these guys have got a frame that lasts 7,973 milliseconds. It's doing really great there. Great. We can do better. So the first piece of advice is: try not to keep the main thread busy.
If you are someone that's shipping down really large bundles of JavaScript, it's gonna take longer to load, parse, execute, and run, and it's definitely gonna peg the main thread. Now, this advice comes with nuance, and nuance is something we often lack in these conversations; it's really tricky to pack it into a short amount of time. But basically, if you're working on a page that is not gonna be useful to your user in any way unless you ship that amount of script, you're probably better off shipping it. If you can, however, trim that down so that you're just shipping the minimal functional stuff that's gonna be useful to your users, please consider doing that. It's gonna help them out, because they're not gonna need you shipping all of the script for the entire site or the entire app in one go. Other things that can impact the main thread being busy and time to interactive are suboptimal back and forth between the client and the server. Sam Saccone touched a little bit on the idea of JavaScript parse, compile, and execution times being a little bit different between desktop devices and mobile. Here we have a meg of script, about 250 kilobytes minified, and the amount of time it takes to parse and compile it on what a lot of us use (I see a lot of MacBooks in the room): this is how long it takes to parse and execute that on a MacBook Pro from 2015. And take a look at the difference, how much our assumptions are broken when it comes to the average phone, something like a Moto G. This is taking about three seconds to parse, compile, and execute, and that's not even looking at load time. If you're trying to get interactive in under five seconds, this is just not gonna cut it. But all of this, again, has got nuance. You need to make sure that you're measuring before you're optimizing, and ideally trying to make sure you're doing the right thing for users. Test on real phones and real networks.
This is something that we've mentioned in a few talks at Chrome Dev Summit. I cannot stress enough how important it is to test on real devices. Emulation is only gonna get you so far. You can be testing with 3G throttling on, with CPU throttling on, on desktop, and the difference between that and the stats you will get out of a real phone is still gonna be multiple seconds. I think there are opportunities there for us to do better at a tooling level. But real devices have got different mixes of cores, GPU, and memory, and there are gonna be packet-level differences for different networks. So do try to make chrome://inspect your best friend and use it. So when Alex Russell carries around all these phones, he's not crazy. Mobile web speeds do kind of matter. In fact, on average, faster experiences tend to lead to longer sessions. And one of the reports, I think it was perhaps the DoubleClick report that was recently published, said that people that did optimize performance were seeing anywhere up to two times the mobile ad revenue. So test on real devices, make real money. Let's refine this other idea: less code, loaded better, helps everyone. This is another one of those items that requires nuance. But if you're able to load less code up for a route in order to get it useful, please do so. The nuance part comes again from the fact that you may require more script; me shipping down 300 kilobytes of script may be very different to someone else doing it. There are gonna be different parse and eval times at play there. So again, very important to measure. But let's refine this idea of less code, loaded better. We're gonna use Webpack. A lot of you may be familiar with what Webpack is; for anyone that hasn't used it before, it's basically a popular JavaScript module bundler. It packs lots of modules into smaller bundles so you can ship them down to your users.
And so we're gonna look at some of these ideas around the PRPL pattern and how you can serve these things down to your users. The first one is code splitting. So I've been talking about trying to ship the minimal code down to your users. Code splitting is one answer to this problem of serving people monolithic bundles. It's the idea that by defining split points in your code, from view to view, for example, or route to route, you can split it into different files that get lazy loaded on demand. That can improve your startup time and help you get interactive much, much quicker. Now, with Webpack, there are two ways of doing this. Actually, there are quite a few ways; there's not just two. With Webpack 1, you can use require.ensure in order to do that. Webpack is going to take a look at anywhere you're using require.ensure and create a chunk for you based on that. That's how you define a split point. In Webpack 2, they currently use System.import from the loader spec in order to accomplish the same thing. I do believe Webpack are also a little bit future-facing, looking at what else is happening in the loading space. But basically, these are two ways to do code splitting. There are great articles that cover this in more depth. There are other ways that you can do code splitting as well. The bundle loader is another option. If you don't like the pattern that you just saw on screen, you can actually use bundle-loader and prefix the things that you wanna require into your page, and it will automatically wrap those things in a require.ensure for you and take care of the rest. It's also possible to wait for that chunk asynchronously before you do anything with the code. And finally, if you happen to be using React Router, it's actually got really great support for working with require.ensure. So there's a declarative option, and it's also got a slightly more imperative one.
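As a minimal sketch of what a Webpack split point looks like (the './views/profile' path and function names here are hypothetical, just for illustration):

```javascript
// Webpack 1: require.ensure defines a split point. Everything
// required inside the callback is pulled out into a separate chunk
// that Webpack lazy loads on demand.
function loadProfileRoute(done) {
  require.ensure([], function (require) {
    var ProfileView = require('./views/profile');
    done(ProfileView);
  }, 'profile'); // optional chunk name
}

// Webpack 2 accomplishes the same thing with System.import, which
// returns a promise for the lazily loaded module:
// System.import('./views/profile').then(function (ProfileView) { /* ... */ });
```

Note that these snippets only make sense inside a Webpack build; the bundler rewrites them at compile time.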
But basically, when I'm defining routes here, I'm able to use an asynchronous getComponent. And inside there, I can say, well, please go and get me the user profile view, and then I can go and do stuff with it. So it doesn't necessarily need to be included in a big monolithic bundle upfront. The next thing is the PRPL pattern. So Sam talked about the PRPL pattern yesterday. It's basically a pattern for structuring and serving progressive web apps with a lot of emphasis on performant app delivery, maybe looking at the ideas of how you can more granularly do things at a route level, but it focuses very heavily on giving you a minimum time to interactive. So the idea here is: push the minimal functional code for a route, render that route, pre-cache the remaining routes, and lazy load routes on demand as needed. Again, lots of nuance here, but we do have a guide on that that you can go and check out. Now, with Webpack, it's possible to do something a lot like PRPL using require.ensure or System.import with an async getComponent in React Router. And there are a few different options here. So Sam talked a little bit about the differences between preload and H2 push, so let's unpack some of the ideas there. Link rel=preload, if you haven't used it before, is basically a declarative fetch directive. In human terms, it's a way to tell the browser to start fetching a certain resource because you, as an author, know that you're probably gonna need it. Some people have done really interesting experiments here where they've used things like their Google Analytics data to decide what routes should get preloaded based on the navigation paths of the user. And with Webpack, you can use things like assets-webpack-plugin in order to wire chunks that are generated at build time up to your markup. There's more you can read up on about link rel=preload. I believe Housing may have mentioned some of their experience with preload earlier today as well.
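A rough sketch of that asynchronous getComponent, using the React Router API of the time (v3-era); the path and component names are hypothetical:

```javascript
// The route's component lives in its own chunk and is only fetched
// when the user actually navigates to /profile.
<Route
  path="/profile"
  getComponent={(nextState, callback) => {
    require.ensure([], (require) => {
      callback(null, require('./views/UserProfile'));
    });
  }}
/>
```

As with the earlier snippet, this relies on Webpack rewriting require.ensure at build time.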
If you're exploring HTTP/2, there's a really violently named plugin called AggressiveSplittingPlugin. I'm not sure why it was called that. But this is another option for going a little bit more granular with the chunks that you're shipping down to users. Nuance again: different JavaScript engines might treat the way that you split things up differently. There are gonna be cases where, in fact, shipping a larger chunk will just mean that the engine is able to stream that JavaScript in and parse and compile it a little bit faster than you going and fetching yet another chunk. So know that this exists; try it out if you're interested in the idea of H2 with Webpack, but nuance once again. Now, another piece of interesting data that came back from my research is that code splitting itself does not solve everything. In fact, when I focused on just the apps where people self-identified as code splitting, what I found was that they were interactive in 9.8 seconds. So definitely not where we thought they would be, right? We'd expect them to be a little bit closer to those Flipkart and Housing.com numbers. What I discovered after profiling them in slightly more depth was that a lot of folks were shipping down chunks for a route that were 600, 700, 800 kilobytes of script. In some cases, 1.2 megs of script. And then they were lazy loading even more right after the fact, for some crazy reason. But this is something, you know, I don't entirely blame people for, because our current tooling doesn't do an amazing job of highlighting these issues. It doesn't really put performance in your face. So ask yourself: what's in your bundle? I think that it's very, very easy for us these days to npm install the entire world. It's very easy to include more modules than we necessarily need when we're shipping down code for routes. But I thought that maybe it would be interesting for us to see what we could do about this at a Webpack level.
So I put together an RFC for an idea I call Webpack performance budgets, or Webpack performance insights. And Sean Larkin, who's in the audience over there, has actually been helping me with this. And I thought that it would be interesting to give you a preview of what we think could be a better way of highlighting some of these performance issues earlier on in your development process. So here is what the output you'd normally get with Webpack looks like today. I've got a build here where I've got almost two megs of script in two of these bundles. And as a user, if I'm not really that familiar with web performance, I don't know that there's an issue here that I need to solve. It should be obvious, and these numbers are quite large on purpose, but it should be something that maybe Webpack could tell me is an issue. So we looked at implementing a proposal that I put together, and this is what it looks like. You go and run Webpack on your project, and it includes this output for you. Let's try to unbundle some of the ideas that are here. So the first thing it does is tell you if you have particularly large chunks in your bundle. You'll see at the very top, instead of just listing all of our different JavaScript output in green, it's highlighted in yellow the chunks that are particularly large and cross a specific performance budget that's defined by Webpack as a default. In this case, I've actually customized this a little bit and said that the maximum size for chunks is 100 kilobytes; if it notices you're crossing that, it's going to warn you and say, this is an issue. It can also highlight large entries, trying to define what budget you're crossing for an entire route or an entire view, because you might easily have multiple chunks that compose something, and you don't wanna be one of those people shipping down a meg of script if you don't need to. So large entry tracking is gonna help you with that.
And finally, at the moment in this proof of concept that we've got, we also highlight patterns. So if we see somewhere where we think you're gonna benefit from doing something like using code splitting, using require.ensure or System.import, we'll tell you about it. Now, again, this is a very early proof of concept. We've just been hacking on it over the last couple of days, but I think that we have an opportunity to work together with tooling vendors like Webpack to try solving some of these performance issues together in a meaningful way that will hopefully end up giving users better time to interactive scores. So something you might also be wondering is, can I configure this stuff? And yes, you absolutely can. Using the performance object, you can actually set the maximum asset size and the maximum initial chunk size, and turn on or off the idea of getting those hints. There's a preview available today that you can go and check out. At this point, with all of the UX you've seen, you might think that that's a really long report in your CLI, but we'd welcome people to try out the proof of concept we've got today and let us know if it helps. Let us know if you've got any feedback on the UX at all. I think that this is just the beginning. Size alone is just one aspect when it comes to script loading performance. There are also things like parse and eval times, execution times, and so on. There are interesting opportunities for us to use this as a baseline for building up more tooling that then benefits all Webpack users. I'd love to explore at some point in the future what things like code coverage could even mean for these experiences. So that's our first preview. Please go check it out and let us know what you think. Now, another thing I wanted to mention is there's gonna be a point where you're optimizing your progressive web app and you hit a point where you can't optimize the size of React down any further.
And something that I found is actually really great for just swapping in is Preact, which is a much smaller, almost three kilobyte alternative to React with the same ES6 API. I believe Jason Miller, who worked on Preact, is in the audience, so thank you, Jason. And a lot of the traces that I've done of Preact apps, again on a real device with a real network, show them interactive in under five seconds. I was taking a look with Source Map Explorer. It's a little bit like the bundle analyzer tools that Sam was showing in his talk; it gives you something very similar. So this is what my dependency graph looks like when I have React in place, on the very right. Lots of stuff going on. When I switched over to using Preact and preact-compat, this changed quite significantly, with almost the same API. I did run into one or two bugs, I will say that, and Jason kindly fixed them very, very quickly. But this is definitely something to consider: if you're running into places where you've tried optimizing your app down and you're still finding a bottleneck, Preact is definitely worth checking out, especially if you care about your time to interactive being small. Setting this up with Webpack is actually quite trivial. You can use resolve aliases to map React to preact-compat, and React DOM to preact-compat too. Definitely worth checking out. Now, in previous years, Jake has talked a lot about offline and the benefits that you get from instant loading using service worker. And I'd like us to consider layering our apps so the network is an enhancement a little bit more. When you do this, you're able to actually give your users those almost instant experiences on repeat visits, and you just crush your time to interactive. In this case, this is Housing.com. On first visit on a 3G network on a real phone, they're getting content on the screen in 3.5 seconds.
On repeat visit, it's almost instant: it's in under a second. And the amount of script and everything that they were trying to load up initially is no longer an issue; that's already cached using the service worker Cache API, and they're able to get interactive really quickly. So definitely something worth taking a look at. A lot of the time when we talk about progressive web apps, we talk about the application shell model, which is this idea of caching your shell and loading in content using JavaScript. There are many different variations of this pattern; this isn't the only one. But if you're trying to get service worker caching in place, I highly recommend the sw-precache-webpack-plugin. This will integrate with your Webpack build process. It'll generate a service worker that precaches your static assets, like your application shell, and it generates a hash of all your file contents as well. There are a lot of best practices in there for you out of the box. It's worth checking out if you've tried writing vanilla service workers, found that there's a little bit of boilerplate there, and you'd like a tool that just helps with the rest of your workflow. Jeff is gonna talk a little bit more about sw-precache and sw-toolbox in his talk. Now, another thing that Lighthouse tries to highlight is progressive enhancement. And I think that this is one of those super contentious topics. Luckily, I'm on stage, so I can't look at Twitter in any shape or form to see people's opinions on PE. But I do like this idea of supporting all of your target users using progressive enhancement and trying to target all the people that are in your market so that your app at least works for them. I think that progressive enhancement has evolved over the last few years as we've gotten support for better primitives like service worker, so that instead of necessarily optimizing for people that have JavaScript disabled, you're optimizing for network resilience.
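A sketch of wiring that plugin into a Webpack build (assuming the sw-precache-webpack-plugin package is installed; the cacheId and filename here are just illustrative values):

```javascript
// webpack.config.js (fragment): generate a precaching service worker
// as part of the build. Requires:
//   npm install --save-dev sw-precache-webpack-plugin
const SWPrecacheWebpackPlugin = require('sw-precache-webpack-plugin');

const config = {
  plugins: [
    new SWPrecacheWebpackPlugin({
      cacheId: 'my-app',                        // hypothetical cache name
      filename: 'service-worker.js',            // generated service worker file
      staticFileGlobsIgnorePatterns: [/\.map$/] // don't precache source maps
    })
  ]
};

module.exports = config;
```

The generated worker hashes your file contents, so only assets that actually changed get re-fetched on the next deploy.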
So if you're using patterns like PRPL (and again, PRPL isn't the solution to everything), you can end up shipping so little code to users to get them useful that maybe things like server-side rendering aren't as beneficial or as necessary in those places as you might need them to be. However, as Flipkart are gonna talk about a little bit later, there are still benefits to things like server-side rendering for SEO bots, and there are places where you might need to get content on the screen quicker. For those cases, React supports this idea of server-side rendering, or universal JavaScript rendering, and it's also got a really good story around things like universal data flow and data fetching. So React provides this method called renderToString for rendering markup on the server as part of its story around universal JavaScript. And it's this idea that you ship down your HTML, and you then hydrate as soon as React and all of the rest of your components have loaded up, attaching event listeners and so on, so that the person can actually interact with the app. So React has got a good story around this. This stuff is actually not too difficult to get set up, as demonstrated by folks like Celio, who are using server-side rendering with React. However, universal JavaScript has got issues. I don't think that this is something that's talked about enough in the community; I think it's something that we can definitely share more data on. It's very, very easy to get stuck in uncanny valley when you're server-side rendering, where your user is in a place where they're able to see content and can tap around it, but they can't actually really do anything, because they're still waiting on the rest of your JavaScript chunks and your modules and so on to load up in order to attach those event handlers. renderToString has also got known issues around being synchronous.
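A rough sketch of that server-side flow with Express ('App' is a hypothetical root component, and react, react-dom, and express are assumed to be installed; this is illustrative, not anyone's actual production setup):

```javascript
const express = require('express');
const React = require('react');
const { renderToString } = require('react-dom/server');
const App = require('./components/App'); // hypothetical root component

const app = express();

app.get('*', (req, res) => {
  // renderToString blocks until the whole component tree has been
  // rendered to markup.
  const html = renderToString(React.createElement(App));
  res.send(`<!doctype html>
<div id="root">${html}</div>
<!-- The client bundle renders over this same markup, attaching
     event listeners so the page becomes interactive. -->
<script src="/bundle.js"></script>`);
});

app.listen(3000);
```

Because that render call is synchronous, a large component tree blocks Node's event loop for the entire render.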
So it can affect things like your time to first byte. Streaming server-side rendered React can actually help here, and I'd recommend checking out projects like react-dom-stream. renderToString can also monopolize the CPU and waste resources when it comes to re-rendering components. Component memoization can help there, so take a look at things like react-ssr-optimization, another project that tries to help with this stuff. But don't consider things like universal JavaScript or server-side rendering with React as a given solution that's gonna be fast. Once again, there will be nuance here, and it's important to measure. If you'd like to learn more about any of the stuff that I've been talking about, I recently published a series of articles called Progressive Web Apps with React, and you can go and check those out. But I'd like to invite to the stage Abhinav, who's gonna talk about Flipkart's experience shipping production progressive web apps with React at scale. Thanks, Addy. So I'm Abhinav Rastogi. I'm a developer on the web team that built Flipkart.com. I spent most of 2015 working on Flipkart Lite, a cutting-edge mobile progressive web app that some of you may have heard about recently. And this year I've been working mostly leading the team bringing that PWA goodness to the desktop side. So let me introduce you to Flipkart. Flipkart is the largest e-commerce site in India and a first-class progressive web app across all form factors and browsers. And by that I mean across mobile and desktop. We got the opportunity to showcase our new mobile website at Chrome Dev Summit last year, and this is what it looks like now, on the side. It's virtually indistinguishable from our native app, both feature- and design-wise. So Alex tweeted this morning that for all of us coming from desktop to mobile, a change in outlook is crucial. Mobile is much less forgiving.
And I wholeheartedly agree with this. Luckily for us, we were going from mobile to desktop, so we carried our learnings along, and this is what the desktop site looks like now. So let me quickly go over the kind of technologies that we are using to build this. At a very high level, we are using a combination of React, React Router, and Flux or Redux on mobile and desktop respectively, and Webpack to bundle it all together, along with a bunch of other technologies that help us build this and pack it together. That includes ES6 and the latest JavaScript features, fetch, promises, and Node on the back end. So let me talk a bit about the architecture. At a very high level, both the mobile and desktop sites for us have a very similar architecture. Let's see what that is. We use route-based code splitting on both. We have smart preloading of chunks, and we implement the concept of PRPL, which we have heard about. We have partial server-side rendering and a concept of build-time rendering on each, and we obviously have service workers for caching different kinds of resources. But an important thing to keep in mind is that the implementations for us are different based on the requirements. There are significant differences in how you need to treat mobile and desktop users. The requirements are different, the user behaviors are definitely different, the attention spans are different, and network conditions are definitely different: mobile will often have a flaky network, 2G or 3G, while desktops tend to have a more stable and faster connection. Device capabilities are very different, as Alex mentioned yesterday, and then there's browser fragmentation, of course, and distribution. For example, in India, the browser distribution on mobile is such that UC Browser takes a fair chunk of the pie, a majority chunk, but on desktop it's the latest version of Chrome which takes the majority chunk.
So that affects how you treat development and which one you target first. You have to take the least common denominator: you solve for the one which is probably going to cause you the most problems, and you build up on top of that, supporting more and more features, treating things like network and excess CPU as a progressive enhancement. So let us look at the differences in implementation, like I pointed out. On the mobile site, we have a concept of build-time rendering, which essentially means that we build the app shells out of our code and create static HTML files, which we serve to the user when we get a request. So there is no request-time processing needed; it's a simple file. We have a service worker in place which caches that shell, and obviously, after that, it can work offline-first. And our mobile site is a composition of multiple single-page apps, which I will talk about in a bit. On the other side, on desktop, we have partial server-side rendering; that means we try to optimize what content needs to be rendered on the server. We don't have a concept of build-time rendering, and we don't have a concept of app shells. Now, the reason for this is simply the user's requirements and the user experience. I feel, and that's what we feel at Flipkart, that the user experience of an app shell can work really well on a mobile device, where you can show a header, a footer, maybe a loader, and some content. But on a desktop, showing just a header and a loader still leaves you with a pretty big blank page. It's not a very good experience. So, therefore, we went for a partial server-side rendering approach. Apart from that, we have a chunked response for our first request, the HTTP response, which allows us to achieve a faster time to first paint; I will explain that in a bit. And we use a service worker for caching other things like data and resources, like images and things like that. So here is the output from a Webpack build.
Webpack supports code splitting out of the box, like Addy was just mentioning, and it figures out the split points based on how you include your components. It also takes care of loading the appropriate chunk when needed, for example when you navigate. The benefit here is that you have significantly reduced the amount of JavaScript you need to render the first fold of your page. For example, in the screenshot I've put up here, the combined build we had for our website at one point was around 206 kilobytes. With code splitting based on routes, we were able to split it up: the home page only needed 32 kilobytes of JavaScript to render, and similarly, other pages needed anything from 7 kilobytes to 100 kilobytes. This really helps a lot. But there's an important caveat here. Out of the box, webpack will try to load these files on navigation. When the route changes, it figures out that the JavaScript for this route isn't present, and it has a map somewhere which tells it which JavaScript file to load. Which means it's downloading, parsing, and evaluating that JavaScript after you've clicked on a link, which is a very bad user experience. To solve that, PRPL comes to the rescue. Implementing these concepts of chunking, streaming, and code splitting, you get a picture that looks like this. The first one at the top is what we saw before all these improvements. You've got your HTML parsing in blue at the top, all your static resources, JavaScript, and CSS start loading once the HTML is parsed, and you get a render time of around 2,500 milliseconds and a DOM complete around 3,500. With these optimizations in place, you get a first paint of around one second, with your resources loading in parallel with the parsing of your HTML. This is achieved using things like preload, script defer, and similar techniques. But this only solves for first paint.
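The route-based loading just described can be sketched in plain JavaScript. This is an illustrative simulation, not Flipkart's code: the route table and component names are hypothetical, and the chunk loaders are stubbed with resolved promises where webpack's `import()` would fetch a real file over the network.

```javascript
// Sketch of route-based code splitting: each route maps to a loader that
// resolves to its module only when that route is first needed. Webpack's
// import() calls produce exactly this promise-of-a-module shape; here the
// loaders are stubbed so the pattern runs standalone.
const routeLoaders = {
  '/': () => Promise.resolve({ default: () => 'HomePage' }),          // ~32 KB chunk
  '/product': () => Promise.resolve({ default: () => 'ProductPage' }), // separate chunk
};

const loadedChunks = new Map(); // each chunk is downloaded and parsed only once

async function loadRoute(path) {
  if (!loadedChunks.has(path)) {
    loadedChunks.set(path, await routeLoaders[path]());
  }
  return loadedChunks.get(path).default;
}

// PRPL's fix for the click-then-wait problem described above: warm the cache
// for likely next routes during idle time, instead of waiting for navigation.
function preloadRoutes(paths) {
  return Promise.all(paths.map(loadRoute));
}

loadRoute('/').then(render => console.log(render())); // prints "HomePage"
```

A real router would call `loadRoute` in its navigation handler, and the preloading step would be scheduled after the current route becomes interactive.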
What about time to interactive and meaningful content? We think your entire content doesn't need to be rendered together for it to be meaningful. For example, the first render we put on the user's page contains the search box, and it functions without any JavaScript. That means the user is able to interact with the plain HTML we serve to them, which gets rendered even before any JavaScript has started downloading. Since a lot of our users start their journey by searching rather than navigating and browsing for products on that page, this really helps us. So, some major wins we've seen from this migration, this adoption of progressive web app concepts on both desktop and mobile. First, route-based code splitting amortizes the high up-front cost of single-page apps and frameworks over the user's session: you don't load all the JavaScript up front, you load it across the session. Second, smart preloading of those chunks, using the PRPL concepts, makes the experience seamless; the user doesn't have to wait after clicking a link for the JavaScript to load. Third, chunked encoding allows us to download JS chunks while the HTML is still being parsed. An interesting approach we took, based on the requirements we figured made sense for users in India, was to solve for repeat visits on mobile specifically and for first visits on desktop. Of course we care about both on both platforms, but we decided to focus on one over the other. Let me talk about the impact now. We saw up to 2x conversion during sale events after we migrated, because of the speed and reliability benefits of progressive web apps we've talked about. We have a significantly reduced bounce rate. Interestingly, a lot of people have had concerns around search engine optimization: how will the crawler crawl the website? What's the impact on SEO?
After doing all this, we've seen a 50% reduction in the time taken by search engines to crawl a page and a 50% increase in the number of pages crawled by Google Search. That's a significant improvement. Apart from that, we've also seen a massive 70% reduction in the tickets that are raised, the issues that we get on the website. There are fewer errors in general. Plus it's much easier and faster to develop, it's more developer-friendly for getting new developers on board, and it's easier for us to fix errors and maintain. Of course, there are a bunch of gotchas. Webpack has been a super useful tool for us; that's what we use, as I mentioned. And its documentation is going through some very well-deserved improvements. Working with PRPL and code splitting, you're bound to run into a bunch of interesting issues, and webpack does provide a lot of help to solve them, but some of it is buried really deep in the documentation. You have to really search for it, and mostly you find the answer on Stack Overflow before you find it in the docs. The first issue we ran into was cross-origin resource sharing with route-based code splitting. An interesting thing that happens, which might be true for a lot of us here, is that JavaScript files and static assets are generally served from a CDN, which is on a different origin than your website. Now, when you do a link preload, you can tell it to load the resource as a script and declare it as a cross-origin, anonymous request; you can define that it's loading as a cross-origin resource. But when webpack tries to load a chunk, like I mentioned, when it sees it needs a new JavaScript file, by default it will not load it as a cross-origin script, and your browser may end up blocking it, which causes quite a lot of headaches.
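To make the preload side of this concrete, here is a minimal sketch of emitting a CORS-correct preload hint for a CDN-hosted chunk. The file name and CDN origin are made up; the point is that the `crossorigin` attribute on the preload must match how the script is ultimately fetched, or the browser can't reuse the preloaded copy.

```javascript
// Build a <link rel="preload"> tag for a script chunk on a CDN origin.
// Without crossorigin="anonymous", the preload and the later script fetch
// use different request modes, so the preloaded bytes go to waste (or the
// script fetch is blocked, as described above).
function preloadTag(href) {
  return `<link rel="preload" as="script" crossorigin="anonymous" href="${href}">`;
}

console.log(preloadTag('https://cdn.example.com/js/home.3a9f2c.js'));
```

The server would emit one of these in the early-flushed `<head>` for each chunk the current route needs.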
So interestingly, webpack does provide a config option you can specify which makes it load those chunks as cross-origin scripts; it takes care of that internally. The second issue: as we know, cache invalidation is a very hard problem, apart from naming variables. When you create a chunk, usually for long-term caching purposes, the chunk's file name will contain a hash; that's how you determine whether a file is a newer version, whether the content is new. Now, when webpack creates these chunks, it needs to maintain a lookup table in your entry chunk, which is loaded at page load, so that it knows, when a given route is opened, which JavaScript file it needs to download. And those files are going to change at some point. So, for example, you have route-based chunks like I mentioned before: you have 15 routes on your website and you have those 15 corresponding JavaScript files. Suppose one of them changes, say you make a change to the product details page. Ideally, only that one chunk should get invalidated in the cache; only that one should need to be downloaded again by the user, and the others should still be served from the service worker or the HTTP cache. But what happens is that because that chunk has changed, its file name has changed, so the lookup table, the manifest in webpack's entry chunk, will also change, which means the entry chunk changes, which means the user ends up downloading extra JavaScript that hasn't actually changed. For that, webpack provides something called the webpack manifest. It's pretty simple: in the CommonsChunkPlugin, you just define a name for the manifest, and you end up with a separate file, around 500 bytes or so, which will just contain that lookup table.
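Both fixes live in the webpack config. Here is a sketch against the webpack 1/2-era API that was current at the time of this talk; the entry name, CDN path, and file names are illustrative, not Flipkart's actual config. `output.crossOriginLoading` makes runtime-loaded chunks carry `crossorigin="anonymous"`, and a CommonsChunkPlugin entry named for the runtime splits the lookup table out of the entry chunk.

```javascript
// webpack.config.js (webpack 1/2-era sketch; names and paths are illustrative)
const webpack = require('webpack');

module.exports = {
  entry: { app: './src/index.js' },
  output: {
    path: __dirname + '/dist',
    filename: '[name].[chunkhash].js',
    chunkFilename: '[name].[chunkhash].js',
    publicPath: 'https://cdn.example.com/assets/', // chunks live on a CDN origin
    crossOriginLoading: 'anonymous', // inject chunks as CORS <script> tags
  },
  plugins: [
    // Extract the small runtime plus chunk lookup table into its own file
    // (a few hundred bytes), so hashed chunk renames no longer invalidate
    // the entry chunk. "manifest" matches no entry, so it receives only
    // the webpack runtime and the chunk manifest.
    new webpack.optimize.CommonsChunkPlugin({ name: 'manifest' }),
  ],
};
```

With this in place, only the tiny manifest file and the changed chunk get re-downloaded after a deploy; the other route chunks stay cached.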
That way, your entry chunk becomes independent of the content of your other chunks. So it's these kinds of small things that we ran into, and that a lot of you may run into, when you're implementing these kinds of things. What's next for us at Flipkart is making things even faster. We're looking into things like HTTP/2 for smartly enabling push of these resources, and we're also working on AMP to make the first visit faster. So that's all from my side. You can reach out to me on this Twitter handle, or to my team at Flipkart. It's great to be here. Thank you. So I've got one more thing. I'd like to tell you a quick story. I don't have a lot of time, but I'd like to tell you a quick story about how a small group of us got to write some code for NASA. A while back, a few years ago, NASA released a master list of software projects they'd cooked up over the last couple of years. This is more than just stuff you run on your personal computer; it's apps that would help with robotics and cryogenic systems and space simulations and all sorts of things. And they had these in a bunch of different places: GitHub, GitLab, SourceForge. It was all over the place. But it was all part of a government initiative to try open sourcing more stuff, and it was kind of neat to see. So off the back of that, NASA released a site called code.nasa.gov that looks a little bit like this. The idea was that at any time you could come to the site and take a look at what NASA engineers were hacking on in the open, which is kind of cool. But I discovered this on Hacker News one day, and my friend Sam Saccone also discovered it around the same time. We tried looking at it on a real device, and it basically crashed my phone. We ended up profiling it a little bit, and there were a number of interesting quirks with this particular implementation. It kept the main thread busy for quite a long time.
In fact, we ended up working on a number of performance audits; there's actually a performance audit I'll be publishing shortly on this whole thing. But we ended up trying to make this existing implementation as fast as we could. This was an Angular 1 app, and at the time that framework wasn't really built with real mobile devices in mind. We ran into all these interesting issues, like digest cycles taking up to a second. This particular app had 10,000 watchers for some reason. They had 300 or 400 projects listed on this page, and they had a GitHub embed for every single entry so that you could go and fork the project. That was an additional 300 or 400 network requests. They also had a ton of web fonts and other interesting issues, issues that aren't atypical of the problems you'd probably run into if you were new to this stuff. And so we started optimizing this as much as we could, but we reached a point where we thought, this just isn't worth it; it's probably worth taking a look at rewriting this thing. And I know that today we've been talking quite a lot about React and Preact and other libraries, but I like this idea of best practices being automated. I think that some of the ideas we've talked about today, around PRPL and code splitting and so on, are things we can do a better job of building in, by default, into today's tooling. I'd love to get to a point where things like Create React App and the Angular CLI and Ember CLI and Next.js, whatever it is that you happen to be using, are considering some of these approaches and looking at where they can provide real improvements to developers, so that we balance developer experience with user experience.
So Polymer does this kind of well with the Polymer App Toolbox; I consider it a good reference for how to do this stuff. Sam and, I think, Taylor mentioned some of this: it's got PRPL with code splitting built in, lazy loading, offline caching, and support for HTTP/2 server push. Using the Polymer App Toolbox allowed us to ship a completely brand new version of code.nasa.gov. This is NASA's very first progressive web app, which we deployed last night. Thank you. I've got to give big props to Frankie over on the Polymer team and Keanu, Hannah Lee, and all the folks that helped us get this shipped. Basically everything here is faster. Here we were looking up things like code for the Apollo 11 mission from all those years ago, and looking at ways in which NASA would publish projects or share projects with other people. All of these views perform really well on a real mobile device; it's a massive improvement from what they had before. We spent a lot of time on things like making sure that the infinite scrolling for their project list view was really, really fast, hitting 60 frames a second. And this experience works really great on desktop as well. The experience there is, again, responsive: we can see the list, and you can search things really, really quickly. There's no lag, and all of the views work just as well there, just with a slightly different look and feel. We profiled this using Lighthouse on a real device with a real network, and this thing was interactive in under four seconds, under 4,000 milliseconds. We were really happy with that, because we actually spent less than a week redoing the site.
It's not a complex site by any means, but the idea that you could completely throw away an old code base and try exploring something like the PRPL pattern in such a short amount of time with a very small team was, I thought, kind of cool. So we really enjoyed hacking with NASA on that site, and I encourage you to contribute to code.nasa.gov; just being able to tell your mom that you hacked on NASA code is kind of neat, so that's always an opportunity. It's all open source; this entire app is open source, and you can go and check it out on NASA's GitHub organization, at github.com/nasa/code-nasa-gov. I am certain we will get pull requests from folks mentioning things we've done wrong, but I welcome all of those. So please feel free to check that out and let us know if there's anything we can improve. In closing, I hope that some of the ideas in this talk give us inspiration to perf the web forwards together, because we're all in this together. I see browser vendors as being in a good place to tell you about the engine and the performance targets we should be hitting. I see framework authors and tooling vendors as people who ideally want to make sure developers are able to ship the right experiences, experiences that benefit their users. So let's work together. If you're working on any of this stuff, please talk to me, please talk to us, and let's move things forward together. Thank you.