Hello everyone, and welcome to our talk, Fast-Track Ember with FastBoot and Embroider. I am Suchita. Hey there, I'm Thomas. So first, let us do a little introduction of ourselves. My name is Thomas. I work on the web framework team at LinkedIn, and during my work I interact a lot with Ember. I also enjoy contributing to the Ember community. Outside of my coding life, I enjoy making special drinks at home, and I'm also practicing latte art. I also love playing games. My all-time favorite is the game Transistor, and I'm currently playing Final Fantasy 7 Remake and have been loving it. And next, I'll hand it over to Suchita. Oh, thank you so much, Thomas. I'm definitely going to ping you after this for the latte art. And hi everyone, I'm Suchita. I'm a senior engineer at LinkedIn, working on the web infrastructure team like Thomas. Outside of work, I'm a core member of the FastBoot working group, and in my free time I love playing cricket. It's a game very similar to baseball and super popular in India. I'm also a huge mechanical keyboard fanatic, and here is my latest creation, where I swapped my Gateron Brown switches for Holy Pandas and updated it with new keycaps. Also, during the pandemic, since we were all working from home, I uncovered a newfound passion, which is gardening. Let me share a few creations of mine from last fall. I sliced up a tomato and planted it, and on day 67 I started seeing the first tomato blooms showing up, and the rest is history. We had so many tomatoes that we ended up putting them in almost all of our recipes, and I also grew some Thai chilies last fall. So this is about us, what we do, and what we do in our free time. Now let's get back to our main topic, which is fast-tracking Ember with FastBoot and Embroider. Last year, I presented the journey of Ember from 1.x to Octane, where we saw the paradigm shift that Octane introduced in the Ember community.
This year, we will take you through the story of a team that migrated to Octane and is now looking to further improve their app in various aspects. We will first get to know the team and their requirements. We will then move on to understand how FastBoot powers the server-side rendering experience for users. Then we will talk about how we can optimize at build time with Embroider to further improve the user experience, and also the developer experience. And last, we will talk a little about what is on the roadmap for FastBoot and Ember. All right, Suchita, should we get started? For sure.

So now let's begin the story. Zoe is an Ember developer who is super enthusiastic about Ember and is leading the web team of an awesome team at an awesome company that uses Ember.js as its framework. The current status of this team is that they recently migrated to Ember Octane and are super happy with it. However, soon after, they got a few new requirements, which is very normal in our industry, right? Once we finish one feature requirement, we get a new one. So that's what happened with this team, and let's see what the requirements were. The first requirement was to deliver content to the page faster. The next requirement was to improve user engagement, especially for countries with slower networks. The last requirement was to improve the SEO, or search indexing, for a specific route. Looking at these requirements, the first thing that comes to mind is server-side rendering. So what is server-side rendering? To give you a brief overview, let's take a step back and see how client-side rendering works. When a browser makes a request for a page, the server returns an HTML shell along with the JavaScript and CSS tags. As you can see here, the body of the HTML returned by the server is empty. Once the JavaScript executes on the client side, it renders the HTML on the client.
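For illustration, the empty shell returned under client-side rendering might look something like this (a minimal sketch; the asset file names here are made up):

```html
<!-- Client-side rendering: the server returns only a shell.
     The body stays empty until the JavaScript boots on the client. -->
<!DOCTYPE html>
<html>
  <head>
    <link rel="stylesheet" href="assets/app.css">
  </head>
  <body>
    <!-- empty: content appears only after the app script runs -->
    <script src="assets/vendor.js"></script>
    <script src="assets/app.js"></script>
  </body>
</html>
```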
On the other side, with SSR, when a user makes a request for a page, the server takes in the request, runs the app on the server, builds the HTML, and returns that HTML to the client as the response. Hence, instead of an empty body shell, you will see a page source filled with fully, statically rendered HTML. In Ember, this is possible with a library known as FastBoot. FastBoot has been around for a while, and most of us know that it provides an SSR solution for Ember apps. But today, let me showcase what makes FastBoot so special and fast. You can imagine the Ember application container as a stateless container that does not contain any user-specific data. Instead, it only contains the parsed classes, components, routes, registry information, and so on of your app. When a user request comes in, it is associated with an application instance. So what is an application instance? You can think of an application instance as a stateful container, as opposed to the stateless one, that contains information like user data, services, and models, which is very specific to the request that came in. In the meanwhile, if there are other requests coming in, then instead of blocking the entire thread serving the initial request, we put this app instance in the background using asynchronous I/O and spin up new app instances from the same app container. This is huge, since we are cutting down the cost of re-evaluating the whole app for every request. Instead, we are reusing the same application container, and that is how we can serve multiple requests at the same time with different app instances, concurrently. Once a request has finished being served, we destroy the app instance and its state. One thing to note here is that because these app instances run inside a sandbox, state leakage between the app instances is prevented, which is great.
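The container-versus-instance model described above can be sketched in plain JavaScript (a simplified illustration of the idea, not FastBoot's actual implementation; all class and method names here are made up):

```javascript
// Illustrative sketch: one shared, stateless "application container" is
// evaluated once, and each incoming request gets its own short-lived,
// stateful "application instance" created from it.
class ApplicationContainer {
  constructor() {
    // Parsed once: routes, components, registry information, etc.
    this.registry = { routes: ['index', 'articles'] };
  }
  buildInstance() {
    return new ApplicationInstance(this);
  }
}

class ApplicationInstance {
  constructor(container) {
    this.container = container; // shared, read-only
    this.state = {};            // per-request: user data, services, models
  }
  handle(request) {
    this.state.user = request.user; // request-specific state
    return `<h1>Hello, ${this.state.user}</h1>`;
  }
  destroy() {
    this.state = null; // state is discarded once the response is served
  }
}

const container = new ApplicationContainer(); // the app is evaluated once
// Two concurrent requests reuse the same container but never share state.
const a = container.buildInstance();
const b = container.buildInstance();
const htmlA = a.handle({ user: 'Zoe' });
const htmlB = b.handle({ user: 'Tomster' });
a.destroy();
b.destroy();
```

The key point the sketch shows: building a new instance is cheap because the expensive parsing lives in the container, and destroying an instance throws away only per-request state.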
With that being said, one thing to be mindful about: since the application instances leverage the same container, we should avoid putting anything in the global scope or on the class scope, because it might otherwise end up leaking between the instances. So that's one thing we should be careful about. Now that we understand the mental model of how FastBoot works and what its benefits are, let us see how we can measure the success of leveraging FastBoot in our app. Google recently announced the Core Web Vitals metrics, which provide guidance for determining the quality of user experience on the web: LCP, or Largest Contentful Paint, measures loading performance; FID, or First Input Delay, measures interactivity; and CLS, or Cumulative Layout Shift, measures visual stability. For the purposes of this talk, we will be focusing on the LCP metric, since that's where FastBoot really helps. So what exactly does LCP measure? It basically measures how much time it takes before the user sees the most important content on the screen, or what you could call the hero element of the screen. Anything that is not in the viewport is not part of the calculation. If you look carefully at the scale on the left that grades this metric, anything above four seconds is considered poor. Just imagine if a user had to stare at a blank screen or a loading screen for more than four seconds: it's certainly not ideal, it's not a good user experience, and it might potentially lead to the user abandoning the page. But if we can improve the time it takes to paint the content, then we can certainly improve the user experience, give users faster feedback, and of course a better experience.
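As a side note, LCP can be observed in the browser with the standard PerformanceObserver API; a minimal sketch (browser-only code, so it does nothing outside a page, and the callback name is made up) might look like:

```javascript
// Browser-only sketch: report LCP candidates via PerformanceObserver.
function observeLCP(onReport) {
  const observer = new PerformanceObserver((entryList) => {
    const entries = entryList.getEntries();
    // The last entry is the current largest-contentful-paint candidate;
    // it can be superseded if a larger element paints later.
    const latest = entries[entries.length - 1];
    onReport(latest.startTime); // milliseconds from navigation to paint
  });
  observer.observe({ type: 'largest-contentful-paint', buffered: true });
  return observer;
}
```

In practice, most teams use a library such as web-vitals rather than wiring this up by hand, since the final LCP value is only settled once the user interacts or the page is hidden.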
Now let's see the role FastBoot plays in improving the LCP metric, and connect these two aspects with a side-by-side comparison: a world before FastBoot on the left, and after FastBoot on the right. For demonstration purposes, both of these apps are simulated on a slower network, just to show you the impact. Let's see what happens when the user hits the same route at the same time, before and after FastBoot. You can clearly see on the right that the FastBoot-enabled app is already showing the content, whereas we are still waiting for the non-FastBoot app. Now it is showing up. The LCP measurement for the after-FastBoot case was 2.9 seconds, whereas the non-FastBoot case took 9.5 seconds. This clearly shows the value that FastBoot brings to the table. It would not only give users a better experience, but would also help retain the users who would otherwise leave the page after seeing a blank screen. Now let us understand why we see such a shift in the time taken to reach the initial paint before and after FastBoot. To do that, let's look at the timeline of the before-FastBoot world first. Before FastBoot, the app first waits for the HTML and CSS required for the page to load. It then loads the JavaScript required for the page, and then it makes any required API requests, here the articles call you can see in the example, to render the first page load, and that's where you see the LCP, or the first content, showing up on the screen. With FastBoot, however, the timeline is a little different. Here we can see that as soon as the HTML and CSS are loaded and the browser has finished parsing, the content immediately shows up on the screen and the LCP is marked. This happens while the JavaScript files are still being loaded.
This means that we are not blocking the render until the JavaScript loads; rather, we are flushing out the content as soon as it is available. Since we now know how beneficial FastBoot is, I would like to specifically highlight one aspect of FastBoot which helped a lot with this demonstration. To do that, first let's look at the before-FastBoot case. Here, if you look carefully, we are making two XHR calls, namely articles and tags, where the articles call is required to display the list you can see on the left, and the tags call is required to show some data at the bottom of the screen. This is normal, right? You make API calls and then render their responses on the screen. This is the normal scenario. However, after FastBoot, I'll show you something different. Here you can see that, surprisingly, we are not making the articles call, but you're still seeing the data on the screen. Just so you know, for the purposes of this demo we deliberately made the tags call lazy-loaded, to show that FastBoot gives developers a choice of whether they want an API call to be made on the server side, or whether they want to lazy-load the network call on the client side only. Okay, coming back to our example: we can still see the articles list showing up without making the XHR call, so what is enabling this? That's where the concept of the shoebox comes into the picture. Let me walk you through how the shoebox works, and then I'll show you an illustration of what it looks like. When a user makes an HTTP request, FastBoot receives the request, runs the Ember app on its side, and then begins the process of serving the request.
In the meanwhile, if the app has any API calls that need to be made to serve the initial page, which is articles in our example, then FastBoot will make that call to the data center, and the data center will respond back with the data. And here is where the real magic happens: FastBoot will also store this API response in something called the shoebox. Now, on the client side, when the app loads again and the time comes to make the API call for articles, instead of making the XHR call, the client will query the shoebox instead and say, hey shoebox, do you have a response matching this particular API request? And if the shoebox has a matching response, it's just going to return it then and there. So you can see the clear advantage here: not only are we eliminating an extra network call, we are also responding with the data almost immediately, so the render time improves as well. Now, where do we store this information, and where does the client get it from? We basically embed the API response inside index.html itself, so that the client can query it later. It is stored inside a script tag named shoebox, which you can see here on the screen. So this was a little bit about what the shoebox is and how it works, in a nutshell. All of this is great, right? We saw all the awesomeness FastBoot has to offer, but now we come to the main question: how hard is it to adopt FastBoot? How hard is it to adopt all of the concepts I just showed you? Do we need to make a lot of changes? Well, good for you, it's super simple. All you need to do is run one simple command, ember install ember-cli-fastboot, and boom, your app is FastBoot enabled. We also have an opt-in feature for smoother rehydration on the client side.
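Putting the shoebox pieces together, the round trip might be sketched in plain JavaScript like this (a simplified illustration of the idea, not FastBoot's actual implementation; the function names are made up):

```javascript
// Simplified shoebox sketch. In a real FastBoot app, the server serializes
// responses into a script tag inside index.html so the client can read
// them back instead of repeating the network call.
const shoebox = {};

// Server side: make the API call once, render with it, and stash the result.
function serverFetchArticles() {
  const response = [{ id: 1, title: 'FastBoot 101' }]; // pretend data-center call
  shoebox['articles'] = response; // stored for the client to pick up
  return response;
}

// Client side: consult the shoebox before reaching for the network.
function clientFetchArticles() {
  if (shoebox['articles']) {
    return shoebox['articles']; // matching response found: no XHR needed
  }
  // Otherwise we would fall back to a real network request (omitted here).
  return null;
}

serverFetchArticles();
const articles = clientFetchArticles();
```

The design choice worth noticing: the cache is keyed by request, so the client only skips calls the server already made, and everything else still goes over the network as usual.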
What I mean is that currently, when you run this command, after the server side responds with the HTML, the Glimmer VM will re-render the whole DOM tree on the client side again. However, if you enable the experimental rehydration feature, by setting EXPERIMENTAL_RENDER_MODE_SERIALIZE to true, then Glimmer will only update the parts of the HTML that need correcting on the client side, which means things will be faster if you start using this experimental feature. To leverage the shoebox, you can add a package called ember-data-storefront and extend it in your application's adapter layer, which will do all the wiring required for your app to be shoebox-ready. So, everything is awesome.

Now let's see whether we were able to address the requirements the team had. Let's do a quick recap. The first requirement was to deliver content faster to the end user, and by improving the LCP factor we were able to achieve that. That's great. Next, by displaying content sooner on the page, we saw better engagement from users due to the early rendering, which addresses the next requirement. And finally, since SEO is inherently supported by FastBoot, the search indexing requirement was fulfilled as well. So now everyone is happy. A few moments later... Surprise! New requirements come in. This time, the good news is that the company and the product are doing great, and the team is going to build more features and more pages. But the problem is that since they will be building more pages and writing more code, more code will go into the JavaScript bundle that is ultimately delivered to every user's browser. So the time to a fully booted-up client-side Ember app will increase. The team is thinking: can we ship only the JavaScript needed for the initial page that the user is requesting?
That way, when we are working on new pages, we won't negatively impact the performance of our homepage. And there is an answer from the Ember community: optimize the build of the Ember app. So let's introduce Embroider, a modern build system for Ember.js applications. There are a lot of optimizations Embroider provides to your Ember app; let's talk about some of the features. The first thing I want to highlight is code splitting. Traditionally, what the Ember CLI build pipeline does is look at all the Ember code in your application and all your dependencies, transpile that code, and ultimately build it into two separate JavaScript files: one for your own application, and the other for all your dependencies combined. What Embroider does is take advantage of the ES module syntax now adopted by the Ember community, the import/export syntax you have in your components, helpers, and all your other files. It performs static analysis on all those modules and figures out exactly what code is needed for a specific route. For example, for our homepage we have the application route, the index route, and their services and templates; and for an article route, which the user won't necessarily visit on the initial load, we can put the code in a separate chunk, and all the components used only by the article route will go together into that chunk. This time, when the user visits the initial homepage, we won't be delivering any of the code for the article route to the end user. You can also choose to include a route in the initial bundle if that route is important and the user might go to it immediately. The benefit of route-based code splitting is that we can deliver a smaller initial JavaScript asset to the end user, and when we are working on another route, we won't be concerned about impacting the performance of the homepage.
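Route-based splitting like this is configured in the app's build file. A sketch of an ember-cli-build.js using Embroider might look like the following; the option names here follow @embroider/compat's published flags, but treat the exact set as something to verify against the Embroider README for your version:

```javascript
// ember-cli-build.js (sketch)
'use strict';
const EmberApp = require('ember-cli/lib/broccoli/ember-app');

module.exports = function (defaults) {
  const app = new EmberApp(defaults, {});

  // Hand the Ember app to Embroider's multi-stage build,
  // with webpack as the final-stage bundler.
  const { Webpack } = require('@embroider/webpack');
  return require('@embroider/compat').compatBuild(app, Webpack, {
    staticAddonTrees: true,     // resolve addon modules statically (enables tree-shaking)
    staticHelpers: true,        // no dynamic helper lookups
    staticComponents: true,     // no dynamic {{component someName}} lookups
    splitAtRoutes: ['article'], // lazy-load the article route as its own chunk
  });
};
```

With splitAtRoutes in place, you would also swap the base class in app/router.js from Ember's router to the one provided by @embroider/router, so the split chunks are actually fetched when the user enters those routes.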
The other thing I want to call out is tree-shaking. Imagine you are using an Ember addon which provides a lot of useful helpers or components for your Ember app. The problem is, not everyone is going to use every piece of functionality provided by the addon. If you are only using two of the helpers it provides, traditionally Ember CLI will still bundle all of the helpers into a single JavaScript asset, most of which you don't need. What Embroider does is analyze the imports, see exactly which modules, helpers, or components you are actually using, package only those into the final chunk, and deliver only those to your end user. So this is great, and now you may be wondering how you would use Embroider. Let's see how you can enable Embroider in your Ember app. The first thing you need to do is, of course, install the packages from Yarn or npm, and then you can start leveraging Embroider in your ember-cli-build.js, which is where the build for your Ember app happens. Embroider runs a multi-stage build: the first two stages make sure all your Ember addons are compatible with the new build pipeline and compile away all the Ember-specific code. Then you can really start leveraging all the benefits from the wider JavaScript community; in this case we are using the webpack bundler to bundle all our JavaScript asset files. You can use the options provided by Embroider to progressively opt in to all the optimizations it provides. You can turn on the staticAddonTrees, staticHelpers, and staticComponents options to tell Embroider that you are not using the dynamic component syntax, so Embroider can safely perform the static analysis and build out all the chunks for your files. And then you can start enjoying route-based code splitting by using the Embroider router. What it does is read which routes you want to split away, and then, when the Ember router is running and enters a new route, it will fetch those
routes from the server. You can choose which routes to split and which not to split, so you have full control. Another benefit of enabling Embroider is the ability to use dynamic import. Compared with static imports, which are the import statements you write at the very top of a JavaScript file, you can use import() as an expression that returns a promise, anywhere in your components or any other JavaScript code. Once the promise resolves, you get access to the module you want to use, so at runtime you have full control over exactly when that module, the JavaScript file, is fetched from your server and put to use. And now you might ask: can I use this today for my Ember app? The answer is yes, you can start using it, and we have been using it at LinkedIn for a while, so we can share some of the benefits we are seeing from enabling Embroider on a large LinkedIn application. We are really seeing Embroider bring faster speed to our end users. With Embroider enabled, we are seeing a 35% reduction in the initial JavaScript size, and when the JavaScript is delivered to the end user's browser, the final parsed size of all the code combined is almost half of the original size. The reduced JavaScript bundle size translates to a faster page load time: from the beginning to a fully booted-up client-side Ember app, there is a 6% reduction in page load time, which is roughly 200 milliseconds on a simulated slower network. Most of the benefit comes, as expected, from faster download and parsing time; that's what Embroider's code splitting and static analysis bring us. All the metrics we are seeing are collected by TracerBench, a controlled performance benchmarking tool we use a lot at LinkedIn to ensure we don't have any performance regressions. Under the hood, it runs the initial page visit multiple times and gives us a statistically
confident result. In this case, TracerBench tells us with more than 95% confidence that Embroider is indeed making the app much faster. The benefit for developers is also obvious: for us, after enabling Embroider, the cold build time has dropped 58% from the original 100 seconds, and most impressively, the live reload time has dropped from 3.5 seconds to less than 1 second, so our development speed is also skyrocketing. To recap: after enabling Embroider, we are shipping less code to the end user, which translates to a faster page load time, and as a bonus, our developer experience is also improving because of the faster build speed. This is awesome. I think I can see the team already celebrating; they seem to be super happy with the FastBoot and Embroider adoption. So, Thomas, let's talk a little more about what's next in the world of FastBoot. Sure. There are a lot of things we can do to make the FastBoot and Embroider experience much better, so here are a few things on my mind. We have fastboot-app-server, which is a production-ready, out-of-the-box server-side deployment option for people using FastBoot, and we want to add worker threads to it so we can further improve server-side throughput. We also want to support streaming back the initial HTML, so people don't have to wait for all the HTML to be generated on the server side before we start shipping it back to the browser, which will give us an even faster FastBoot app. We also want to make the combined FastBoot and Embroider experience a bit better. Right now, if you load not the homepage but another page in the browser, Embroider fetches the JavaScript for the separate chunk after the Ember app boots up. What we want to do is handle this in the FastBoot world as well: when you load the HTML, it can in parallel fetch all the JavaScript necessary for the Ember app to boot up. Awesome. And just to give you more information, we are also planning to stabilize the rehydration feature and enable it by default for
all users, and to build better testing infrastructure to make it easier for apps to test against FastBoot. And of course, we want to continue the effort of improving the developer experience, for example by adding more instrumentation, enhanced documentation, and so on. Cool. And here are some more resources for folks to check out: you can learn more about FastBoot at its homepage, and about Embroider at its GitHub repo, where there are a lot of instructions and more details. All the code for the demo app we showcased today is on GitHub for you to look at yourself. We really hope that FastBoot and Embroider, these modern toolings, can provide a better Ember developer experience and real value for our end users, and we really hope you are as excited as we are for the future of Ember. Oh wait, but that's not all. We have one more thing as a bonus: we are hosting a virtual FastBoot meetup, where we will do a deep dive on FastBoot and its various concepts and aspects. Please feel free to sign up for the meetup on emberfastboot.com, and we can't wait to see you all there. Thank you very much, everyone.