All right, so I'm here to talk about Ember, but I'd first like to invite you all to join me on my professional network, which is LinkedIn. But yeah, I'm actually here to talk about Ember. So Ember has been really successful at allowing small teams to build high-quality applications in a relatively short amount of time. But what would it look like if you took that same framework that allows these small teams to be extremely productive and unleashed it on a massive engineering organization? So the name of my talk is Ember at Scale. As Luke mentioned, my name is Chad Hietala. I'm a staff engineer at LinkedIn, where I work on a team called Presentation Infrastructure, and we're primarily focused on building client-side solutions to help product teams build products. So this is our story about picking Ember, the things that we considered, and some of the problems that we solved along the way. But before I get into using Ember at LinkedIn, we first need to understand where LinkedIn came from, and to do so we have to go back at least to when I joined the company. I joined in 2013, almost three years to the day, actually. At that time, LinkedIn had about 3,600 employees and about 275 million members. At that point we're still not a small company; we're actually pretty large. But anybody who's familiar with LinkedIn knows that it's a much bigger company than 3,600 employees today, and we have 400 million members. One of the lessons I quickly learned at LinkedIn, having come from a startup, was that any company of this size needs to have a very good product and technology balance. This was new to me because I had literally ridden the JavaScript hype train of 2011 and was able to ship Backbone, Angular, and Ember applications into production at that startup.
And I mention this because the first project that I worked on used JSPs, and I had never used JSPs up until this point in my career. Not only was I using JSPs, I was actually working on a 12-year-old system that talked to an Oracle backend, and this was the original tech stack that was built when the co-founders were in Reid Hoffman's living room getting the first version of LinkedIn out the door. So me coming to LinkedIn, I'm like, what year is this? I came from a place where I was doing a lot of things with JavaScript, and I was really surprised that I was building something in JSPs. There's this quote: if it ain't broke, don't fix it. Clearly that's from somebody who is not in the JavaScript community, because we tend to reinvent the wheel about every six months or so. But when you start thinking about it rationally, it makes sense. When you're getting tens of millions of users to sign up year over year, the technology is not the problem. You only really need to reassess the technology when it's hindering your ability to create new products and iterate on existing ones. So 2014 rolls around, and in just that one year we went from about 3,600 employees to 6,000 employees, so almost doubled. We also added about 60 million members during that first year that I was there. So lots of people, both on the employee side and on the member side. But it became pretty clear to us that it was getting harder and harder for us to iterate. We had a lot of software rot. If you looked at any given page on LinkedIn.com, you would see YUI, jQuery, Backbone, Dust, JSPs, and then a bunch of internal libraries that were used to cobble together a user interface. Not only that, we didn't have a proper dependency management system.
What that really meant, when you were creating a new product, was that we had this method called li.namespace, and what it allowed you to do was register a namespace onto a global object. That's basically where you wrote your product. So in 2014, I joined a team to help reduce this complexity and unify the tech stacks we were using to build user interfaces. Like any other serious company out there, what you end up doing is taking some design and then building that design in n number of frameworks, just to get a feel for how each framework works, what problems it solves for you, and how it actually attacks those problems. Horse JS puts it more elegantly: it's always versus, Ember versus somebody. I think that speaks to the benchmarking blogs you'll see, like, hey, let's take a virtual DOM library and benchmark it against Ember. We saw some of this earlier this morning, comparing and contrasting frameworks. Benchmarks are really good, and it's important to benchmark the framework you're going to pick, but they can't be the only thing you consider. At LinkedIn, we really have to consider how the organization thinks about technology and how it thinks about building product. One of the things that comes to mind right off the bat is that all the technology at LinkedIn that's used across many different products is centrally supported. It isn't a single team, because you have various concerns: there's a team that works on the REST framework, there's a team that works on databases and data pipelines, and there are people like me who work on clients. But the idea is that if you're going to build an application, you have a set of tools to build from. Having that shared infrastructure solves about 90% of the problems developers would have if they were to start with nothing.
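To make the li.namespace idea concrete, here's a minimal sketch of what such a helper might have looked like; the real internal implementation wasn't shown, so the details here are assumptions.

```javascript
// Hypothetical sketch of a namespace helper like li.namespace: each call
// carves out a spot on a shared global object where a product registers
// its code. `root` stands in for the page's global (window).
const root = {};

function namespace(path) {
  // Walk the dot-separated path, creating empty objects as needed,
  // and return the innermost one.
  return path.split('.').reduce((obj, key) => {
    obj[key] = obj[key] || {};
    return obj[key];
  }, root);
}

// A product team registers its module under its namespace:
const profileEdit = namespace('li.profile.edit');
profileEdit.init = () => 'booted';
```

Every product hanging off one global like this is exactly the dependency-management gap being described: nothing tracks who reads or writes which namespace.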
All they're really responsible for is building the product on top of these shared solutions. There's actually a really great talk by Steve Jobs from around 1997 where he's pitching Cocoa and how it solves a lot of these common problems that no developer actually wants to think about, because all they want to do is build their product. The analogy is basically that Cocoa is scaffolding for applications: it starts you off on floor 26 instead of floor nine when you go out to build your product. We most definitely have a similar goal at LinkedIn. We want to share as much infrastructure as possible. And because we're doing shared infrastructure, less is actually more for us, and the reason is that we really like to solve problems holistically. To give you an example of this: internally, tracking across all of the applications has to be done in a very spec'd-out way. There's a schema for all tracking events, and when you scale, you want to make it dead simple for developers to comply with tracking at that scale. So what we did internally was build an Ember add-on for tracking events, and once you install it, you're 90% of the way to being compliant. You basically just have to instrument your application with the event names where you're going to do tracking, and things like actions happen for free; you just need to supply the tracking key. The other thing we really had to take into consideration is that the only constant is change (the slide is a little bit backwards, but you get the idea). There are multiple levels to this, and it's true about many things in software and in life.
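As a rough illustration of that division of labor (the real add-on's API wasn't shown, so every name here is made up), the add-on can own the schema'd envelope while product code only supplies tracking keys:

```javascript
// Hypothetical sketch: the add-on wraps every event in a schema-compliant
// envelope, so product code can't emit a malformed tracking event.
function makeTracker(send) {
  return {
    trackAction(trackingKey) {
      // The envelope (type, schema version, timestamp) comes for free;
      // the product only provides the key.
      send({ type: 'action', key: trackingKey, v: 1, ts: Date.now() });
    },
  };
}

// Product code: instrument an action with just its tracking key.
const sent = [];
const tracker = makeTracker((event) => sent.push(event));
tracker.trackAction('profile-edit-save');
```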
The thing we really concentrated on is that we allow people to move around from team to team in the organization, and it's really great for people's careers because you get to work on very interesting problems, or the problems you're most interested in. But it comes at a cost when you don't have shared solutions to problems. This was very much the case for the client-side stuff: I worked on three teams in my first year and a half at LinkedIn while trying to figure out what I wanted to do, and this was clearly a problem. The other thing is that the JavaScript community is kind of crazy, and web development in general is crazy, as I alluded to before; things change quite often. So we wanted something that evolved over time but made certain guarantees about API stability and about when releases were actually going to happen. And so we ended up picking Ember, and we think that Ember aligns with how LinkedIn thinks about building products and how we think about managing technology. Ember allows you to scale because you have shared solutions and conventions for common problems. Ember provides you with enough scaffolding that it starts you off on floor 23 instead of floor six, but you're probably still going to need to solve some pretty interesting problems at the infrastructure layer. So now I want to talk about those interesting problems that we faced and how we attacked them, in the hopes that we can take these ideas and instill them into the community, and maybe build floors 24 and 25, so that we as a community can have shared solutions to these problems. Tom and Yehuda talked a little bit this morning about initial render performance, and this was something we thought quite a bit about before we even started building our Ember application at LinkedIn. We feel we should be able to leverage the server to help solve some of the problems of initial render, so I'm going to walk through how we did that.
So we created this piece of infrastructure called the base page renderer, otherwise known as the BPR (not PBR, but you get the idea). It's responsible for serving the index.html of an Ember app. But not only is it responsible for serving the index.html; we look at the base page renderer as a performance enhancement on top of client-side applications, because the BPR can run in three different modes. It runs in vanilla mode, which is probably the mode most people are familiar with. It's how you would normally serve an Ember application: an index.html comes down from either the server or a CDN; inside that index.html you have some script tags; the script tags go out to the CDN again, and you get some assets back; once those assets come back, they parse and eval, and you start booting your application up; you then go back to the server to fetch some data; and finally you come back to your application and render the screen. So that's vanilla mode. Then there's server-side rendering, which I'm not going to go too much into because Tom basically talked about FastBoot this morning, but we do have the ability to run the BPR in SSR mode as well. The last mode we can run the base page renderer in is a mode called BigPipe. I want to spend a little time talking about BigPipe, because I think it's an idea that not a lot of people are talking about, and there are ways we can leverage some of the infrastructure that's been built for FastBoot. The term BigPipe was actually coined by Facebook in 2010, so at this point the idea is about six years old. But we need to get creative about how we're going to build these applications in the future, and I think this is a good way of doing it.
What BigPipe actually means is that you keep the HTTP connection alive and use chunked transfer encoding for the HTML you're sending down to the browser. This is part of the HTTP/1.1 specification and has been in browsers for a very long time. To give some visualization of what chunked encoding does for you: when you do server processing for, let's say, a normal website, there are two ways you can think about rendering or sending out the index.html. You can do what is known as store and forward, where you build up the document in memory, and once the document is complete you send the fully materialized document down to the browser. The other is chunked encoding: every time you finish processing a chunk, you stream it down to the browser as soon as it's ready. This gives you the concept of a stream, and you're streaming HTML down to the browser. This isn't a WebSocket or anything like that; this is HTTP streaming, and you're just keeping the connection alive. So with that understanding of chunked encoding, we have a diagram like this, which is a high-level architecture of what we're doing at LinkedIn. We have the browser, and we have the BPR, and the BPR actually has a version of the Ember app inside of it, which is the same Ember app that is running in the browser. I'll talk about how we have those guarantees later, but for now, it's the same application on the server that's running in the browser. Then off to the side we have an API server. When a request comes in from the browser, it goes directly to the BPR. In this case we're asking for LinkedIn profile 123. What happens then is the BPR will almost immediately return the head of the document.
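Here's a small sketch of the difference, using a stand-in response object rather than a real server (in Node, successive `res.write()` calls on an `http.ServerResponse` give you chunked transfer encoding automatically when no Content-Length is set):

```javascript
// Stand-in for an HTTP response so the two strategies are visible side by side.
function makeResponse() {
  const flushed = [];
  return {
    flushed,                                // what the browser has received so far
    write(chunk) { flushed.push(chunk); },  // chunked: each write goes out immediately
    end(chunk) { if (chunk) flushed.push(chunk); },
  };
}

const parts = ['<html><head>head markup</head>', '<body>app markup', '</body></html>'];

// Store and forward: build the whole document, send it once at the end.
function storeAndForward(res) {
  res.end(parts.join(''));
}

// Chunked encoding: stream each part down as soon as it is ready.
function chunked(res) {
  for (const part of parts) res.write(part);
  res.end();
}
```

With the chunked version, the browser already has the head (and can start fetching assets) while the server is still producing the body.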
Inside of this, I'm not sure if you can read it, but there's the start of the HTML tag, any of your CSS that's inside the head tag, and then any scripts that you're going to download; if you're using Ember CLI, that's vendor.js and app.js. While you're doing this, the browser can make the CDN request to get those assets before you close out the connection. The other term for this type of technique is early flush, and what it means is that you flush early so you can start downloading assets in parallel. After we've sent down the initial chunk of HTML, we use the FastBoot visit API inside of the Ember app. We say, hey, Ember app, I want you to visit profile 123. In doing so, the Ember app is going to route to profile 123, and some things are going to happen. It's probably going to try to make some API requests, and when it does make an API request, we want to take that request and proxy it to our API server. We actually wanted to do a bit more performance work in this area by parallelizing more work. So Nick Iaconis, who is somewhere here, wrote this thing called ember-prefetch. What ember-prefetch does is introduce a new hook into your route, and that hook is non-blocking. What I mean by non-blocking is that in a traditional Ember application, you have the model hook, you return a promise out of the model hook, and Ember waits until that promise resolves before moving on to, say, your nested routes. It moves on to the next route and does the same thing: you make an API call, the API call comes back, and then it moves on and drives the state forward. With ember-prefetch, you kick off the data fetching for all the routes up front. So this is from a tool we have internally called the call tree, and the top one, if you can see it, is running the Ember application inside of Node.
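The serial-versus-parallel difference can be sketched in plain promises (this isn't ember-prefetch's actual implementation, just the scheduling idea): with blocking model hooks, a child route's fetch can't start until the parent's resolves; with a non-blocking prefetch-style hook, every route's fetch is kicked off up front.

```javascript
const started = [];
const fetchFor = (route) => {
  started.push(route);                 // record when each fetch begins
  return Promise.resolve(`${route}-data`);
};

// Blocking model hooks: each nested route waits for its parent's data.
async function serialModelHooks(routes) {
  const results = [];
  for (const route of routes) results.push(await fetchFor(route));
  return results;
}

// prefetch-style: start every route's fetch immediately, then await them all.
function parallelPrefetch(routes) {
  return Promise.all(routes.map(fetchFor));
}
```

Calling `parallelPrefetch(['app', 'profile', 'profile.view'])` starts all three fetches synchronously, whereas the serial version has only started the first one by the time it suspends on its first await.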
These are the requests that the API server is receiving. In the top one, you can see all the calls being done in serial, whereas the bottom one has ember-prefetch installed and we're able to parallelize some of the work we actually want to do. This saves us processing time on the server, and it will also save you time on the client. That being said, the API server comes back with the data, and we re-enter the Ember application. Because we're using promises, you could always have a .then on the end of the promise chain, so it means you actually have to go back into the Ember application to resolve any potentially pending promises. This is normal Ember stuff. But at the same time, we tee off, and what we tee off with is a thing called the datalet. The datalet has all of the API request information inside of it: all of the data that you're going to need on the initial request when your Ember application boots up. And we have a little library on the client that acts kind of like a network proxy and captures all these streamed responses. When the Ember app attempts to make a request, we match it up with a streamed response, and now your Ember application makes no HTTP or Ajax requests when you initially load the app. What we see with this is that we save one to two seconds on initial render by embedding the API responses into the index.html. So I did a thing: I took the Ember FastBoot server and was able to do this type of pattern with it. It's purely a proof of concept, and I do not recommend you put it in production. But the end goal is that you'd be able to do something like ember fastboot --resource-discovery, and all that would do is run the Ember application in the data center and embed the data into the index.html. I really think this is a pattern we can build around and have shared infrastructure around.
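The client-side matching can be sketched like this (everything here is hypothetical, not LinkedIn's actual library, apart from the datalet idea itself): embedded responses are consulted first, and the network is only hit on a miss.

```javascript
// Responses embedded in index.html by the BPR, keyed by request URL.
const datalet = {
  '/api/profile/123': { name: 'Chad', team: 'Presentation Infrastructure' },
};

// Acts like a network proxy in front of the app's requests: serve from the
// datalet when we have a match, fall through to a real fetch otherwise.
function request(url, realFetch) {
  if (url in datalet) {
    return Promise.resolve(datalet[url]);   // no HTTP round trip on boot
  }
  return realFetch(url);
}
```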
We've been doing this in production now for, I think, about six months. The other problem we had was that we needed to serve the application. Luke Melia, who introduced me, gave a really great talk last year about deploying and serving applications, and I'd like to share our approach now that you have the complication of doing server-side rendering. How do you actually serve an app that way, and how do you deploy an app that needs a server component? You can't just rely on a CDN. To go into a little more detail, I'm going to show what is actually happening inside the BPR. I want to talk about this because I think it's a really good way to leverage your existing technology, be it a Python stack or a Rails stack, basically any stack that isn't Node, and still leverage things like FastBoot. We went with what is called a sidecar architecture. This was championed by Netflix; they've done a lot of work in this area, and I think they have some libraries that allow you to do this. What it allows you to do is run your main process, your existing application, side by side with the new technology that you want to use. So how this works, and I'll get into more detail, is that you can use both of these technologies at the same time. When a request comes in to the BPR, it hits this thing called the base page controller. The base page controller is like a catch-all route or a star route that is just going to talk to a library called the renderer. What the renderer does is take the request and call into another library called the process manager, which associates that request with a Node process. If you were in Felix's talk about how Electron works, it's very similar to how Electron works.
So the process manager is going to spawn a renderer process, a Node process, which has an Ember renderer inside of it. You talk to this spawned process over what is called an IPC stream, an inter-process communication stream, which allows you to send messages between the two processes. The Ember app is going to issue some requests, like Ajax requests. What's actually happening inside the Ember renderer, inside the Ember application, is that we write an initializer that swaps out the Ajax method with one that knows how to make proxied API requests to our server. It's seamless to the developer; at build time we're able to produce a build that does the swapping. So we talk between the Node process and the JVM process. The majority of the time, all the Ember app wants to do is talk to the API server, so it's going to make a bunch of API requests like I had in the last diagram, and we re-enter the Ember application to resolve any pending promises. Then finally we have to return, and so that's the life cycle of the BPR in much more detail. But as I mentioned before, I really think it's a good way to leverage existing technology: all of our networking libraries are in the Java space, but we want to use the FastBoot APIs on the Node side. And finally, deployments. As I've said, we don't actually put the index.html onto a CDN. What we do is have a build infrastructure: when you're building your Ember application and trying to commit some code, we run the Ember CLI build, and at the end you have your normal Ember CLI assets. We publish those to a CDN, but then internally we have a system to associate your client-side application with a BPR instance.
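The build-time swap can be sketched like this (hypothetical names; in the real app it's an Ember initializer doing the replacement): the app is written against one ajax interface, and each build injects the right transport.

```javascript
// The app only knows about an injected `ajax`; it never cares whether the
// request goes over the network or over the IPC stream to the JVM proxy.
function makeApp({ ajax }) {
  return {
    loadProfile: (id) => ajax(`/api/profile/${id}`),
  };
}

// Browser build: real XHR/fetch transport (stubbed here).
const browserAjax = (url) => ({ via: 'network', url });

// BPR build: the initializer swaps in a transport that forwards the request
// over the IPC stream so the JVM side can proxy it to the API server.
const proxiedAjax = (url) => ({ via: 'ipc-proxy', url });

const browserApp = makeApp({ ajax: browserAjax });
const serverApp = makeApp({ ajax: proxiedAjax });
```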
So what we do is take the assets from the Ember CLI build, gzip them up, and place them inside the BPR, and the BPR is the actual deployable unit for us. The BPR is actually a shared piece of infrastructure at LinkedIn. It's currently being ported to a library so that you can use the shared idea across all the applications at LinkedIn, and that's what we're striving for. Now I want to talk a little bit about V8, and more particularly any device that is going to run V8. We've done a lot of work in this area, but one of the largest problems we had was getting visibility into what Android phones, or anything running V8, are actually doing. While the profiler and the timeline tools are really great, they're samplers, which means they look at many different things and then give you a general picture of the world. We really wanted a high level of certainty that we were moving performance in the right direction. I work with Kris Selden, who is on the Ember core team but also works at LinkedIn, and he created this library called Chrome Tracing, which can launch Chrome with all of the V8 and JavaScript flags on. These give you very detailed numbers about what is actually going on in the JavaScript engine inside of Chrome, or basically anything running V8. You can do this by yourself, and I think this is how Kris normally works, but not everybody can parse what comes out of the V8 logs. So what he did was have it parse all the V8 logs and give you a much more readable format for things like how long you spent in functions, how long it took to get to initial render, CPU usage, all the types of metrics you really want to know when you're developing your application.
This kind of helped us identify some areas around script eval and how long it's actually going to take us to get to initial render. The Chrome team actually has a similar library called Telemetry, and while it's great and has a lot more benchmarks in it than we currently have with Chrome Tracing, it's all written in Python. What we wanted was a tool that allowed our developers, who primarily work in JavaScript, to iterate on it and add new benchmarks to Chrome Tracing, and we would also like to take community contributions toward a generic set of benchmarks. This doesn't have anything to do with Ember specifically, but it is a really great tool for benchmarking your applications. So I think in 2016 there are a lot of interesting problems. A lot of them were enumerated in the keynote this morning: how are we going to do offline? How are we going to do server-side rendering with rehydration? How do we get to 50 or 60 FPS animations on mobile devices? But I think as a community we can continue to solve these types of problems in elegant and scalable ways. LinkedIn is climbing the same mountain with all of you, and we're really excited to be part of this community. So I was going to say one more thing, and I'm really hoping that... oh yeah, the cache is on, but maybe not. We built an application using all of the infrastructure I just talked about. You can go to linkedin.com/m; sorry for the download-the-app interstitial. Thanks. You can find me on Twitter at @chadhietala. [Audience question] How do you pull the data out of the index.html? What do you have to do in your Ember app? So I don't know if you were able to see it on the slide; I don't think I can even pull it up. What there is, is that when we flush the head of the document, we have a little library called the BPR client.
When we flush the script tags down to the browser, they have a reference to that library, so they call something like BPR.addResource, and all of those responses get queued up into a big map on the client. The Ember application also has code inside of it to know, hey, I need to drain a queue that's probably going to be there when I boot the app.
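Putting that answer into a sketch (hypothetical shape: only the BPR.addResource name comes from the talk): inline script chunks enqueue responses as they stream in, and the app drains the map when it boots.

```javascript
// The tiny library flushed in the head of the document. Each streamed
// <script> chunk calls BPR.addResource(url, payload) as it arrives.
const BPR = {
  resources: new Map(),
  addResource(url, payload) {
    this.resources.set(url, payload);
  },
  // Called once by the Ember app on boot: hand over everything queued so
  // far and clear the queue.
  drain() {
    const queued = new Map(this.resources);
    this.resources.clear();
    return queued;
  },
};

// Simulating two streamed chunks arriving before the app boots:
BPR.addResource('/api/profile/123', { name: 'Chad' });
BPR.addResource('/api/feed', { items: [] });
```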