What a great conference so far. Perhaps you know me from my open source work. I've worked on a number of different projects, and perhaps you've used some of them. I've also done consulting with Tilda, and I recognized a number of our clients here, which is wonderful to see. But the beginning of my career was actually completely different. I studied Naval Architecture and Marine Engineering. The very first job I had was developing software for safely loading cargo onto ships, and also for salvaging ships that had become damaged, so that, say, holds could be flooded to right the ship without it breaking in two and sinking. As you can imagine, fault tolerance is a very important concern in naval architecture. Ships undergo extreme stress; the sea can be a brutal environment. And of course, ships must be tolerant of operator fault. This is literally a picture of me at the helm of an Exxon oil tanker. But don't worry, I don't think the engine was on. So in order to engineer fault tolerance in ships, three-dimensional models are made before ships are constructed. They undergo finite element analysis. Stresses are placed on these models based upon predictions for, say, 100-year events: 100-year storms, the worst possible conditions that a ship might encounter. And every aspect of a ship is engineered for fault tolerance. This is a schematic of an engine room. You can see there's a lot of redundancy; these are duplicate fuel oil purifiers. You might be surprised to learn that ships even carry a spare tail shaft, in case the primary one is sheared and needs to be replaced at sea. And to engineer around possible operator faults, controls have to be laid out as clearly as possible. Obviously, they require a lot of training to use, but captains and pilots need to know exactly what controls do what. And the state of the ship, of all the systems on board, has to be conveyed to that central control room on the deck.
Because ships are so big and valuable, they also need to be classified by regulatory bodies, and every aspect of a ship's construction and design is analyzed before it can be flagged and insured. I left this world behind when I had to move back east, and in the late 90s I was probably the only one leaving San Francisco to start a career in web development. The 90s were pretty fun on the web, but maybe not so serious, and engineering practices left a little bit to be desired. But we were pretty accepting, because there was more good than bad. And frankly, what we were shipping to browsers wasn't that complicated: it was markup with sprinkles of JavaScript to enhance the experience a little bit. The serious engineering and fault tolerance was expected on the server side, and with web servers and database servers, of course, the security and reliability of data was still important. So let's fast forward to today, where we have a much different environment, much more rigorous engineering practices across the full stack, and wonderful new front-end technologies like Ember.js, which really flips the model from sprinklings of JavaScript to almost a JavaScript core with sprinklings of markup. What we're building with Ember.js are complex, sophisticated, independent systems that we're launching into our users' browsers. And after the Glimmer demo, I had to update the slide to represent Ember better. Our Ember apps let users zip around at their whim within our application's domain, getting to the places they need to go as quickly as possible, with as little interference as possible. But make no mistake, these are complicated systems, and fault tolerance needs to be a primary concern for us. The environment in which we launch these applications can also be severe and stressful: we have multiple devices to support, multiple browsers, sometimes browsers with JavaScript disabled, sometimes with internet disabled, sometimes with the browser itself disabled.
And we have users with little to no training, and perhaps overly optimistic users, too. So when I think about fault tolerance, I like to think about the user experience we want to provide, so that users are shielded from any environmental stresses that our application encounters. And when I think about the primary concern of providing a fault tolerant user experience, I like to think of it as a transactional user experience, much like a database has to operate transactionally in order to be reliable. It needs to have certain characteristics in order to reliably commit data: transactions must be atomic, consistent, isolated, and durable. This is the so-called ACID test of a transaction's reliability. Atomic means that transactions must be all or nothing. If you're editing data on a complicated form that represents multiple models, all of that data needs to be either saved together or discarded together. You shouldn't, in this case, save the contact without saving the changes to the phone number, even if under the hood you're representing these two aspects of a contact with different models. A transaction should be consistent: it should move between valid states. This is a particular challenge for server-rendered apps that push partial fragments of markup to a page, and it's pretty easy for those partials to become inconsistent. This is not a problem for frameworks such as Ember, in which there's a canonical data model driving the bindings in our templates. Transactions should be isolated as well, meaning they should allow concurrent changes. Say you're editing that contact and you provide a Submit and a Cancel button: you're making a contract with the user of your application that you're editing this contact in its own context. This context should be isolated from the rest of the application, and the changes should only be submitted back to it when you press Submit, or discarded when you press Cancel.
This can be a little tricky to do. It's why a lot of editing forms in modern web apps simply have a Done button, where you're not providing that same contract of isolation, and your edits automatically flow through to the canonical models in your application. Transactions should also, of course, be durable. Users need to be confident that their changes are going to persist: when you push that Submit button, you expect that change to be saved now and forever, and you expect that when you reload your browser, the change will still be persisted. So these are pretty hard and fast rules for a fault tolerant user experience. These are rules you should not violate, or you're really violating your user's trust. Users have a very fixed mental model of a transactional user experience, and if you cross that model, they'll feel that something is just not right with your application. There's another aspect of fault tolerant user experience, and it's a kinder and gentler one: a forgiving user experience. Applications should try to provide a forgiving user experience, for the love of kittens. This is going to make people happy. One aspect of a forgiving user experience is transitional persistence: persisting data that has not yet been saved but is in the process of being edited. You might have experienced this on GitHub when you're commenting on a pull request: you've typed out a comment, you flip over to the Files Changed tab, and you think, oh no, I might have lost that comment. But you come back, and you're delighted that the comment is still there. That's great. Another aspect of a forgiving user experience is undo and redo. If you make a mistake, like deleting a few emails that you didn't mean to delete, Gmail is nice enough to provide an undo. Another nice feature that can delight your users is offline support.
There are certain types of applications which can really benefit from offline support, in particular applications where you're editing data pretty much in isolation, where your data doesn't need to be tied to a response from the server, and there's no reason to block your user from continuing to work with that data when the internet connection goes down. It's an engineering challenge, but if you can solve it, it's a real win for usability. A related feature is an asynchronous interface that's not blocked by the request-response cycle with the server, where your user can make changes as quickly as possible, and regardless of the state of your server, those changes can be queued up and synced at your app's convenience. So I've been talking about the user experience that we desire, the fault tolerant user experience. And I'm sure that as developers, you're looking at these different degrees of complexity in the user experience that I'm covering and immediately thinking about engineering those user experiences. Well, Ember is terrific at providing a consistent user experience across your app. As I've shown in the case of bindings between models and templates, everything is mapped to canonical data, and it's a no-brainer to keep your app consistent. Similarly, Ember Data provides that canonical data store and provides for consistency in your data models. And through its ability to communicate through adapters with servers of many different kinds, it provides durability. Ember Data does require a bit of work to provide atomicity and an isolated user experience. It takes extra code, a bit of customization, to, say, fork data, provide editing in a separate context, and persist that data back. Similarly, a lot of the aspects of a forgiving user experience require extra work.
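The queued-sync idea above can be sketched with a small queue that records changes immediately, so the UI never waits on the server, and drains them whenever a connection is available. This is a minimal illustration under stated assumptions, not Orbit's or Ember Data's API; `SyncQueue` and `pushToServer` are hypothetical names.

```javascript
// A hypothetical sync queue: local edits are recorded immediately and
// pushed to the server later, one at a time, whenever we're online.
class SyncQueue {
  constructor(pushToServer) {
    this.pushToServer = pushToServer; // async function supplied by the app
    this.pending = [];                // changes not yet sent
    this.online = true;
  }

  // Record a change locally without blocking on the server.
  enqueue(change) {
    this.pending.push(change);
    return this.flush();
  }

  // Drain the queue serially; stop if offline or empty.
  flush() {
    if (!this.online || this.pending.length === 0) return Promise.resolve();
    const change = this.pending.shift();
    return this.pushToServer(change).then(() => this.flush());
  }
}

// Usage: while "offline", changes accumulate; nothing is sent.
const sent = [];
const queue = new SyncQueue(change => {
  sent.push(change);        // stand-in for a real network request
  return Promise.resolve();
});
queue.online = false;
queue.enqueue({ op: 'replace', path: '/contacts/1/phone', value: '555-0000' });
// sent is still empty here; the change waits in queue.pending
```

When the connection returns, calling `flush()` syncs everything at the app's convenience, which is the non-blocking experience described above.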
I was thinking about these challenges over a year and a half ago, at a time when Ember Data was less stable and had some engineering challenges, many of which I'm glad to report have since been solved. But when I was thinking about these problems, I thought about the basic assumptions of data storage on the client, and the primitives that we use to model that data storage and its synchronization with remote sources. So I started with the basics, thinking about sources as equal but disparate, containing complex but different shapes of data. Obviously, in order to get these disparate sources to communicate with each other, they'll need common interfaces. And for those common interfaces to work well together, they'll need to normalize data: data will need to be moved between the interfaces in a normalized form that all the sources agree upon. And because I'm looking at modeling a lot of different types of data, I should be a little more specific here. I'm talking about, say, an app that uses WebSockets, commits to REST, but would also like to provide an offline experience with IndexedDB, and has a memory source which contains the canonical data that's presented to the user. So I'm trying to model all of these complex sources in a single application. In order to tie them together, there's no fixed pattern for exactly how you're going to connect different sources of different types, so you need to allow for ad hoc connections between them. And the ad hoc pattern that appealed to me most was the event subscriber pattern. Now, of course, with sources that might be local or might be remote, as Ember Data has learned, everything has to be promisified if there's a chance that it might be asynchronous. And so if these evented connections could be promise aware, then they could communicate this normalized data across common interfaces.
And it's with these primitives that I developed Orbit.js, which is not Ember specific, but a standalone JavaScript library for coordinating access to data sources and keeping their content synchronized. Orbit has a couple of primary interfaces: one is requestable and one is transformable, and any source can implement one or both of these. The requestable interface is developer-friendly; it provides find methods and CRUD methods as well. The transformable interface provides a single method, transform, which takes an operation. That operation is JSON Patch data, which is the form of normalized data that I arrived at. JSON Patch provides an operation, a path, and a value, so there are operations like add, remove, and replace. JSON Patch was developed to operate against a JSON document, so internal to each source you have a JSON document, and these patches can be applied to it. And the wonderful thing about having normalized data and an agreed-upon schema is that applying patches can also return their inverses, which can then be applied to undo a change. In order to connect the requestable interfaces with each other, or the transformable interfaces with each other, there are different connectors. And these connectors, as I've discussed, are event driven. If you take a look at synchronous event handling, it's pretty straightforward, because operations happen serially. But it gets more interesting when you get into asynchronous operations, which we could be talking about with remote sources, say a REST source or a socket. So these events are promise aware. The connectors basically translate the events between sources, and sources that want to engage with those events can return a promise. The originating event won't be resolved until all the promises involved in that transaction are resolved. That's the async blocking pattern, in which sources say, yes, I want to be involved in the resolution of this event.
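The JSON Patch operations and inverses described above can be sketched in a few lines. This is a minimal illustration of the idea, handling only add, replace, and remove against object paths, and `applyOperation` is a hypothetical helper, not Orbit's actual transform implementation.

```javascript
// Apply a JSON Patch style operation ({ op, path, value }) to a document.
// Paths follow the RFC 6902 convention, e.g. '/contacts/1/phone'.
function applyOperation(doc, op) {
  const keys = op.path.split('/').filter(Boolean);
  const last = keys.pop();
  // Walk down to the parent object of the target path.
  const parent = keys.reduce((obj, key) => obj[key], doc);
  if (op.op === 'add' || op.op === 'replace') {
    parent[last] = op.value;
  } else if (op.op === 'remove') {
    delete parent[last];
  }
  return doc;
}

// Because the schema is agreed upon, a patch's inverse simply restores
// the previous value, so applying a patch and then its inverse is a no-op.
const doc = { contacts: { '1': { name: 'Ada', phone: '555-1234' } } };
const patch   = { op: 'replace', path: '/contacts/1/phone', value: '555-9999' };
const inverse = { op: 'replace', path: '/contacts/1/phone', value: '555-1234' };

applyOperation(doc, patch);   // phone is now '555-9999'
applyOperation(doc, inverse); // phone is back to '555-1234'
```

The inverse is what makes undo cheap: a source only has to remember the operations it applied, not full snapshots of its document.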
In the async non-blocking pattern, sources might take a while to resolve, but they're saying, I don't want to hold up the other sources. So they don't return a promise, but they receive the event, and they go on and perform their action. It's with these primitives that the transformable and requestable connectors can wire together multiple sources of disparate types, and the normalized data can flow between them. And the promises keep everything in sync, whether it's asynchronous or not. So that's great from a theoretical perspective, but it's not useful without a common library of sources. So far, Orbit has a few standard ones: a memory source, a local storage source, and a JSON API source, which is currently in transition as JSON API nears 1.0, which is hopefully going to be tomorrow. And you can believe that this source will be one of the first implementations that's completely compliant. In order to normalize the data between the sources, they need to agree on a schema: models, relationships, and keys. Now, since I'm at EmberConf, when I talk about Orbit and Ember, I need to talk about EmberOrbit, which is a separate library that should feel familiar to you if you're an Ember Data user. It has a store that encapsulates an Orbit source, and that store provides both synchronous and asynchronous methods: the synchronous methods, all, filter, and retrieve, provide direct access to what's in the source, while the asynchronous methods access the requestable interfaces. Behind the scenes, you can connect multiple Orbit sources and connectors, and all the data will flow through and back to the store. The model is a representation of a particular record in the store, and the definition of the model informs the schema that's used across Orbit. Just as in Ember, where URLs drive application state, the underlying Orbit sources drive the model state in EmberOrbit.
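The blocking and non-blocking patterns can be sketched with a small promise-aware emitter: a listener that returns a promise blocks resolution of the originating event, while a listener that returns nothing runs without holding anything up. This illustrates the pattern only; the class and event names here are hypothetical stand-ins, not Orbit's evented API.

```javascript
// A minimal promise-aware event emitter. emit() resolves only after
// every promise returned by a listener has resolved (the blocking
// pattern); listeners that return nothing are non-blocking.
class PromiseAwareEmitter {
  constructor() { this.listeners = {}; }

  on(event, listener) {
    (this.listeners[event] = this.listeners[event] || []).push(listener);
  }

  emit(event, data) {
    const results = (this.listeners[event] || []).map(l => l(data));
    // Wait only on thenables; plain return values don't block.
    return Promise.all(results.filter(r => r && typeof r.then === 'function'));
  }
}

const emitter = new PromiseAwareEmitter();
const applied = [];

// Blocking participant, e.g. a remote source: returns a promise,
// so the originating event waits for it.
emitter.on('didTransform', op => new Promise(resolve => {
  setTimeout(() => { applied.push('remote:' + op.op); resolve(); }, 10);
}));

// Non-blocking participant, e.g. a logger: does its work, returns nothing.
emitter.on('didTransform', op => { applied.push('local:' + op.op); });

emitter.emit('didTransform', { op: 'add', path: '/contacts/2' })
  .then(() => { /* both listeners have finished at this point */ });
```

A connector built on this can treat a memory source and a REST source uniformly: the promises keep everything in sync whether the work is synchronous or not, which is exactly the property described above.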
So if a socket source is connected to your canonical EmberOrbit store, then the changes that are applied in Orbit will flow through to your models, and your models will automatically reflect those changes. So what are the application patterns that you can use with Orbit, and with EmberOrbit specifically? Well, you can start by developing your applications client-first, with no concern for other connectors; you can just work with an EmberOrbit store. As your application develops, you can add pluggable sources. You could connect, say, a local storage source and an IndexedDB source to provide different capacities for browser-based storage. You could synchronize data between a socket source and a REST source, and you could set up different connectors, bidirectional or unidirectional, between them. And you can provide editing isolation by simply forking a store, providing edits of any complexity, like a form that's driven by a wizard with multiple pages. Those edits are done in complete isolation and either applied or discarded, and then that flows back to the canonical store. And last but not least, because all of the changes are deterministic JSON Patch changes, and every source returns the inverse of the changes that are applied to it, those changes can be tracked deterministically, and they can be undone or redone across multiple levels. If you're interested in Orbit, it's getting pretty close to stability. I've been working really hard the last few weeks. I'd hoped to get orbitjs.com up, but the JSON API work has delayed a few things. I'm looking to get that up in the next month or so, and I feel like it's getting close to the point where I want to put together the docs and guides to make it a lot more developer friendly. So if you're interested in following the Orbit story, please check out Orbit.js on Twitter or IRC. And even if you're not particularly interested in Orbit, please keep in mind fault tolerance for all of your Ember applications.
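The fork-and-merge editing isolation described above can be sketched with a plain object store: fork deep-copies the canonical data into an isolated context, edits accumulate there as operations, and merge replays them back on submit (or the fork is simply discarded on cancel). `Store`, `fork`, and `merge` are hypothetical helpers for illustration, not EmberOrbit's real API.

```javascript
// A toy store that records every operation applied to it, so a fork's
// edits can later be replayed onto the canonical store.
class Store {
  constructor(data = {}) {
    this.data = data;
    this.ops = []; // operations applied to this store, in order
  }

  // Apply a JSON Patch style operation and remember it.
  apply(op) {
    const keys = op.path.split('/').filter(Boolean);
    const last = keys.pop();
    const parent = keys.reduce((obj, key) => obj[key], this.data);
    if (op.op === 'remove') delete parent[last];
    else parent[last] = op.value; // add / replace
    this.ops.push(op);
  }

  // Fork: deep-copy the data into an isolated editing context.
  fork() { return new Store(JSON.parse(JSON.stringify(this.data))); }

  // Merge: replay a fork's accumulated operations onto this store.
  merge(forked) { forked.ops.forEach(op => this.apply(op)); }
}

const canonical = new Store({ contacts: { '1': { name: 'Ada' } } });
const editing = canonical.fork();

// Edits happen in isolation; the canonical data is untouched...
editing.apply({ op: 'replace', path: '/contacts/1/name', value: 'Grace' });

// ...until the user presses Submit, when the edits flow back.
canonical.merge(editing);
```

Pressing Cancel instead is just dropping the fork, which is what makes the atomic, isolated contract from the ACID discussion cheap to honor.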
Thank you very much.