Hi, yeah, I'm Chris. I wasn't going to introduce myself much today or do an about slide, but if you were watching Natasha's wonderful talk yesterday, I'm this guy. Working with Natasha wasn't the only time I've felt like this guy lately. Preparing for this talk, incidentally, I kept coming up stuck somewhere in the middle of it. So many points to make, so many dots to connect, mazes of webs to unravel, just trying to get to a really cohesive story, to make it concise. And well, I think I've succeeded. I hope that when we get to the end of this, you agree.

I felt this talk needed to be about what has changed as much as it needed to be about why it has changed, and equally about how that change affects you. I also promised Leah I wouldn't run over into the workshops. So with those many things to cover, but also a time constraint, I reflected, and I decided: you know what I'm looking for today? I am looking for a way to comfort those of us, myself included, averse to change. This talk, I hope, presents a vision, and a challenge to those that find themselves comfortable with what has been. A dream, maybe.

These past few years, EmberData has undergone a lot of change. To those only following at a distance, it might even feel as though we've been deprecating ourselves out of existence. I'm not a dad yet, but I am practicing, and that was a bit of self-deprecating humor. The truth is that we're evolving. We dare to dream a vision of the stars. Maybe not the stars, but we dare to dream a vision in which EmberData provides superior developer ergonomics, performance, and maintainability that carries companies across decades of product development and evolution. To turn that dream into memes, I mean reality, grab your towel, don't panic, and let's see if we can get tickets to the Eras tour.

Over time, we have realized that EmberData contains a number of bad abstractions. These abstractions encourage product teams to build features that have difficulty scaling. Instead of reducing complexity, EmberData has increased it by requiring teams to work around their data framework instead of with their data framework. But as we watched, we realized that EmberData was bringing a lot of benefits to the table. The out-of-the-box experience often meant that new applications could start focusing on feature code immediately. The patterns for working with data were relatively easy to pick up. The abstraction of fetch and normalization meant that even as the APIs the application interfaced with were updated or swapped out, client code rarely needed to be rewritten. Standardized practices for fetching and mutating data meant that whether part of an app was a month old or a decade old, you still interacted with data in the same way. And finally, data normalization meant that many products could more easily craft richer experiences over top of custom, REST-like APIs. Even the quirkiest APIs could be made to fit. Many teams were deciding to stick with EmberData.

So we asked ourselves, what could we do better? We decided we should focus on being a lightweight, reactive data library for all JavaScript applications. We should focus on patterns that scale with the size of the application, the quantity of data handled, and the size of the developer team. We wanted to avoid designs that required an API to behave a certain way or use a specific format. And we wanted to avoid designs that made it difficult to just use the platform. Okay, enough with the fancy slides. Let's get into some details.
We've rebuilt EmberData over a new foundation, centered on documents and resources. It's a design that seems obvious in retrospect, but it rejects the pure ORM nature that EmberData has had for many years.

A resource is a uniquely identifiable piece of data: for instance, a row in a database or an image blob in AWS. As a general rule of thumb, resources are addressable, meaning that given their identity, you could accurately locate them. A resource is probably not, for instance, a JSON blob stored as a column value in a row in a Postgres database; the row itself would be the resource. On this slide, the JSON on the left is a document. The parts we are extracting out of it are resources.

A document is a grouping of meta information and resources. Usually a document tightly correlates to the response body of an HTTP request. In fact, in EmberData, we consider the identity of a request to match the identity of the document it produces. The JSON on the right is the same document as the previous slide, but we've extracted the resources and left only identifiers in their place. This allows us to keep the contents of the document fresh across any number of requests, while still being able to recreate the original when needed.

This is resource identity. So is this. And most importantly, so is this. In EmberData, we're used to thinking of identity as "model instances have an id." That stopped being true quite a long time ago under the hood, because not everything has an id, and not every resource uses the property id to represent its primary key. So this was a constraint that locked people into specific API formats just when we were hoping to become something flexible enough to match any format. Instead, we've moved to the idea of opaquely identifying data. In this case, the lid is a string generated on the client such that, given data that should be identified the same way every time, it will generate the same string every time. So, for instance, if your API were a REST-y API that used an entity URN as its primary key, you could implement the configuration method for identity in EmberData to tell it to use the entity URN instead of id.

This is document identity. It looks very similar. Typically, with document identity, the lid is going to be the URL used to fetch the document. There is an exception to this that we'll see in a moment. Outside of that exception, document identity is also configurable, and you use the same configuration hook if you want to configure it. Now, I've shown the default implementation here, so in theory you might not ever have to override the document portion of this identity generation. It's pretty simple: if the request has a URL, then we might have an identity. If the request was a GET request and has a URL, great, we use the URL. Otherwise, we don't actually consider it to have an identity. This ensures that we don't accidentally start caching things that we shouldn't, like your POST and PATCH mutation requests.

Document identity is configurable at the source of every request. This is the exception. So let's say you hit the common case of a query that is too large to fit into a URL string, so you have implemented the query as a POST request. Well, in this example, that's what we've done. And we've given it a cache key, even though it is a POST request, and even though the URL would not actually be exactly the information needed to calculate a stable identity. We've said: we know enough context to assign it a cache key, and so we will.
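To make that default concrete, here's a minimal standalone sketch of the document-identity rules just described. This is my own illustration, not EmberData's internal code; the real logic lives behind the identifier configuration hooks, and the RequestInfo shape here is simplified.

```ts
// A simplified sketch of the default document-identity logic described above.
// RequestInfo is an illustrative shape, not EmberData's actual type.
type RequestInfo = {
  url?: string;
  method?: string;
  cacheOptions?: { key?: string };
};

function documentCacheKey(request: RequestInfo): string | null {
  // an explicitly assigned key always wins (the POST-query exception)
  if (request.cacheOptions?.key) {
    return request.cacheOptions.key;
  }
  // by default, only GET requests with a URL are considered identifiable
  if (request.url && (request.method ?? 'GET').toUpperCase() === 'GET') {
    return request.url;
  }
  // no identity: the response is not cached as a document, which keeps
  // POST and PATCH mutations out of the document cache
  return null;
}
```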
And so we've added the key parameter to cacheOptions. This is entirely optional if you are using request, but you can do it at the point you issue any request. So identity can be amended at the point of request.

Identity shipped in 3.13 for resources. For documents, it shipped in 4.12. But why are we talking about it now? Identity is what is allowing us to redesign the library over four core competencies: requesting data, caching data, presenting data, and mutation workflows. What does that look like? It looks like the same thing, but different. The patterns will feel familiar, but the subtle differences unlock new powers. And importantly, there's an incremental migration path between the old and the new paradigms.

Let's take a look at how we request data. Before, we used methods on the store. We said store.findRecord('user', '1'); this is what we would call a findRecord request. Now, we use store.request, and instead of calling findRecord on the store, we're using a findRecord builder. All the builders do is produce fetch options. In the simplest case, all you really need to do is pass in a URL, but the builder will populate some more things for you, specific to JSON:API in this case, if that's what you want. We're shipping builders in 5.2, and that's going to include builders for ActiveRecord, REST, and JSON:API out of the box.

Before, we had query, queryRecord, and findAll. Today, these are all the same thing: they're all just a query. We no longer need to encourage the idea of "all". We feel that was one of the patterns that led to accidental complexity and led apps to build bad features over time. It starts out very convenient; that doesn't last. So from the perspective of encouraging scale, we're encouraging a primitive that is paginated by default. This doesn't mean that you need your API to be paginated. If you implement a query that just calls /users and it still returns all of your users, you are not paginated. It's the same request you would have fired off with findAll in the past, but it gives you the infrastructure to quickly switch to pagination later if that's what you want.

Request takes any valid fetch option, plus a few extra things to provide additional capabilities, but we don't really want you to have to think about constructing fetch options all the time. That's why the builders are there. It keeps the abstraction that was nice and lets you focus in on the bits that are actually important to the request you're trying to issue. For migration, adapter and serializer support is retained as a special legacy handler (we'll get into what handlers are in a moment) anywhere that you have not yet migrated to using request. This lets you refactor one request location at a time, at your convenience.

When you request data, this is what you get back. A request that you issue through the store resolves to what is called a structured document. Requests that are not issued through the store resolve to whatever the last handler in the chain returns. Structured documents are a little bit like JSON:API documents: they expose meta, links, and data. Data will either be a single record or an array of records, depending on whether the document represents a single resource or an array of resources. If the content is an array and links are present, they can be automatically used for pagination. Collection relationships are just a wrapper around this that keeps track of which pages you've fetched.
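Here's a rough sketch of those pieces together, assuming the JSON:API builders and a store in scope; the import path matches my reading of the 5.x packages, but treat the exact shapes as approximate.

```ts
// A sketch of the new request flow using the JSON:API builders.
// Types are simplified; the package READMEs document the real shapes.
import type Store from '@ember-data/store';
import { findRecord, query } from '@ember-data/json-api/request';

async function loadUsers(store: Store) {
  // before: const user = await store.findRecord('user', '1');
  // now: the builder produces fetch options, and store.request issues them
  const userDoc = await store.request(findRecord('user', '1'));
  const user = userDoc.content.data; // a single resource

  // query replaces query, queryRecord, and findAll; if links are present
  // on the response, they can drive pagination automatically
  const activeUsers = await store.request(
    query('user', { filter: { active: true }, page: { size: 25 } })
  );
  const { data, links, meta } = activeUsers.content;

  // a different filter means a different URL, a different cache key,
  // and therefore a different collection
  const inactiveUsers = await store.request(
    query('user', { filter: { active: false } })
  );
}
```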
And since any changes to the URL for filtering, sorting, or ordering result in a different cache key, and thus a different collection, this means that relationships as well are not only paginated but are also easy to view from multiple perspectives on the same page at the same time. Relationships in the past have had this awkward juggle: accessing the property triggered an automatic fetch, without the ability to provide options and with no ability to paginate; while to gain synchronous access, you needed to use a different approach altogether. In the new world, relationships only fetch when fetch is called, are paginated by default, and any desired options, such as query params for filtering and sorting, can be passed in. Even better, synchronous access to what has been loaded now goes through the same set of code paths.

Migrating between these paradigms is currently best done by migrating the specific record implementation from Model to SchemaModel, which we will talk about in a moment. So you would migrate on a per-resource-type basis. However, there is an even more granular migration path available where we go field by field. And depending on the feedback we get from early adopters, we're likely to provide helpers and decorators to assist you with that, if it turns out to be a better migration path.

Models are runtime classes that handle presentation and provide a source of schema. In the old world, these ideas were combined into one thing. Models do not have a one-to-one replacement, because we found that this pattern was the root of a large amount of the accidental complexity in applications. Instead, we've split the concern into two ideas. The first is SchemaModel, which is a proxy that consumes schema and uses it to enrich information stored in the cache for a given resource into something for presentation. The second is schemas.

Schemas themselves exist in two parts. The first is an optional DSL, shown here. Here, we're using a constrained amount of TypeScript to express multiple crucial details. The User class and its properties will be compiled into a JSON schema; that's the second part, which we'll show later. Additionally, this class will be compiled into multiple TypeScript definitions: one to express the resource in the serialized cache format, one to express it in the hydrated presentation format, and, depending on some schema decorators not shown here, one each for when editing or creating a user. This allows us to have the strictest possible TypeScript definition for every scenario and a matching runtime schema from a single source of truth.

A huge thing to note here, though, is that this is optional. SchemaModel itself consumes a JSON schema format, which can be delivered just in time, at any point, from any source, including from your API itself. What this does is allow us to offer everyone a great developer experience with strong runtime guarantees, even when your API is far from standard.

But wait, there's more. Many APIs have a shared understanding of simple derivations on resources. Sometimes these derivations end up living only in the API, other times only on the client. Often the bookkeeping to keep these fields properly in sync gets a little hairy. But because SchemaModel is just a proxy that consumes schemas, and because those schemas are a concept that can be shared between the app and the API, we're able to introduce a new capability here.
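As a rough illustration of that optional DSL idea from a moment ago, a resource definition might read something like the sketch below. To be clear, this is my own approximation of the authoring format, not shipped syntax; the import path and decorator names are hypothetical.

```ts
// A hypothetical sketch of the schema DSL: a constrained TypeScript class
// that tooling would compile into a JSON schema plus per-scenario types
// (cache format, presentation format, edit and create formats).
// The import path and decorator names here are illustrative, not shipped API.
import { attr, collection } from '@ember-data/schema-dsl'; // hypothetical

type Post = { id: string }; // stub for illustration

class User {
  @attr declare firstName: string;
  @attr declare lastName: string;
  @attr('enum', { allowed: ['admin', 'member'] }) declare role: string;
  @collection('post', { inverse: 'author' }) declare posts: Post[];
}
```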
So in the past, fullName as a derived property would have been implemented as a getter, or in days past a computed property, and would have lived on the class instance. In schema, you're allowed to express that subset of derivations that both the API and your app are aware of in a consistent manner. Of note, the schema parser has the ability to understand custom schema decorators meant for derivation. This allows applications to sugar over their most common derivations if they would like to keep the authoring syntax terser than what is shown here.

But wait, there's more. At some point, you've probably had an attribute that was a simple object or an array. And because SchemaModel is just a proxy that consumes schemas, deep tracking will be built into schemas and SchemaModel and will be expressible in the schema DSL. That means that you do not lose dirty tracking for complex attributes and arrays, and you don't need to rework your app to use ModelFragments or M3 if you have this use case. Only a tiny subset of the capabilities this unlocks are shown here. You may continue to nest objects as many levels deep as desired, and you may even have relationships, including with managed inverses, between fields on deeply nested objects.

We also intend to introduce first-class support for partial records, and I would note that this is already partially baked in if you put all of that information into the cache key for the request. If this sounds a lot like GraphQL, well, what is GraphQL if not a schema-backed, deeply nested object that requires deep tracking of dirty state, a normalized resource cache, and occasional management of a graph of pointers between resources? To be clear, the final steps to polish the library for first-class GraphQL support are not on our near-term roadmap, but we wish they were. We've specifically architected to make sure it could be an amazing first-class experience.

And finally, I keep mentioning schemas. This is a field schema. Kind can be one of attribute, derived, resource, collection, object, or array. Options are pretty much an opaque grab-bag that we will pass in to a transform. The transform is what the type refers to; in this case, we're going to be passing the options and the value to the enum transform.

There's a really critical difference here between Models today and SchemaModel. With Models today, when you define a transform on an attr, it runs between the cache and your API. That means it's a serializer and a deserializer between the format that your API gives you and the format that your data is in in the cache. These do not work that way. These transforms hydrate and dehydrate between the cache and the presentation layer, which ensures that the cache is always in the serializable API state at all times. It means that you don't have to do interesting workarounds for things like dates for deep tracking. It means that you don't have to do interesting workarounds for things like, well, this was an enum when it came from the API, and then I edited it, and I don't understand why my transform didn't run, because that's not where transforms run. But now they do; now they run at the front.
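To ground that, here's a hedged sketch of a field schema and a matching presentation-layer transform. The property names follow the slide's description (kind, type, options), but treat the exact shapes as approximate rather than the shipped format.

```ts
// A sketch of a field schema and the enum transform it names.
// Shapes are approximate; the real schema format may differ.
const roleField = {
  kind: 'attribute', // one of: attribute, derived, resource, collection, object, array
  name: 'role',
  type: 'enum', // resolves to the registered enum transform below
  options: { allowed: ['admin', 'member'] }, // opaque grab-bag handed to the transform
};

// Transforms now run between the cache and the presentation layer,
// so the cache always holds the serialized, API-ready value.
const EnumTransform = {
  hydrate(cached: string, options: { allowed: string[] }): string | null {
    // cache value -> presentation value
    return options.allowed.includes(cached) ? cached : null;
  },
  dehydrate(edited: string | null): string | null {
    // presentation value -> cache value, always in serializable form
    return edited;
  },
};
```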
As we reworked the request story, a key constraint was to ensure that developers could just use the platform. This meant that anything fetch can do, we can do, and probably better. So before, you would have defined an adapter, and I've obviously oversimplified this for anyone who's written an adapter. I think this is an old copy of my slides. Give me one second. I'm kidding. There are no adapters. Some behaviors, such as setting headers for authorization, would be handled by request handlers. But the main thing, building the URL, moves to the point where the application issues the request. So there simply isn't a need anymore.

In the old days, you would define a serializer. And again, for anyone who's ever had to implement one, this is a massive oversimplification of what a serializer does. In the new world, there are no serializers. In an ideal world, when you do need to normalize data into or out of the cache, you would use a handler. But the best approach would be to have a cache that aligns as closely as possible to the format your API is giving you. What this unlocks is a huge amount of developer productivity. It means that what you see in the network panel is what you see in the cache, and the only differences are what is in your schema for what you're seeing in the UI. The mental model, and the hoops we've jumped through to try to figure out why this property that is coming from my API is not showing up in my UI, will be a thing of the past. Longer term, support for the legacy adapter/serializer pattern is maintained via that legacy compat handler I mentioned earlier. Even at the point that EmberData removes it from the default experience, you can continue to install that handler and use adapters and serializers as you finish your migration.

A request manager is just a normal class that has an idea of how to manage a pipeline of requests. It's not specific to Ember, and you could use it for managing fetch even without using the rest of EmberData. If you wanted to use just one request manager as a service, you could do something like this. If you wanted to have a request manager and use it with a store service, you could do something like this. The default out-of-the-box experience today, if you install the ember-data meta package, is that one of these is instantiated for you, and it's not a global service.

The request manager only manages the request pipeline. It has no idea how to fulfill requests. Everything is left to the handlers, which are essentially middlewares. Handlers receive the full request context and a next function. Any handler may decide to fulfill the request or pass it along via next. And if you need to handle the response as well, just await the result of next, and you may. You might wonder, on this slide, why I copied the request off of the context into a new object and cloned the headers before mutating them and passing it along. It's because the context is immutable. This helps ensure sanity and reason as we work through these middlewares. I pulled this off of the README for the request package. If you haven't looked at the package READMEs for the request package and the store package, they are packed with information about this new pattern. I just thought this was a nice visual of how the request flow now works, both when you are using the store and when you're not.

Next, we have cache configurability. Your cache and your presentation capabilities are fully configured by you, in your application, by hooks on the store service. What you don't see today when you use the ember-data meta package is that the store service it magically exports into your app is doing this wiring for you, using all of the legacy patterns. So if you want to go all the way into the future and discard all of the past, what it means is that you drop that package, you install some packages on your own, and you do that wiring yourself.
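For a sense of what that by-hand wiring looks like, here's a sketch following the patterns in the @ember-data/request README. The handler shape and .use() call reflect that README; getAuthToken is a stand-in for whatever token lookup your app uses.

```ts
// Wiring a RequestManager by hand, per the @ember-data/request README.
import RequestManager from '@ember-data/request';
import Fetch from '@ember-data/request/fetch';

// stand-in for your app's real token lookup
const getAuthToken = (): string => 'secret-token';

const AuthHandler = {
  async request(context: any, next: any) {
    // the context is immutable, so copy the request and clone the headers
    // before mutating them and passing the request along
    const headers = new Headers(context.request.headers);
    headers.set('Authorization', `Bearer ${getAuthToken()}`);
    return next(Object.assign({}, context.request, { headers }));
  },
};

const manager = new RequestManager();
// handlers run in order; Fetch is the terminal handler that actually calls fetch
manager.use([AuthHandler, Fetch]);
```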
It's a little bit manual, and we have a solution for that. There are three main hooks here. The first is createCache. The cache is a singleton that handles all of the documents and all of the resources. So any time a request has been made or a resource has been updated, it's that cache that's going to say, well, I know, or I don't know, what to do with this. In this example, we're installing a JSON:API cache. We intend to eventually offer officially supported caches for things like GraphQL and REST as well.

instantiateRecord and teardownRecord are lifecycle hooks that tell EmberData what it should actually create to present a specific resource out of the cache to the UI: how do I take the schema and the data in the cache, and what do I give you for that? The instantiateRecord hook is just: here's an identifier for the thing in the cache you're supposed to be creating a record for, and here are any create arguments that were passed in if you called store.createRecord; please give me an instance of something. teardownRecord is similar: it just receives that instance, and if there's any teardown that you need to do, you do it then. So if you wanted to, for instance, implement this using the destroyables API in Ember, you would just make sure that when the teardownRecord hook is called, your destroyables are also invoked, if you had entangled the record with something that has a lifecycle there.

What's neat about this is that because it's just hooks, and because the default implementation is just hooks for both Model and SchemaModel, you could have both sets of hooks simultaneously. This gives us a really nice migration path. So here, what we have done is we've maintained our usage of the JSON:API cache, we've imported both the older hooks from the model package and the newer hooks from the schema-model package, and we have a map somewhere of which things have been migrated so far, on a per-resource basis, and we're just switching off of that as to which one we instantiate. Incidentally, this is part of how EmberData scales with you, because it means that later, if you decide that, hey, I don't want SchemaModel, I want my own thing, or EmberData comes up with something even better than SchemaModel, the migration path is again something that can be carefully and granularly managed instead of being an all-or-nothing rewrite.

Finally, mutation and memory management. Both mutation and memory management are managed by forking the cache. Forks allow you to request data that remains aware of the requests in their upstream parent, but whose data and memory are released once you dereference the fork. They are cheap to create, using a copy-on-write approach to any data retrieved from the parent store. Rollback and save do become inherently transactional. While you may still be able to roll back smaller sets of changes on a single resource, or save just a single resource, you can also now discard or save an entire fork. When you save, the request manager will receive the store and cache instance associated with the request, as well as a list of the changes to the record from which to construct the body of the request. A final note: records from the primary store will become immutable by default. This means that the choice of optimistic or pessimistic UI updates is now yours, instead of all edits being globally observable all the time.
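Pulling those hooks together, a store that supports both paradigms during a migration might look roughly like the sketch below. The hook names match the talk and the imports follow my reading of the 5.x packages, while MIGRATED and the SchemaModel-side helpers are illustrative, app-owned stand-ins.

```ts
// A rough sketch of a store configured with the three hooks, switching
// between legacy Model and SchemaModel on a per-resource-type basis.
import Store from '@ember-data/store';
import Cache from '@ember-data/json-api';
import {
  instantiateRecord as instantiateModelRecord,
  teardownRecord as teardownModelRecord,
} from '@ember-data/model';

// hypothetical stand-ins for the SchemaModel-side hooks; package naming
// was still settling at the time of this talk
declare function instantiateSchemaRecord(
  store: Store,
  identifier: { type: string },
  createArgs?: object
): unknown;

// app-owned map of which resource types have been migrated so far
const MIGRATED = new Set(['user', 'post']);

export default class AppStore extends Store {
  createCache(capabilities: any) {
    // the singleton cache that holds all documents and resources
    return new Cache(capabilities);
  }

  instantiateRecord(identifier: any, createArgs?: object) {
    return MIGRATED.has(identifier.type)
      ? instantiateSchemaRecord(this, identifier, createArgs)
      : instantiateModelRecord.call(this, identifier, createArgs);
  }

  teardownRecord(record: any) {
    // a full implementation would route teardown by which hook
    // created the record; legacy shown here for brevity
    teardownModelRecord.call(this, record);
  }
}
```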
So with all of this configurability, how do we ensure a seamless and quick getting-started experience? Users that don't have EmberData in their app would get started with an interactive walkthrough, with a choice to use the defaults, a preconfigured selection ideal for a specific format like GraphQL, or a curated list. The installer not only ensures the right mix of packages gets installed into your project, but it handles creating the store and request services and wiring in the proper configuration based on your selections. Users creating a new app with ember new would have these same options when using interactive mode, with the ability to skip EmberData entirely, use defaults, or use a preconfigured selection based on a flag in CI mode. What this also means is that we will remove EmberData from the default blueprint, but still leave it in the stock Ember experience.

This week, members of the various core teams got together to discuss all manner of project-related health. One item I wanted to address was stability. Could we improve stability and upgrade paths for our customers, and smooth out some of the unexpected changes, even for the customers that have been lagging really far behind? The answer was yes. There are still a few mechanical things to sort out, and you can expect an official blog post before we actually start the process, but EmberData will be breaking away from full lockstep with Ember. For now, our majors will continue to stay in sync. Our new minors, when they do happen, will release at the same time as Ember's. And when Ember publishes a new LTS, we will always publish a new LTS. But in between, we might release less often. Go slow to go fast. That way, when we've introduced a really big change, we're able to spend longer addressing any issues that are found before everyone gets left behind.