First of all, I'd like to thank everyone for coming. I know I'm a backup and you weren't expecting this session, so it's pretty exciting that we actually have people here. Very much appreciated. Today we're going to be talking about a concept called remote entities: standardizing external integrations in Drupal. I'll start off with a little bit about me. I'm a Drupal architect for the Autodesk consumer group. I've been a full-time Drupal developer for over five years, and I've done other random web stuff for however long. You can find me on Drupal.org at steel-track. My contrib modules are Advanced Page Expiration and Replicate. I also want to talk about Autodesk real quick. It's one of those companies where people have never really heard the name, but they know the products: AutoCAD, Inventor, Maya. I'm sure a lot of you have heard of these. I also want to thank them for allowing me and my co-worker to come out to DrupalCon Barcelona to make this possible. So we're building a CMS platform on Drupal, and we were posed a really interesting problem. We needed to integrate with a REST API service that contains tens of millions of data assets. Any of those assets can be updated at any second, and we have millions of users. So one of the things you have to start thinking about when you need to integrate data into Drupal is what kind of solution to use. The criteria we came up with were based on the consistency of the data, that's whether you're dealing with dirty data, bad character encoding, things like that, and also how similar your data objects are to each other across what you're importing.
Another consideration is the size of the data set. Importing 100 items is trivial, but importing 100 million items is not trivial any longer, especially when you factor in entity saving, like nodes and field collections. Raise a hand, who here has done a migration and been really hurt by entity saving, field collections especially? It's absolutely brutal. I once worked on a project where we were importing five million books daily from a flat file, and it was actually taking more than 24 hours to do the import, but we were contractually obligated to update it every night at midnight. The other item you have to think about is your maximum tolerance for stale data. For some use cases, for example the book data I just mentioned, an author's name on a book is probably not going to change in the next five minutes. However, with our use case, somebody could update an asset in the REST API literally every two seconds. So the common external data integration approaches for Drupal are some sort of data migration process, some sort of random PHP solution that you just kind of bolt on, and some sort of client-side solution. The data migration approach is almost always going to be based on the Migrate module. Some examples here: you have a server-side migration at a standard interval, which could be, for example, the Feeds module running every night at midnight. Another example is a server-side migration at a non-standard interval, say a content admin uploads a spreadsheet at any arbitrary point in time and that triggers a migration. The pros to this are that it's kind of easier to debug edge cases because you're usually getting a complete data set in one run. In addition, it performs well once it's done. During the migration, things can get a little dodgy, but once it's done, the data's in Drupal and you're good to go, assuming you know how to write good SQL queries.
And then the other thing is it's native in Drupal once it's done. You're probably going to have it stored in a couple of different content types, have some entity referencing going on, pretty straightforward. The big cons with it, though, are that you're going to have stale data, especially when you get into huge data sets. These kinds of monolithic data updates can take a very long time. In addition, monolithic data updates are extremely risky. If you have a data set of 10 million items and you don't catch a bug, you risk having to not only do a new migration, but also go in and fix some amount of ruined data. An extremely dangerous operation. The other issue is that it can take a very long time, like I mentioned. There's also the non-Drupal PHP approach. This is where a vendor maybe gives you some sort of PHP library, and then you just kind of stitch it into Drupal wherever it makes sense. You end up creating a bunch of theme functions, maybe you end up doing some sort of custom Views query backend, something like that. And the worst-case scenario, and I'm sure no one has ever seen this, is where someone simply includes a PHP script that outputs random HTML in a template. One of the pros here is that we have no stale data. The other thing that's great about it is that with the Symfony integration in Drupal 8 and all these interoperability standards, it's a little less scary to just use whatever random PHP library you want to. The big cons, though: it can require a lot of glue work, especially depending on how tightly you want to integrate it into Drupal. Theme functions, all the Form API you have to build out, it can become very, very expensive. Performance is also going to be entirely dependent on your external data source.
The third approach, and this is kind of the popular one right now with the whole headless Drupal thing, is a client-side approach. At its most simple, you're talking about some sort of jQuery AJAX solution: an AJAX call happens and just fills in some stuff on the page. Or you have the embedded AngularJS widget thing, where you invent all these little AngularJS apps and then just kind of embed them wherever it makes sense. Or, worst-case scenario, Drupal does the header and footer and your whole body is nothing but an Angular app that pulls from an external service. The pros of this, of course, are there's no stale data and it's extremely performant, right? You have the client side performing the HTML rendering, there are no blocking requests, it's all asynchronous. The big cons are that you require two theme systems. Not only do you have to manage Drupal templates, but you're also going to have to manage some sort of client-side templating. In addition, you've now created a separate layer that lives on top of Drupal and is not actually tightly integrated with Drupal. The page can also be entirely empty on load, depending on what you're actually up to. In addition, and for some teams this matters, for some it doesn't, this can require a very wide skill set. You need someone who can really knock out some Angular, some JavaScript, can do some PHP, can maybe do some Twig. There aren't many people who have that big a tool set at their disposal. So to sum up: data migration won't work because the data's too stale, non-Drupal PHP would probably be too expensive to implement, and client-side solutions aren't really going to tightly integrate with Drupal. So we thought about this and we said to ourselves: what if we could have native Drupal integration and no stale data? Do entities have to live locally? We did some research and we came across a module called the remote entity API. Has anyone heard of this module?
A few people? Anybody actually implementing it yet? Yeah, it's crazy experimental, and there's like nothing on the web about how to do it. So we took a look at it, and the remote entity API has its own custom entity controller. What it does is give you the ability to create some arbitrary connection to some external service, and then it allows you to decide what properties from that external service you want to save locally in some sort of local storage. The pros for this are that it's tightly integrated with Drupal: you get theming, display modes, Field API, Views, the whole works. The big cons are that, once again, we're saving data locally, so we're going to run into issues of stale data. Performance is going to be entirely dependent on your external data source. And it's just crazy experimental at the moment. But we weren't scared, so we said, you know what, let's run with this. We decided our remote entity solution would have to be based on the remote entity API, so at least we had a starting point. No local copies of data whatsoever. Key contrib integrations, including Views. And we had to have huge caching strategy potential, because if we don't, the whole thing is going to be a bust: we're essentially going to be using a server to make calls that are, most of the time, going to be blocking. The list of steps to implement is long, kind of on purpose. Create a custom entity controller. Define how to connect to the external service. Figure out how to query the external service. Define your custom entities. Define all the entity properties. Refine the Drupal integration points. And then actually implement a caching strategy. It's just that easy. Actually, it really sucks. But this is what you get. So what we have here is just a simple landing page. We use Paragraphs to build out our pages; if you haven't heard of it, I highly recommend you check it out.
We've already integrated our entities just using properties. We're going to create a view here that's going to pull in a gallery of penguins. The thing to keep in mind here is that we have not written a custom query backend. Usually when you see this, there's a custom views query backend. You're getting all of this because you have defined an entity that loads from a remote service. So you can see here, because it's an entity, I actually have the ability for view modes, full content, teaser, embedded. You can then see that pagination will work by setting a limit and offset on the external service. You can go ahead and enable all the native stuff, like use Ajax. So if you want Ajax pagination. And then if you take a look here, all these filters that are provided to us are actually because they're mapped on our entity properties. So I just chose keyword, which is going to be an arbitrary search and I put in the value of penguin. I am then also, because I'm going to assume this page is for children, I'm going to choose my include mature and set it to false. So I get no mature assets. I am then also going to sort from newest to oldest. So I'm going to sort descending. Once again, I have a property defined in my entity that is then mapped to a sorting handler, automatically in views. So now I'm going to go ahead and add this view to the page. I didn't show you the preview on there because we don't have it themed properly for the admin theme. I'm going to go ahead. I created a block. So I'm just going to go ahead and use a block reference field. For those of you who don't know, it just references a block. And there we go. We have our penguin gallery block. We're going to go ahead and hit save. And there we have it. We have a fully themed view because we themed all of the displays, all of the standard entity dash display theme suggestions work out of the box just as you would expect. Pagination also works no problem. 
Now what I'm going to show you is there's a module called entity reference view widget, which allows you to use a view to decide what you want to pull into an entity reference field. Because we have defined an entity reference or an entity ID, the entity reference field will actually work correctly because we will be storing a remote ID in the system. So we're going to go ahead and click add items. And what you're going to see here, this table that pops up, is actually going to be a view. But we've decided to use fields instead of a rendered entity. So I go ahead and search penguin. That looks cute. Kind of. That looks fun, I guess. And that's kind of cool. So what we have here is explicit selection, not just implicit selection. I can also drag and drop just like you would selecting nodes or users with an entity reference field. Going to go ahead and hit Save and check it out. All of this is pulled from an external data source. Nothing is stored in Drupal except the parent node that this lives on. So I think this is one of the coolest things I've seen in a long time. I think it could really revolutionize how we think about data integration in Drupal. If we go back and think about Dries' keynote where he said, headless Drupal poses a risk because you essentially chop off all of the page building tools and other great things that we've come to depend on and use in Drupal. When you go to something like a big pipe strategy where you can create placeholders for any slow queries, such as external queries, then suddenly you don't have a data migration to worry about and then you can solve all your performance problems using some sort of big pipe solution available in Drupal 8. So just to kind of get, we're going to get kind of nerdy here. I hope that's okay with everyone. You know, if you get a little lost, I'm sorry about that. It's kind of a complex solution. But I'll just start really quickly. Entities, does everyone know what an entity is? I'm guessing maybe. 
They're a standard way to handle CRUD operations. So nodes, users, taxonomy terms, field collections, beans, those are all entities. And what's important about them is that they all share a PHP interface, are fieldable, allow for display modes, and can use entity field queries. That isn't 100% accurate, but for the sake of this, let's just say it is. When you define a custom entity, you're going to use hook_entity_info(), and the example here uses a node. You're going to set things like, well, everyone here who's done a custom module has seen this huge structure of arrays a million times. We label it as a node. We say what controller it's going to use, which lets it know how to perform its CRUD operations. So if you look at the node controller, you're going to see things like being aware of published status and timestamps, things like that. And then you'll see that it actually defines a base table, and the base table is node. When you define a custom remote entity, you can do something kind of bizarre: you can actually set the base table to NULL. Mind blown, right? The idea is that your controller is going to perform its CRUD operations on an external service, so you have no need for a base table. This is the major difference from the remote entity API's default implementation, because that has some sort of local storage, which is more like a caching mechanism, to be honest. You can also see here we define our view modes, full, teaser, and embedded, and we define our controller. We can define whether it's allowed to be statically cached or entity cached. You have to be a little careful with entity caching because you don't necessarily know if the asset has been edited from a different application, like an iPad app. You can solve that, especially if the service can notify you of changes.
And then every single endpoint we have, so if we have a user endpoint or an asset endpoint, actually becomes a custom entity. First we have all that shared standard info that I just showed you, and then we define all of the things specific to this individual asset. You can see here at entity keys, I have ID equals asset ID, and at remote entity keys, remote ID equals asset ID. This is essentially how you're able to map an external ID to one recognized by Drupal. One other thing I'll note: if you look, I've added a property here of parent. If you look in the spec for hook_entity_info(), you will not see this. But what's cool is that it will automatically save any custom properties that you add. This gives you the ability, based on your use case, to add additional info unique to your data, which means that rather than having to code in edge case by edge case by edge case, you can just set flags in each of your assets, such as whether it can be deleted, and then your CRUD controller can recognize that flag. So we set a parent so that we're able to understand that all these entities, while they're separately defined, actually share some sort of common information, whatever that might be. Here you can see we set a remote delete key and we have remote delete equals true. That's not true for all of our endpoints. But one of the most important things is we can actually define a remote base table. This is the endpoint where you get your data from. And you can see here, I have a token, {id}. Keep that in mind, we're gonna talk about that later. So some tips for defining a remote entity. You're gonna declare the IDs from your external service here, and you're gonna define the endpoint that points to your entity data, the specific objects that you're working with. Also, it's really important, if you can, to architect your solution to use one controller for your entire external service.
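To make the shape of this concrete, here's a minimal sketch of what such a hook_entity_info() implementation might look like. The entity type name, controller class name, endpoint path, and parent key are all hypothetical placeholders, not the actual implementation from the talk:

```php
/**
 * Implements hook_entity_info().
 */
function myservice_entity_info() {
  $info['myservice_asset'] = array(
    'label' => t('My Service Asset'),
    // CRUD happens on the external service, so no local base table.
    'base table' => NULL,
    'controller class' => 'MyServiceRemoteEntityController',
    'fieldable' => TRUE,
    'entity keys' => array(
      // Drupal's internal ID maps straight onto the remote ID.
      'id' => 'asset_id',
    ),
    'remote entity keys' => array(
      'remote id' => 'asset_id',
    ),
    // Endpoint for this entity's data; {id} is replaced at query time.
    'remote base table' => 'asset/{id}',
    // Custom flags like these are saved automatically and can be read
    // by a shared controller to handle per-endpoint edge cases.
    'remote delete' => TRUE,
    'parent' => 'myservice',
    'view modes' => array(
      'full' => array('label' => t('Full'), 'custom settings' => FALSE),
      'teaser' => array('label' => t('Teaser'), 'custom settings' => FALSE),
    ),
  );
  return $info;
}
```

The custom keys ('remote delete', 'parent') are not part of the hook_entity_info() spec; they simply ride along in the info array and get read back by the shared controller.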
Because let's be honest, it doesn't matter how many endpoints a REST API has, it's probably gonna perform its PUT and POST and GET operations the exact same way across all of them. Finally, you can use custom properties so that edge cases can be handled in a centralized way. The next thing you're gonna need is a controller class. The controller class is basically the class that handles CRUD operations. This is a lot of code, but you really don't need to care about most of it, because in this save method we're not actually saving anything locally. All we do is make sure any fields we might have attached work, because you can use Field API on remote entities, and then we call a remote save method. The remote save method in turn has whatever logic you need to make a connection and perform an operation. Similarly, we have a delete method. Once again, you can see here's that property where we actually check whether remote delete is even allowed. In the case that it is, we then call our remote delete method and allow it to happen. Here's what's amazing: entity_load() and entity_save() can actually work on remote data. You can see in this example that I just pass my entity type and an ID and it'll automatically do the work for me. Similarly, I can take an already-formed object, pass my type, and perform an entity save on it. Similar to node_save() and node_load(), you could also have some sort of wrapper function that allows you to add any extra business logic you might need. This speaks to what I think really makes this powerful. I mean, the Views stuff is cool and sexy, but the real power in this is actually the developer experience, because rather than having to remember some arbitrary methods from a class you have in a library, you can begin to use the exact same Drupal knowledge that you've already been leveraging for nodes, users, you name it.
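The controller idea above can be sketched roughly like this. The class and method names are illustrative (the base class name is my assumption about the remote entity API's controller); the remote_save()/remote_delete() bodies would hold your service-specific logic:

```php
/**
 * Sketch of a remote entity controller: no local storage, all CRUD
 * proxied to the external service.
 */
class MyServiceRemoteEntityController extends RemoteEntityAPIDefaultController {

  public function save($entity, DatabaseTransaction $transaction = NULL) {
    // Make sure any attached Field API fields still work, but skip
    // any local base-table write.
    field_attach_presave($this->entityType, $entity);
    // Hand the entity off to the external service.
    return $this->remote_save($entity, NULL);
  }

  public function delete($ids, DatabaseTransaction $transaction = NULL) {
    $info = entity_get_info($this->entityType);
    // Honor the custom 'remote delete' flag from hook_entity_info().
    if (empty($info['remote delete'])) {
      return;
    }
    foreach ($ids as $id) {
      $this->remote_delete($id);
    }
  }
}

// The familiar entity API then works against remote data:
$assets = entity_load('myservice_asset', array(12));
$asset = reset($assets);
$asset->asset_name = 'Penguin photo';
entity_save('myservice_asset', $asset);
```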
You can also use entity_view(). You can pass it what display you want, and once again, displays will work just as expected. The next thing you're gonna create is the connection class. This class is actually fairly straightforward, but this is the one where you figure out: how do I pass my credentials? How do I actually pass a request to the service? Here you can see the remote entity API module uses the Clients module, and there's a hook that allows you to do things like say what your endpoint is and what kind of configuration you have. It's really straightforward; anybody who's done an integration before, this is easy. You can then see there's another hook that's required, and this actually maps which connection you're going to use for each entity. This is where our get-children-entities approach is really handy again, because rather than having to have a giant array of everything, we can just say all of these entities actually connect in the exact same way because they're all children of one parent. And then the connection class has some methods in it, like remote entity load and remote entity delete, and really all these do is proxy off to some kind of query class that we're gonna look at in a minute. The one thing I do wanna mention, though, is you have the ability to differentiate between single and multiple loads. So if your REST API has the ability to accept multiple asset IDs at one time rather than having to make individual requests, you can actually code that in to get a significant performance increase. I'm not even gonna show you our make-request method. First, because I'm a little embarrassed, it needs some refactoring. And second, it's so specific. I mean, we're using Guzzle, we're passing all sorts of headers, we have sessions. It's just gonna be unique to whatever your use case is. So then we come to what I think is the magic class, and this is the query class.
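A rough sketch of that connection layer, including the single-versus-multiple load distinction, might look like the following. All class and method names here are illustrative, not the actual Clients or remote entity API signatures:

```php
/**
 * Sketch of a connection class: credentials and transport live here,
 * while load/delete proxy off to the query class.
 */
class MyServiceConnection {

  public function remote_entity_load($entity_type, $id) {
    $query = new MyServiceQuery($this, $entity_type);
    $query->entityCondition('entity_id', $id);
    return $query->execute();
  }

  public function remote_entity_load_multiple($entity_type, array $ids) {
    $query = new MyServiceQuery($this, $entity_type);
    // If the REST API accepts many IDs per request, batch them into
    // one call instead of N separate calls.
    $query->entityCondition('entity_id', $ids, 'IN');
    return $query->execute();
  }

  public function makeRequest($path, $method, array $options) {
    // Guzzle client, auth headers, sessions... entirely specific to
    // your external service, so intentionally left as a stub.
  }
}
```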
The query class is going to define how your data is prepared and then sent to the external service. Basically, it's gonna convert all Drupal queries into external API queries. So here, similar to, who here, show of hands, is familiar with EFQ, EntityFieldQuery? Bunch of people. EntityFieldQuery has a lot of methods, like propertyCondition(), and basically you just pass it an operator like equals, a value, and whatever you're trying to mess with. We do the exact same thing here, so there's really nothing too exciting about it. However, you can see here, for example, if one of our properties is prefixed with a special key of query, that lets us know it's something that needs to be handled in a special way by our REST API. This gives you a lot of flexibility, because you're gonna be able to start to mold your queries in the way that you need. And then there's gonna be the execute method. Our execute method is actually a couple hundred lines of code, so this part's a little abbreviated, but you can see this is executing a select. We actually pass a method of GET, we pass it a query string we formatted, and then we can set any sort of options. Then we go to that make-request method I mentioned earlier from the connection, parse the response, and finally we can return our entities. Do you guys remember I mentioned the {id} token in the remote base table? One of the things that our select execute method does is check whether the path of the entity you're querying has an ID in it, and then it will perform a replacement of that ID. That's how your entity load actually works: when you run entity_load(), it's basically gonna say, oh, I'm gonna perform a select query at asset/12, and when you get to your query class, it's gonna know, oh, now I convert this into a URL string where I'm actually gonna go to asset/12.
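Here's a condensed sketch of that query class idea: EFQ-style conditions are collected, and execute() turns them into a REST request, including the {id} token replacement. Everything here is hypothetical and heavily simplified compared to the real couple-hundred-line method:

```php
/**
 * Sketch of a query class that converts Drupal-style conditions into
 * an external REST request.
 */
class MyServiceQuery {
  protected $connection;
  protected $conditions = array();
  protected $remoteBaseTable = 'asset/{id}';

  public function __construct($connection, $entity_type) {
    $this->connection = $connection;
  }

  public function entityCondition($name, $value, $operator = '=') {
    $this->conditions[] = array(
      'property' => $name,
      'value' => $value,
      'operator' => $operator,
    );
    return $this;
  }

  public function execute() {
    $path = $this->remoteBaseTable;
    foreach ($this->conditions as $condition) {
      // An entity_load(..., array(12)) arrives here as an ID
      // condition, so 'asset/{id}' becomes 'asset/12'.
      if ($condition['property'] == 'entity_id') {
        $path = str_replace('{id}', $condition['value'], $path);
      }
    }
    $response = $this->connection->makeRequest($path, 'GET', array());
    // Parse the response body into entity objects (omitted).
    return $this->parseResponse($response);
  }
}
```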
Here you can see we're iterating over all of our property conditions and adding them to some sort of query array, and then we break down that array using drupal_http_build_query() to produce a query string. So we're actually converting an object into some sort of arbitrary query string based on whatever business logic we need. In layman's terms, and this is not really syntactically correct, but hopefully it gets the idea across: you're gonna turn an object into a REST API request. You're simply gonna perform some sort of magic conversion. And when we perform PUT and POST, when you perform an entity save, you pass it an entity object, and we actually have a recursive function that will take the entity and flatten the entire thing, all the properties, into one query string. We then pass that query string off to our make-request method in our connection class and get some sort of result back. This is all based on the idea that you need to declare the properties you're getting from your external service in Drupal, so it has some sort of understanding of how to handle them. Entity API has had this built in for a while, and you can see that they've actually backported this functionality to the node module and the user module. So here's an example of some of ours. We have asset name, which you can see is of type text. And just like we could with hook_entity_info(), we can pass custom properties that you will not find in the spec for Entity API, but that will be saved correctly and that you can use in your business logic. So for example, asset name is writable and it's also renderable, meaning that it can be shown to the end user. But if we take a look at version, version is not renderable to the end user, and it is of type decimal.
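The conditions-to-query-string conversion described above can be sketched in a few lines. The parameter names and the search path are illustrative; drupal_http_build_query() is the real Drupal 7 function:

```php
// Sketch: convert collected property conditions into a REST query
// string. $conditions and $connection are assumed to come from the
// query and connection classes above.
$query = array();
foreach ($conditions as $condition) {
  // Properties prefixed with a special 'query' key would get special
  // handling; everything else maps straight onto a request parameter.
  $query[$condition['property']] = $condition['value'];
}
// Produces e.g. keyword=penguin&include_mature=false&sort=desc
$query_string = drupal_http_build_query($query);
$result = $connection->makeRequest('search?' . $query_string, 'GET', array());
```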
You can actually start getting really fancy. If you take a look at date submitted, you can see it's of type date, and we're actually getting it back in some sort of ISO format. We have the ability to add what's called a getter callback that will perform a conversion from an ISO timestamp to a UNIX timestamp. Who here is familiar with entity metadata wrappers? These work too. So if I call asset ID value, I'm gonna get an integer back. If I call asset name value, it's gonna automatically sanitize the string. If I call formatted date, all I have to do is get the value and the getter callback will be run automatically. Once again, you're getting a much improved developer experience, where you're not having to learn a separate library; you're in fact using all the tools you already know. As far as our Views integration, there's a hook, hook_views_data_alter(), that allows you to play with how Views uses your schema. In this case, our schema is actually the entity properties we've defined. Here are just a couple of examples where we're using that renderable property, for example, to define whether Views is allowed to show a property in its field select list. In addition, we have filterable. Some of our properties, such as thumbnail URL, are things you can see, but you cannot filter by thumbnail URL, so you don't want that property to show up in that list in Views. Entity API will automatically throw any property you define into Views, so just using this hook, we're able to improve the experience for any of our content editors who are using Views by selectively choosing which properties are valid for which operation in Views. So to sum up: in hook_entity_info(), custom properties can be used to provide information for your implementation. This is the key to integrations with Views, displays, Entity API features such as metadata wrappers, and whatever other modules you're going to need to integrate.
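As a sketch, the property declarations and wrapper behavior described above might look like this. The property names mirror the talk's examples; the custom 'renderable' and 'filterable' keys and the getter callback name are our own additions, not part of the Entity API spec:

```php
/**
 * Implements hook_entity_property_info() (sketch).
 */
function myservice_entity_property_info() {
  $info['myservice_asset']['properties'] = array(
    'asset_name' => array(
      'label' => t('Asset name'),
      'type' => 'text',
      'setter callback' => 'entity_property_verbatim_set',
      // Custom flag: may be shown to the end user.
      'renderable' => TRUE,
    ),
    'version' => array(
      'label' => t('Version'),
      'type' => 'decimal',
      'renderable' => FALSE,
    ),
    'date_submitted' => array(
      'label' => t('Date submitted'),
      'type' => 'date',
      // Converts the service's ISO 8601 string to a UNIX timestamp.
      'getter callback' => 'myservice_iso_to_timestamp',
      'renderable' => TRUE,
    ),
  );
  return $info;
}

// Entity metadata wrappers then work as usual:
$wrapper = entity_metadata_wrapper('myservice_asset', $asset);
$id = $wrapper->asset_id->value();                // integer
$name = $wrapper->asset_name->value(array('sanitize' => TRUE));
$timestamp = $wrapper->date_submitted->value();   // getter callback runs
```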
As far as Drupal module integration, it's tough out there. You're gonna be patching, I'm not gonna lie. The reason is that in Drupal 7, most modules are gonna check whether a base table is set and then pull in that information, because they assume some sort of local query. For Views, the EntityFieldQuery Views module, which was originally created to work with a MongoDB backend, actually provides a solution for integration. If you take a look at this code, you can see what they've done here is say: if the remote entity module exists, and if the entity I'm trying to use in Views has a remote entity API controller, then go ahead and allow it to work. Because then, in turn, what it's going to do is find your query class, which converts the query that Views creates into a REST API query on your external service. And there's a magic method here I didn't show you from our query class that this module depends on, build from EFQ, and it literally takes an EntityFieldQuery object and converts it into a query object that your class can successfully consume. Similarly with the entity reference view widget I showed you earlier: I updated it a couple weeks ago and suddenly I didn't get checkboxes anymore, and I'm like, what is going on? I took a look, and they had completely refactored how they build their entity information. You can see right here, they say for each entity, check if it's got a base table. Well, immediately there, we're out of luck. So what we had to patch in was the exact same kind of line, where we check whether remote entity is around and then check whether the entity is actually implementing the remote entity module. So I know for a lot of you, probably the number one question you have is performance, right?
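The patching pattern described above boils down to one guard clause. This is a sketch of the idea, not the exact patch code; the controller class name is my assumption:

```php
// Sketch of the typical patch: instead of skipping entities with no
// base table, also accept entities backed by the remote entity API.
foreach (entity_get_info() as $entity_type => $info) {
  $supported = !empty($info['base table']);

  // Patched-in escape hatch for remote entities.
  if (!$supported && module_exists('remote_entity')) {
    $supported = is_subclass_of(
      $info['controller class'],
      'RemoteEntityAPIDefaultController'
    );
  }

  if (!$supported) {
    continue;
  }
  // ... build the module's entity information as usual ...
}
```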
It's the number one reason I don't like this solution: you're essentially dealing with blocking requests if you're making server-side external calls, and that's a potentially dangerous situation, I won't lie. But the way you have to look at it is that it's just like a bad database server, right? Slow queries, a poor network, inadequate sizing of your database server or your REST API throughput, it's all the exact same problems you're already dealing with. They're just gonna be exacerbated. So some thoughts I had on this. Static caching is critical. You don't wanna make the same request twice in one process, right? That would just kill you. Entity caching, like I mentioned earlier, is entirely possible. The only problem is, if you're the only client of an external service, then you may not have to worry about this so much, but in our case, we have literally dozens of clients of our external REST API service. So we can't assume that if someone saves an asset on one of our Drupal websites, it's actually an appropriate time to invalidate that cache, because somebody else could be in some sort of iPad app updating the thing every two seconds. However, our external service can actually push to us any information about what's happening: IDs of assets, what operations were performed on them, operations on users. We can then consume that information and correctly invalidate our caches. Where this starts getting really cool, I don't know if any of you have been to the Making Drupal Fly or CDN sessions this week, but they've been talking a lot about this idea of render cache and BigPipe, and a lot of it is possible because of cache tags, for example. So if we get a search result from our external API and we get 10 assets back, we can actually cache that result and tag it with the 10 asset IDs that were contained in it.
Then if we find out that one of those IDs has been updated, we can invalidate that search result, along with any other cached items that are tagged with the same ID. Similarly, placeholders could be invaluable for this. You're gonna have some stuff that's maybe not that big of a deal. For example, with us, search queries on our external service are cached and come back extremely quickly from the service. However, if we need to do something like load all of the children that a parent moderates, that can be an extremely expensive call for their service, which in turn makes an extremely expensive call for our service. Using something like placeholders, you'd be able to offload that to the front end so that you have a non-blocking request. One other thing I wanna mention is that we're still using our Drupal user and session, even though we also have a user and session with the external service. The reason for that is we felt that calling out for the user on every single page load would create a baseline overhead that was unacceptable. So instead, we rely on our Drupal user, map an ID in the user table to the external service, and try to only make that call whenever it's necessary for the page, like a user profile. Ideas for improvement. Documentation: oh man, there is like no documentation on this out there. And if you look at the issue queues, they are just very depressing. You'd have people who are like, I tried it and got nowhere, can you help me? And then: no. There is a really great blog article that I'll share the link to in a bit, and I think that's certainly a really great place to start with this. Second, we really need to standardize the hacks. I'll be totally honest: adding custom entity properties just because they happen to save in the database is not really a great way to be handling our data model.
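The cache-tag idea described here can be sketched in Drupal 8 terms, since that's where cache tags are a core API (a Drupal 7 site would need a contrib equivalent). The service name, cache ID, and tag format are all hypothetical:

```php
use Drupal\Core\Cache\Cache;
use Drupal\Core\Cache\CacheBackendInterface;

// Sketch: cache an external search result and tag it with the IDs of
// the assets it contains.
$results = $connection->search('penguin');
$tags = array();
foreach ($results as $asset) {
  $tags[] = 'myservice_asset:' . $asset->asset_id;
}
\Drupal::cache()->set(
  'myservice:search:penguin',
  $results,
  CacheBackendInterface::CACHE_PERMANENT,
  $tags
);

// Later, when the external service pushes a change notification for
// asset 12, every cached item tagged with it is invalidated at once.
Cache::invalidateTags(array('myservice_asset:12'));
```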
Still, I think there is a great chance here to standardize a lot of that, maybe even get it into the Entity API. We just have to figure out which use cases actually make sense in that sort of situation. Similarly, I think we need to standardize the approach. We ended up building three very complex classes, our connection class, our controller, and our query class, basically from scratch. It works great now, but it would be amazing if we could abstract this well enough that it doesn't matter whether you're using a REST API or SOAP or whatever: if you're interested in a solution like this, you'd have a very clear set of interfaces to work with. Then there's some cool stuff we could play around with, such as automatic Form API creation. If we're already declaring our properties and what types of data they are, there's no reason we couldn't automatically create entity edit forms. Some thoughts on Drupal 8. Entities now store all data in fields, so this properties-versus-fields juggling nonsense is gone. This is a huge opportunity for this external integration solution, because the Field API, to be honest, is a million times better than entity properties. Entity properties are super basic and you get absolutely no theming with them; you basically just get a raw value. And conceptually, I think it would actually be quite similar. First, you're going to have a custom storage backend of some sort, which implies you have to know how to connect to the thing and how to query it; MySQL's no different. Second, a custom entity controller: we're going to have to create some sort of controller in which you can assume no base table, or at least abstract it in such a way that we can, like I was showing earlier, distinguish between a local save and a remote save. And third, all your remote data properties are going to have to be defined as fields.
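As an illustration of that third point, a hypothetical Drupal 8 entity class could declare its remote properties as base field definitions. The class and field names below are invented, not from an existing module; `baseFieldDefinitions()` and `BaseFieldDefinition` are real Drupal 8 core APIs.

```php
<?php

use Drupal\Core\Entity\ContentEntityBase;
use Drupal\Core\Entity\EntityTypeInterface;
use Drupal\Core\Field\BaseFieldDefinition;

/**
 * Hypothetical remote asset entity backed by an external service.
 */
class RemoteAsset extends ContentEntityBase {

  /**
   * {@inheritdoc}
   */
  public static function baseFieldDefinitions(EntityTypeInterface $entity_type) {
    $fields = parent::baseFieldDefinitions($entity_type);

    // Each remote data property becomes a proper field, so it gets
    // type handling, widgets, and theming for free.
    $fields['remote_id'] = BaseFieldDefinition::create('string')
      ->setLabel(t('Remote ID'));

    $fields['title'] = BaseFieldDefinition::create('string')
      ->setLabel(t('Title'));

    $fields['remote_updated'] = BaseFieldDefinition::create('timestamp')
      ->setLabel(t('Last updated remotely'));

    return $fields;
  }

}
```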
I don't know how hard that would be, though, because I'm not going to lie to you: our asset data objects are so large that it takes 1,000 lines of arrays to describe a complete asset data object for us. 1,000 lines. And for Tyler, who I'm sure is going to be watching this later, I am so sorry you have to do that. Some Drupal 8 resources: there was an incredible session yesterday on entity storage in Drupal 8. If you didn't see it, I recommend you cancel all your plans and go watch it. It really speaks to how we might approach this solution, and it dove more into what's going to happen with your SQL tables and things like that in different situations. Just the thinking behind it, I think you'll find very illuminating for how we could move forward in Drupal 8. There's also already an External Entities module; however, it seems to just be an example module at the moment. So I'm sure there's some sort of collaboration that could happen there to create a proper API module. As far as Drupal 7 resources, there's an amazing blog article by, I think it's Colan Schwartz. If he's here, hello; probably not. But anyway, that's how I got started figuring this out. You can also check out the Remote Entity API module. There's actually an issue in its queue where someone asked, hey, do we really have to save stuff locally? And the maintainer said, well, yeah, but if you want to submit a controller, let me know. So I pinged him so we could maybe figure something out there. And that's it, so, questions? I totally agree with you about the lack of documentation. About three years ago I started, probably with the same article, and I had to abandon it because my remote ID was a string and not an integer. But I saw in your examples that you were doing something with the remote ID, that it didn't have to be the same ID in Drupal. Is it possible to do it with a non-integer ID?
So in Drupal 8, it seems like you're going to be able to use any arbitrary ID you want, which could even be a unique URL. In Drupal 7, you are going to have to use an integer ID, so you'd have to figure out some sort of mapping solution. Yeah, okay. Yeah, thanks. No problem. First of all, great presentation, it was really interesting. Thank you. But what I was wondering is, in your solution you have an external service that's also being changed by, I don't know, some application or whatever. How do you push that information back to Drupal to signal that it has been changed, especially when using the new cache tag system in the future? That's probably an issue. Yeah, that's a great question, and there are actually two things I want to touch on. The first is that we're lucky enough that we can give our service provider an endpoint and they'll just send us a flood of log data. So we can actually know, hey, this asset or user got updated, and then consume that in some way. That's really the only way you're going to be able to do it: they have to let you know, because they are the source of truth. The other thing I should mention is that we've already run into trouble where they altered properties on data objects in their API, which led to some issues. And so far, the only way I have to solve that is I have breakfast with the guy who runs the service every Thursday morning. Thank you, no problem. Any other questions? Okay, cool. Don't forget about the sprint tomorrow, of course. And if you liked the session, please go online and say so. This was my first DrupalCon session, and if I get good feedback, maybe they'll let me do more. But actually, let me know ahead of time. So, thank you, appreciate it.