Okay, I guess we can start now. So welcome everybody — this is "Entity Storage, the Drupal 8 Way", and I am Francesco Placella, plach on drupal.org. In this session we will see a couple of code examples, but I will also talk about a lot of theory, so don't be scared about that: you don't need to understand every single slide. There's a blog post on the Drupal Watchdog website that basically provides all the information we'll be talking about here. Then we'll also have the usual question-and-answer moment, so if you have any doubt feel free to ask, or feel free to interrupt if you need clarification.

Just one single question: anyone here not familiar with the concepts of entity type definition or field definition? Great, then we'll have fun.

A note about me: I'm Francesco Placella, plach on drupal.org, as I said. I'm currently a senior performance engineer at Tag1 Consulting, and I've been working with Drupal since 2006. I'm the official maintainer of the core language system and the core Content Translation module, and I'm also the unofficial maintainer of the entity storage, form and translation subsystems. You can find me on Twitter with the plach__ handle.

So, a brief outline of the session: we will see a quick comparison between the status of Drupal 7 and the status of Drupal 8; then we'll have a look at the recommended ways to deal with entity data in Drupal 8; a brief recap of the theory behind entity type and field definitions, for those few who are not familiar with them; then we will dive into how storage schema is handled in Drupal 8, with a deeper dive into the SQL implementation we have in core; and then the fun stuff.
We will see some code, and we will see it working live.

So, Drupal 7 versus Drupal 8. In Drupal 7 we used to have what's called swappable storage, but just for fields, which means that basically you could assign a dedicated storage backend to each field attached to any entity type. This meant that every field could live in a separate storage backend, such as a NoSQL storage or, for instance, a remote storage — you name it. This had some drawbacks, actually, because the possibility to configure different backends for different fields attached to the same entity type implies you may run into trouble when you need to query across those two fields — when you need conditions that apply to both — because you would end up querying across different storage backends, and that's not exactly viable.

This was also related to another problem we used to have in Drupal 7: all this fancy stuff could be done only for fields, that is, data stored through fields defined by the Field module. Anything stored in entity properties — like, for instance, the node title — was out of luck and forced to live in the regular SQL storage. In Drupal 8 the situation changed quite a lot.
We switched from field-based storage to entity-based storage, which means that all fields on an entity type share the same storage, and the whole storage is swappable: the whole entity can be stored in a storage backend, and that storage backend can change. One note we should make is that base fields are also supported, which means that, for instance, the node title is stored along with the regular fields we were used to, like the body field. So you can swap the entire storage of the entity: as we've seen, we can store an entire node into a Mongo storage, for instance. This makes entity queries way easier, because we have a single storage backend to target.

As a consequence, we no longer allow fields to be shared across entity types. You can no longer have a field — field_tags, say — that applies to a node type and then also to the user entity type. This used to cause quite some problems in Drupal 7, because for instance we have permissions that in theory allow you to configure the same field on nodes even when you wouldn't be able to configure it on users, so it kind of created some problems. We removed that feature, but we kept the possibility to have fields named the same way attached to different entity types; these are simply seen by the system as different fields. On the other hand, we are still able to provide the same theming for those, because the name is the same.

So now let's talk a bit about the recommended ways to deal with the swappable storage. Swappable storage means that we cannot assume we will always be dealing with SQL anymore. Swappable backends actually require different approaches depending on whether we are dealing with contrib code or custom code. For contrib code, we should never assume
an SQL storage, because we have no control over which storage is configured for our entity types. So we should try to leverage the entity CRUD API every time we are accessing entity data, and if we really cannot help assuming a storage, we should do that in a way that allows us to write code that targets different storages. What do I mean? Views, for instance, implements a query backend that targets SQL, but has an API that allows writing another query backend that targets Mongo, for instance; or it uses the entity query API that we'll see in a moment. So if you need to target a specific storage, please do architect your code in a way that allows you to also support alternative storages.

Custom code, instead, may make assumptions, because usually custom code perfectly knows the environment it will be deployed in; so it is allowed to deal specifically with SQL if it needs to. But it should not bypass the entity API, which means it should not query partial entity data: it should always do what the entity query API does, which is retrieving the IDs of the entities that need to be handled, and then loading them. Loading is the only recommended way to deal with entity data, because if you don't do that you will run into trouble, and we will see that in a moment.
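As a rough sketch of that recommended pattern — hypothetical code, not from the session's slides; the entity type and conditions are just examples — query for IDs first, then load full entities so hooks, access checks and caching all apply:

```php
<?php
// Query for entity IDs only: this goes through the swappable query backend.
$nids = \Drupal::entityQuery('node')
  ->condition('status', 1)
  ->execute();

// Then load the full entities through the storage handler, so that load
// hooks and the entity cache are involved.
$nodes = \Drupal::entityTypeManager()->getStorage('node')->loadMultiple($nids);
foreach ($nodes as $node) {
  // Work with complete entities, and save through the entity API as well.
  $node->setTitle($node->getTitle() . ' [checked]');
  $node->save();
}
```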
So, what's the entity query API? The entity query API is the successor of the EntityFieldQuery API we used to have in Drupal 7. It has a much easier and more streamlined syntax, which is now really close to the DBTNG one, so writing entity queries will look very familiar to those who are already used to DBTNG. As in Drupal 7, it leverages swappable query backends, which are tied to the storage backends configured for a specific entity type; so you can plug in as many query backends as you need to support the storage backends you are using on your site, and the syntax of the query will remain the same.

A very powerful feature that the Drupal 8 entity query API has, with respect to Drupal 7, is the ability to express relationships between entity types, which means we can basically say that we want, for instance, to impose a condition on the status of the author of a node. We are crossing the boundaries of two entity types, and as long as those are stored in the same storage backend, we will be able to query across them. Obviously in SQL this translates to a join, but in other storage backends we can have whatever is needed to express the same relationship. Another cool feature is that we now support aggregated queries. All of this gives us a very wide range of queries we are able to express just with entity query, which means we have portable code that can work on any storage backend. Obviously we do not have the degree of expressiveness that SQL has, so if you still need to write an SQL query you're allowed to do so. But please, please, don't load partial data.
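As an illustration of both features — again a hypothetical sketch, not the presenter's code — the author-status condition and an aggregated count could be written like this:

```php
<?php
// Relationship across entity types: published nodes whose *author* is
// active. "uid.entity.status" follows the user reference; on SQL this
// becomes a join, on other backends whatever the query backend needs.
$nids = \Drupal::entityQuery('node')
  ->condition('status', 1)
  ->condition('uid.entity.status', 1)
  ->execute();

// Aggregated entity query: number of nodes per author.
$counts = \Drupal::entityQueryAggregate('node')
  ->groupBy('uid')
  ->aggregate('nid', 'COUNT')
  ->execute();
```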
Don't write partial data either. Use your queries to retrieve entity identifiers, load the entities, deal with them, and then save them again. If you don't do that, you're basically bypassing the entity API, which means you may run into unexpected behaviors: all the hooks involved in the load and store processes are bypassed, so every module that assumes your entities will always run through those hook implementations is basically bypassed too. And then you might have cache invalidation issues, because we now have entity caching baked into core, and if you write data directly to the storage you are bypassing that, and you will run into caching issues. So do whatever you need to do, but do it the right way. And if you really need to write your own SQL-specific code, please at least wrap it into a service, so that it can be swapped out and, if needed, an alternative implementation supporting different storage backends can be provided.

So now, a brief recap of the entity type definition and field definition concepts. An entity type definition is a way to inform the system that we have a new entity type. It's a plugin — no more, no less — a plugin definition. If you are not very familiar with that concept, there was a nice introductory session by EclipseGc yesterday, unfortunately, so you can have a look at the video. Entity type definitions allow specifying, in core, mainly two different kinds of entity types. We will focus on the first one, content entity types, because those are the ones that are actually fieldable. We also have configuration entity types, which we won't see here; those are, for instance, node types or views. They share the same basic API content entities have, but they don't deal with fields. They have plain properties, and
They have plain properties and Although even these properties share a tiny bit of API with fields, but they are not full-fledged fields so the main Properties we have in this definition the entity type definition It's the end handler sections that allow to define two handlers that are very important are critical To the entity storage API which are the storage handler which is what in this oven used to be the controller and This is in In charge of performing all the crowd operations not only the load operation we had in this seven And then we have an optional storage schema handler that is in charge to handle The actual storage schema surprise So this is not required you need to specify it only if the storage beckons you're dealing with actually require Schema if they have no such concept you can skip it The two many the two most important properties of the entity type definition are Revisionability and translatability because they actually determine which data we will need to store and how it will be stored So it basically affect have a great impact on the final schema Instead the entity field API which actually is the the API that Generalize the concept of the the seven field API and relies on field definitions and Actually provided a grid leap ahead with respect to the the seven because actually now every piece of code The answer is every piece of data that's attached to an entity is actually a field so even a no title and no type and not identifier a common parent ID or stuff everything is a field basically which means that as I said before everything can be swapped everything can integrate for flawlessly with views rest or whatever else a Feature that properly integrates with the entity L field API What we still have a distinction which is More or less maps to the properties and fields the fields the distinction we had in the seven, but it's actually Slightly different. 
We have base fields, which are more or less the ones we were used to considering properties in Drupal 7 — like the node ID — and which are shared across all the bundles of an entity type. And we have bundle fields, which are attached — or may be attached — to only certain bundles; this is usually the case for fields created through the Field UI. So a body field is usually shared across all bundles but can be removed, while an image field may be attached only to certain bundles. These are way more flexible, let's say, and are not required, because there is no business logic built upon them.

Specifically, how do we tell the system we have field definitions? The very basic field definitions — the ones that come natively with the entity type, for instance the node title or the node identifier — are provided by the class defining the entity type itself. So the Node class has a method providing the definitions for the fields it uses. Then we have two hooks, hook_entity_base_field_info() and hook_entity_bundle_field_info(), to define additional field definitions, and we have the corresponding alter hooks. Additional base field definitions are usually defined in code, because typically those are used to implement additional business logic. An example may be a scheduler module that needs an additional field to determine whether scheduling is enabled for a specific node: that's a case where all the code is written assuming the field is there, and so the definition is provided in code. Bundle fields instead, as I said, are optional, way more dynamic, and usually defined through the Field UI.
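A hypothetical sketch of that scheduler example — a module adding a base field to nodes through the hook (module and field names invented):

```php
<?php
use Drupal\Core\Entity\EntityTypeInterface;
use Drupal\Core\Field\BaseFieldDefinition;

/**
 * Implements hook_entity_base_field_info().
 */
function scheduler_example_entity_base_field_info(EntityTypeInterface $entity_type) {
  if ($entity_type->id() === 'node') {
    // The business logic assumes this field exists, so it is defined in
    // code rather than in configuration.
    $fields['scheduler_example_enabled'] = BaseFieldDefinition::create('boolean')
      ->setLabel(t('Scheduling enabled'))
      ->setDefaultValue(FALSE);
    return $fields;
  }
}
```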
Although, since we have a hook, any module can define additional bundle fields. In core those typically live in configuration, because the Field UI module and the Field module share the same configuration: the Field UI writes that configuration, the Field module implements the hook_entity_bundle_field_info() hook, and it basically provides the definitions based on the configuration it has.

We have another concept you may not be familiar with, which is field storage definitions. This roughly maps to the distinction we had in Drupal 7 between fields and instances: what we've seen so far — field definitions — roughly correspond to what we called instances in Drupal 7, while what we roughly called fields in Drupal 7 are field storage definitions, which are the collection of properties required to — surprise — store the field. This information is shared among all bundles if we are talking about a bundle field, and it is part of the definition itself if we're talking about a base field, because a base field is shared across all bundles, so it is at the same time both a field definition and a field storage definition.

These storage definitions are what is actually used to determine the storage schema, which is one of the big news items in the entity storage API: the storage now completely automatically takes care of creating, removing and updating the storage schema, without the developer needing to do anything about it. That's completely automated, and it's completely based on the definitions we have seen so far. Provide entity type definitions and field definitions, and the system will figure the rest out for you.

So, how does it work? As we've seen, through the entity type definition we define a storage schema handler, which is nothing more than a class
that's responsible for translating the entity type and field definitions into a schema definition — in the typical case we have in core, a Schema API definition. This schema is automatically generated when the module defining the entity type is installed, and automatically dropped when the module is uninstalled. The same is true for modules providing additional fields: if your scheduler module is adding a base field to nodes and you install it, the field column will be created automatically on the shared node table — at least in core, when dealing with the SQL storage.

So let's have a look at what the core SQL storage looks like. In core we have generated tables for both base and bundle fields. All the single-cardinality base fields are stored in shared tables, which means each of them is a column in a table, and all of them live in the same table — that's why they are called shared. For bundle fields, instead, we have dedicated tables, the usual field tables we were used to in Drupal 7. But this concept is extended to multiple-cardinality base fields too: even fields that are shared across all bundles are stored in dedicated tables if they are multiple-valued, and we will see that in a moment. The core SQL storage handler supports four different table layouts, depending on the properties we've seen before: revisionability and translatability.

So let's have a quick look at how these four table layouts look — and let me know whether you can hear me, because now I have to turn my head a bit. The first example we have is entity_test, a simple entity type that is neither translatable nor revisionable. As you can see, all the base fields are columns in a single table, and then we have a — sorry — configurable field, a bundle field.
That's in its own dedicated table, and as you can see the schema is pretty much the same as we used to have in Drupal 7. The only column that's not there anymore is the entity_type column, because now fields are tied to a single entity type, so we don't need to specify that. As you can see, the naming scheme has also changed a bit: we now have a prefix with the entity type name, so it's easier to see all the tables that belong to a specific entity type.

Next we have a multilingual entity type. The base table only has a few very basic non-translatable values — namely ID, UUID, bundle and langcode — and then we have the data table, which actually stores the field data, and does so per language, so you can have multiple language versions of the same data.

Then we have revisionable entity types, which instead have a base table that stores all the field data and a revision table that stores the revisions of that data; and then we have the usual dedicated tables, and in this case also a dedicated revision table, which is not there if the entity type is not revisionable, as you can see here and here.

And finally we have the most complex entity type — this is, for instance, the table layout we have for nodes — for entity types that are both translatable and revisionable. These have four tables: the base table, which again has only a few columns that are very specific and not translatable; the field data table, which holds all the field data and is translatable; the revision table, which holds only a few pieces of information about the revisions;
Sorry, and then we have the Sorry, here it is the field revision table that holds all the revision of actually all the field data And these are our four table layouts Actually as you can imagine these are quite complex And so it's really nice that the system takes care Automatically of all of that and you don't have to remember these details because it will work automatically and the system will do it for you so next Yeah, what does this mean from a developer point of view this means that? We can no longer assume a table layout besides we can no longer assume storage backend and So if you are right in contrib code as I said you should just use the entity query API and the sequel backend the sequel Query backend of the entity query API knows about this stuff and we'll figure it out for you You don't need to worry about that just specify your conditions as you would have and you're fine if instead need to write sequel specific code that still cannot make assumptions on sequel backend You have this new table mapping API that it's a very simple API that Allows to describe Which tables are used and what fields are stored in this table and it can be used to write Dynamic queries that don't need to make up such assumptions on the sequel storage It's I think or it's mainly used by the views By the views module to implement its sequel backend in a way that supports all default table layouts. 
we just saw. The goal of this table mapping API is to be generic enough to describe any table layout, but we didn't really get that far yet: at the moment we just have a default table mapping implementation that assumes one of the four table layouts we just saw. It should be enough to work with all of these, as long as you don't have to create your own crazy table layouts — and if you need to do that, well, you are welcome to join us in implementing the rest.

Okay, I mentioned that it's actually possible to update the schema, aside from creating and dropping it at module installation and uninstallation. How does this work? Again, the system is able to update the schema completely automatically; you just need to tell it to do so. And you do that through the entity definition update manager, which is a service that's also available on the \Drupal class, and which allows you to tell the system that you performed a change in the entity type or field storage definitions, so that the schema needs to be updated to reflect those changes.

One important thing to say is that the system won't proceed if the change you are specifying implies a data migration. This is not supported, just as it was not supported in Drupal 7: if you tried to change some properties of a field when there was already data, the system would just refuse to proceed. You can imagine this system as an extension of that concept, now available at the API level. Typically these entity updates are applied in a regular DB update function, and we will see a couple of those in a few moments.
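The table mapping API mentioned above can be queried from the storage handler. A hedged sketch (hypothetical calling code, using the service accessors of released Drupal 8):

```php
<?php
use Drupal\Core\Entity\Sql\SqlEntityStorageInterface;

// Ask the storage for its table mapping instead of hard-coding table
// names; only SQL-based storages expose one.
$storage = \Drupal::entityTypeManager()->getStorage('node');
if ($storage instanceof SqlEntityStorageInterface) {
  $mapping = $storage->getTableMapping();
  // All tables used by the entity type, e.g. node, node_field_data, ...
  $tables = $mapping->getTableNames();
  // Property name => actual column name for the "title" field.
  $columns = $mapping->getColumnNames('title');
}
```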
So basically, once your module has a new version that, for instance, defines a new base field, you just have to write a DB update function that notifies the system that you've defined a new base field, and the system will update the schema for you. You don't need to do anything more than telling the system you did that.

The system is actually able to tell that there are differences and deal with them automatically — there's a Drush command to do that — but you should use that only while developing. In production environments, a status report item indicating a mismatch between the definitions and the actual schema is a sign that something is wrong with the code you've deployed: that code is responsible for providing DB update functions telling the system to update the definitions. The reason is that applying in bulk all the changes required to reconcile the definitions with the schema may lead to unpredictable results, because it would take in the differences introduced by all the modules installed in your system; applying single updates from each module responsible for the changes it introduced is instead a way to ensure consistency in the update process.

So now let's have a look at... I mean, all of what we talked about so far is theory. It's important, but it's not something we will probably deal with every day. Let's have a look at the actual meaty part: the right way, what you should really take away from this session. You need to map your business model to entity type and field definitions; then, as I said, all the entity field data will be loaded and stored automatically for you, and you will gain automatic integration with Views, REST, Rules, and whatever else exploits the entity field API correctly.
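Such a DB update function can be sketched like this (hypothetical module and field names; the entity definition update manager does the schema work):

```php
<?php
use Drupal\Core\Field\BaseFieldDefinition;

/**
 * Notify the system about a newly defined base field on nodes.
 */
function mymodule_update_8001() {
  $definition = BaseFieldDefinition::create('boolean')
    ->setLabel(t('Scheduling enabled'))
    ->setDefaultValue(FALSE);
  // The schema (a column or a dedicated table) is created automatically.
  \Drupal::entityDefinitionUpdateManager()
    ->installFieldStorageDefinition('mymodule_schedule', 'node', 'mymodule', $definition);
}
```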
You will get automatic revisionability and translatability support for free — completely for free — as long as you keep exploiting the core API. If for any reason you cannot use for your field the storage natively provided by the entity API, you can specify that your field definition has custom storage, which means the system won't do anything automatically for it; it will basically just ignore it. This is mainly used for computed fields, which do not need to be stored at all — they are computed live — but you can also use it to provide your own storage. Just keep in mind that at that point you are on your own, and you need to, for instance, provide alternative storage backends if that's required, because the system will basically just ignore your field.

Once you have defined your data model — so you've just provided a few entity type and field definitions — build your business logic around it. Core does that, actually: it provides an interface encapsulating the business logic for each entity type. For instance, we have NodeInterface for nodes, which defines a bunch of accessors for the node fields; those do nothing more than formalize that those fields are required and are what's needed to implement the default node business logic. This is a good thing, because it allows you to encapsulate this logic. For instance, if you have an entity type and you know you will have a field with some business logic attached to it, but you don't have the time to implement the field, or you don't want to do that at that specific moment, you can still put it on the interface, provide
— I don't know — an empty implementation, and already write all the code that uses that interface; then you can provide the actual implementation later. That's good programming practice. Another advantage is that this approach makes working with IDEs way better: you will have auto-completion, and everything will be way easier to write. In a sense, you're also making your required data model clear, because everything that's on the interface is what your code actually needs in order to work, aside from the other APIs it integrates with.

It's a good practice to replicate this pattern core provides — we will see what I mean — so that if you're adding new base fields, you have your own accessors for those fields. Basically you provide a small class wrapping the entity class that provides simple methods to access the fields you have defined, in a stricter way, and you get the same gains I was listing earlier for the core interfaces.

So, I guess, enough talking: let's see some code. We will see an example of a model that just wants to display a stupid list of users who have created at least one published node, with the total amount of created nodes and the title of the most recently created node. This may be quite tricky to implement with a single query, because it might imply quite some joins and an aggregated query; so as soon as the
The query will perform quite badly So a typical approach to solve this is the normalization You add new columns to the table to be queried and you store data that allows you to Obtain the same result but forming a way faster query So what we will do in this example will add two fields to the user entity type That will store the data we need to perform this query and then we update those through the regular Events the entity API provides namely the classic hood node insert and hood node delete So let's have a look to the code Please tell me if you can hear me because now I have to turn my head again. So Let's go back So we start as I said with the field definitions These are the field definitions as you can see they are quite simple simple We are defining two fields Please note that we have prefixed them with a node module name So we don't clash with possible other modules and These as I said are base field definition and we are just using this factory method to create an entity reference Setting a label and setting revision ability. We don't need these in this specific example, but I wanted to show you this because It will make this property revisionable if revision ability is enabled on the user entity type So if it makes sense for the field to be revisioned It should it is a good thing to specify Specify so even if the entity type is currently not revisionable It will be just ignored if the entity type is not revisionable But it will it will be picked up when the entity type becomes revisionable and you will get revisions for free and the other field so this field will store the The most recently created node a reference for that node and these other field will store the count the node count So the total amount of nodes created by the current user Let's say now the wrapper I was talking about actually this this module is very simple You wouldn't need all this stuff. I'm just trying to show you the best practices. Anyway This is the wrapper. 
We have an interface — which I won't show because I don't have enough space, but it's just defining these same methods — and as you can see, in these methods we don't do anything more than accessing the field values we need. So we have a method returning the last created node, a method returning the last created node ID — because in some cases the node has been deleted and we don't have it — and we have the setters.

Then we have a service that encapsulates all our logic. We act on node creation and node deletion to track the creation and deletion of nodes. On creation, as you can see, we retrieve the node author, we retrieve the wrapper with this simple method, and then we set these properties: we set the node that's just been created, and then we retrieve the count — and this is the interesting part. We retrieve the count by expressing an aggregated entity query. This is an entity query, no more, no less: it's portable, and as you can see the syntax is very similar to the DBTNG one, but it works on any storage backend. Once we've got this count, we just save the user, and the data will end up in our new fields.

Same for deletion: when we delete a node, we get the author, we get the wrapper, and then we check whether the deleted node is the last created node of the user. If it is, we retrieve a new, updated one through this other query — as you can see, we have another entity query even here, and this one is simpler.
It's just retrieving the identifier of the last created node, loading it and returning it. Then we set it as the new most recently created node, we set the count as we've seen before, then we save, and we are done.

Then we just have to retrieve the list of the entities to display, and this is the method used to do that. It's again very simple: another entity query, with a condition on the count — we want only users that have created at least one node — and with an entity relationship: we are saying that we want to see only users whose most recently created node is published. This is translated to a join in SQL, but it could be translated into anything else.

So let's see this in action. First of all we install the module, and now we can have a look at the schema: as you can see, we now have two more fields — these are our new fields. Let's create a couple of nodes and see what happens. This is our table, and it's empty because we have no content. Let's create one node, and then create another one. Where is it? Here it is, and our whole list is updated. If I look at the storage, here are our field values. Now we want to unpublish one node. Well, what you are going to see does not really make sense, but I wanted to showcase the power of the entity relationship. So now we have unpublished the most recently created node, and it's gone. All of this, as you've seen, is portable: as long as the storage backend complies with the entity query and entity storage APIs, this will work on any storage backend. And on SQL it's very performant, because it's just a query on a single table.

Okay, so this was one example — you can have an applause if you want. Now let me show you another funny one: basically we will see an example of the entity updates I was talking about. So let me reset the situation here. These are the update functions
I was talking about Disable them for now because I want to install the module fresh without update functions applied and without changes We are basically emulating the fact that we are providing two different versions of the same module and Actually, as you can see So this is the change we are going to perform We are going to alter the base field the node title base field definition and we are going to say that Since we are very smart. We are going to use Multiple no title instead of single no title. So we're going to change its cardinality So let's see what happens So now we installed the module and nothing is supposed to happen because nothing the module does nothing by default So let's have a look to the status report. So the status report is happy. Okay now We are going to uncomment this line and Have a look to the status report again Here it is the system is complaining and there's nothing you can do to resolve that issue Aside from installing a more correct version of your code, which is providing adb update function Which we will do in a second. Let's have a look again So now the warning is a bit more encouraging We have a solution and let's have a look to what happens Can you notice anything we are here? This is the node title dedicated table and We don't have a node title column in the no field data anymore because this is a multiple base field So now we want to see whether I'm cheating or whether this actually works So let's check See Apparently it works more or less works Well, let's have a look to the database now This is our node field data and no title in there because it's here Here it is and Yes, now we want to restore the previous situation as I told you we cannot change the schema when there is data inside So if in a real situation, you will have to move your data to a temporary table Change the schema and move it back to the new schema all in a single update function But since we don't have the time to do that. 
I will just delete the node for now, and then I will restore the previous situation. So let's enable the update function and comment this line again, and let's go back to the status report. This is very dangerous. So we have a new update, the warning is gone again, and here it is, our node title: it's back again.

So, am I serious? Yes, I'm serious. This is currently working in Drupal 8. It's a module you can find on GitHub; the links are in a later slide, so you can just download the presentation slides when we are done here. There are a few useful links: there is the blog post link, and a few links to the code if you want to have a closer look.

So, is this completely working? Apparently so, but we still have a few things to do. As I mentioned earlier, the table mapping API is not complete yet, so to actually switch table layouts for core entity types we still need to do some work on the actual storage handlers of nodes, users, and so on, because at the moment they assume the default table layout they come with. But the API already supports this, so as soon as we properly exploit the table mapping API in those storage handlers, core will be able to switch between table layouts, and you will for instance be able to enable revisionability for users, or tags, or files, or anything you want, or translatability; I think in core everything is translatable at the moment, I don't know why.

Then we may need to define custom indexes, and we have an issue to add a hook making it possible to alter the produced Schema API array definition and add the specific indexes we may need. And then we have another issue to allow defining initial values for fields. The typical example is the File entity module, which needs to add a bundle field to the file entity type and also needs to provide a value for the existing entities; actually, that initial value may not be the same as the actual default value, so we need an API that works consistently for both base fields and bundle fields and allows providing an initial value for them.

And then we need to support base field purging. At the moment, if we tried to uninstall the active users module we just saw, and we didn't delete the content we created, the module could not be uninstalled, because there is no base field purging capability, and so the system just refuses to uninstall a module that still has data.

So if you want to help with that stuff, we have sprints on Friday, and here you have some useful information if you need it. Get in touch with me and we can figure out what you can do to help.

And yes, as you may have guessed, this is the end of the session. The two takeaways, the two things you really need to remember from this session, are these: use the Entity Field API to define your data model and code your business logic around it; and leverage fields to store data, please, please avoiding custom storage if you can help it. Always retrieve identifiers and load entities to access field data. The entity query API is very powerful, as you saw, so you should use it as much as possible.

So these are the useful links I was talking about, and I'd say we have five minutes for questions and answers, or we can have a break.

Sorry, you need to walk to the microphone, because the session is recorded; otherwise the people at home won't hear you.

Thank you first of all for this nice talk. I just wanted to ask... (Can you stay closer to the mic?) Yeah, thank you for this nice talk. I just wanted to ask if it's possible to do the info alter hooks in YAML as well. I've seen a lot of sequential coding there; could we go a little bit more towards OOP?

Well, I'm not sure I understand correctly: are you asking whether it's possible to alter that information also through YAML?

Yeah, so when installing, do you just use the YAML schema on install?

Actually, I'm not aware of any possibility of providing that information through YAML, so I would say no. I think if you are really interested in that, you should build your own thing that actually implements an alter hook, reads some YAML data, and does its alterations based on that. I'm not sure what use cases you had in mind; deployability, maybe?

No, it's that you're just adding certain fields to the user module here. So if I wanted to add a completely new node, I would not go and use info hooks; I would go and use the schema YAML, for example, or annotation discovery, something like that. So I thought about using this, since in your example you use the user entity.

Well, that sounds really interesting, we can talk about it later if you want, but I've never tried to do something like that, so at the moment I don't have a proper answer. Okay, no problem.
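For reference, the kind of info alter hook being discussed here, applied to the title cardinality demo from earlier, might look roughly like this, together with the matching update-function pattern the talk describes (park the data, apply the schema change, restore the data). This is a sketch under assumptions: `mymodule`, the temporary table name, and the elided data-copy steps are hypothetical; the hook name, `setCardinality()`, and the entity definition update manager are real Drupal 8 APIs as they existed at the time of this talk.

```php
<?php

use Drupal\Core\Entity\EntityTypeInterface;
use Drupal\Core\Field\FieldStorageDefinitionInterface;

/**
 * Implements hook_entity_base_field_info_alter().
 *
 * Sketch of the demoed alteration: making the node title base field
 * multi-value by changing its cardinality.
 */
function mymodule_entity_base_field_info_alter(&$fields, EntityTypeInterface $entity_type) {
  if ($entity_type->id() === 'node' && isset($fields['title'])) {
    // Allow unlimited title values instead of exactly one.
    $fields['title']->setCardinality(FieldStorageDefinitionInterface::CARDINALITY_UNLIMITED);
  }
}

/**
 * Sketch of the accompanying update function: the schema change is
 * refused while data exists, so the data is moved aside first.
 */
function mymodule_update_8001() {
  $database = \Drupal::database();

  // 1. Park the existing single-value titles in a temporary table,
  //    then clear them so the storage is empty.
  $database->query('CREATE TABLE {title_tmp} AS SELECT nid, title FROM {node_field_data}');
  // ... clear the title values in {node_field_data} ...

  // 2. Apply the pending entity definition and schema updates
  //    (here, the title cardinality change).
  \Drupal::entityDefinitionUpdateManager()->applyUpdates();

  // 3. Restore the data into the new dedicated multi-value table.
  // ... INSERT the parked rows into the new node__title table ...
  $database->schema()->dropTable('title_tmp');
}
```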
Thanks. You were fiddling around with the title field before, and I was wondering what happens to the data. Because you go from single value to multi value, and logically you could copy over the data; but when you go back from multi value to single value, what happens then? Is there data loss?

As I said, you are in charge of dealing with the data in that case, so it depends on you. If you think that the title looks nice that way and you just want to join the two values into a single string and store that, be my guest; another possibility is just to lose the second value. It's up to you. As I showed in those update functions, you can do whatever you want: before these lines you could move all the data to a, let's say, temporary table, do the schema alteration, and then move the data back from the temporary table into the regular storage, and then you're done. By default it refuses to proceed: if there is data there and the change requires a data migration, the system will throw an exception.

Okay, cool. Does this mean that we now have revisionable users?

Not yet. As I was saying, we need to convert the user storage class to use the table mapping API and perform its internal SQL queries in a way that dynamically supports switching between table layouts. Once we do that, and it's not actually that hard, we just didn't have time to do everything, users may have revisionability.

Okay, thank you.

At that point it's just a matter of implementing an info alter hook, marking the user entity type as revisionable, and applying the update, obviously.

Yeah, I just have a question. It's not a real use case, but it was just a funny thought: what would happen if you changed the schema for the UUID to be multi-value? I guess you would break many things. I've never tried to do that.
I think "don't do this at home" is the proper answer.

Hi, this is super exciting... (Please get closer to the mic. Okay, better? Thanks.) Okay, so this is so exciting that I want to use it today, when I'm still using Drupal 7, especially the entity query API, doing queries on relationships; that's super awesome. So is there something that would prevent someone from implementing that in Drupal 7, apart from the fact that you have to assume that all the fields are in the same storage?

Could it be done? Off the top of my head, I'd say it would just be an addition, because it would be a matter of just supporting a new syntax. What we've done there was adding some join support, and the code is quite complex, but I cannot really think of anything we've done that couldn't be done with DBTNG. So I'd say it might be possible to provide an alternative EntityFieldQuery backend that does that. You may want to talk with chx about that, because he was the one who actually coded that part.

Cool, awesome, thank you.

I have a question. Sure, last question I'm afraid, because it's 2 p.m. Yeah, it's about the bundle fields: are they the same for every bundle of the entity type? Because at the moment, in Drupal 7, when you create a new field like an image field, it's shared between users and nodes. Yeah, and in Drupal 8 you will not share them between nodes and users, but you will share them between node types, so our content types. But you're also prefixing the table names, so the room we have now for naming fields is shrinking. Do you know how many characters you can have for the machine name of fields? That's a good question.
I don't remember the exact limit, but we have an algorithm that will append a hash of all the information, so we will never actually run over the limit; we will just lose the readability of the name after a certain amount of characters.

Okay, so in the database it will work, and it will look nice in the user interface? Yeah. Great, thank you.

Oh, and one thing I always forget: if you liked this session, please go to that link and evaluate it. Thank you again.