OK, so welcome, everyone, to my session today on dynamic migrations using "templates", in quotes. My name is Daniel Sipos. I run Web Omelette, the blog where I should actually be writing more. I also wrote this book, of which I have a copy with me today for someone who answers a question correctly, hopefully. So what are we going to talk about today? First, a bit of theory around migration and the migration API, also a bit outside of Drupal, and then how this works in Drupal. Then we'll look at what a migration actually is from a Drupal point of view, with a short introduction to the ecosystem, kind of who the players are. And then we're going to see two main examples. First, a basic migration to illustrate the concepts: a simple one, migrating some taxonomy terms. Obviously, we're not going to do any live demos. And then we'll move on to the more advanced topic of dynamic migrations using what I like to call templates, though they are not really templates. All of the code you will see in these slides is in that repository, along with a lot of other stuff regarding migrations, plugins, et cetera, so definitely check that out. You can set up a site very easily just by running a couple of commands using Docker. Before we get into the meat of it, I'd like to clarify what I mean by the word "templates", because you definitely shouldn't think of real templates from a Drupal point of view; there's no Twig involved. In the early Drupal 8 days we talked about migration templates, because migrations were config-based then, but now they are plugin derivatives. That's going to be the main topic towards the end.
But yeah, before, they were used mostly for Drupal-to-Drupal migrations. OK, so outside of Drupal first: what are we talking about when we discuss migration? A very important concept when it comes to data migration, something found a lot in data warehousing and the like, somewhat outside of my pay grade, is the extract, transform, load procedure, ETL for short; I will reference it like that from now on. So what does this entail? Essentially, it's made of three steps for getting data from one place to another. The assumption is that you have data in one place, the source data, that you need to get to a destination. How does that happen? In these three steps. The first one is reading the data from the source, extracting it, and understanding it. That data can be in many different shapes and forms, in many different places, et cetera, so the first step is being able to understand the data. The second step is cleaning it, transforming it, shaping it, modeling it for the new application, the destination it's going to. That's a very important part that we always have to do, because usually we are migrating legacy stuff to newer, hopefully improved, architectures. The third step is the load phase. "Load" is a bit weird to say, but it essentially means saving the data into the destination. There are specificities to that destination application, and this step is responsible, after the data has been transformed, for saving it. I put "saving" in parentheses because load, for us in Drupal, kind of means the opposite. Some characteristics of this procedure: you can have multiple sources. You are not bound to only one source; you can aggregate multiple sources, and the fact that these three steps are separate allows you to do this.
The architecture, or rather the data model, of the source data can be different, usually is different, and should be different from the one in the destination. We typically encounter old stuff and we want to get it into new, better applications. Imagine migrating from Drupal 6 to Drupal 8: you're not going to want the same content types, the exact same fields, because that's not good. And, mostly relevant for the second step of the procedure, the source data can be really dirty, inconsistent, make no sense, have missing parts, things like that, which we need to take care of. So now let's go into Drupal and see, step by step, how this ETL is actually reflected in the architecture of migrations in Drupal. One of the first things we need to talk about, before we cover the three steps I referenced, is what a migration is in Drupal speak. Basically, it's a YAML-based plugin. Everybody knows what plugins are? Everybody who knows what a plugin is and can give at least one example, can you raise your hand, please? Excellent, perfect. For the rest: plugins are essentially encapsulated bits of functionality which are reusable, but more importantly, swappable. For example, a subsystem can define a plugin type in order to perform a certain task, but that task can be performed in one or in ten different ways, and usually, in Drupal at least, a user will choose how that task should be performed. Imagine a field, for example a field formatter: this field I want to render like this, and this other field I want to render like that. So I have two plugins for this, and the user, when they configure the field display, specifies which field formatter to use.
About the YAML part: most plugins, well, many plugin types in Drupal 8, are discovered by way of annotation, that weird doc block above the class that contains the plugin. But migrations are YAML-based plugins. That means their entire definition is declarative, in YAML files; we don't necessarily have to write PHP code for them. There are a few examples in core of YAML-based plugins, for example menu links, which are defined in YAML. We will get back to this example a bit later. So what does a migration contain? Now that we know what a plugin is, in this YAML file we essentially orchestrate the whole ETL process. Apart from some metadata (the ID, the label, things like this, as we will see), the crux of the matter is the definition of the three steps of the ETL process, and these take the form of other plugin types, namely the source, process, and destination plugins. I'm sure you can already map these three things to the concepts in ETL: we have the source, which deals with the first step, reading; the process, which transforms; and the destination, which loads, or saves. We will cover them individually in the next slides. So essentially, the migration plugin configures these three things. In addition, there are some other things underneath that are important for the process, such as keeping a map of IDs of source data versus destination data, so the system can know later on: OK, this source item maps to that entity, et cetera. So let's talk a bit about these three plugin types that come into play. As I mentioned before, the first element of the process is the source data that we need to move to the other place.
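To make that orchestration concrete, here is a rough skeleton of a migration plugin definition. This is a sketch: the IDs and plugin names are illustrative placeholders, not the talk's actual example.

```yaml
# Skeleton of a YAML-based migration plugin (illustrative names).
id: my_migration
label: 'My migration'
migration_group: my_group   # optional metadata, used for running by group
source:
  plugin: some_source_plugin_id
  # ...configuration specific to the chosen source plugin...
process:
  # destination_field: source_field mappings, each handled by
  # one or more process plugins
destination:
  plugin: some_destination_plugin_id
```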
The two biggest things that source plugins do, and that's in bold, is read this data, understand it, and iterate over it. Usually data comes in sets of records, so we have to iterate over them, and the source plugins are able to do this for various different data types. You can have CSV, as we'll see later in the slides; SQL, JSON, XML, what have you: text files, spreadsheets, it doesn't matter. There are source plugins for those things, and if there aren't, you can write your own. That's another brilliant part of the Drupal 8 plugin system: when something is missing, you write your own neatly encapsulated bit of logic that you just plug into the thing. It doesn't always work that smoothly, but it works. So there are a lot of source plugins available already, for example for CSV and SQL, and there's stuff around JSON and XML as well, but you can obviously write your own. Now, the process plugins. These are the fun ones. Once a source record has been read and understood, it's passed to the process plugins. There are three main things that process plugins do. The first is that they map the data values to our Drupal fields. Imagine we have a content type with fields, and we'll stick with the CSV example as we go forward, for the sake of illustration and ease of explanation. The value of one of the columns of a record needs to go into a field; the process plugin is there to map this value onto the field. That's its most basic and critical function; without it, we would not have a migration, so we need them at least for that. Then we have the transformation and preparation of the data. For example, we can break a value up into multiple values; we can do whatever we want to the data before sending it to the destination.
Third, we can also clean the data: perform alterations like replacing values, parsing stuff, turning Markdown into HTML. Of course, there are better ways of doing this than directly in these plugins, which is why I did not mention it on the slide. We had a training earlier this week on the middle-format approach, by which we delegate this cleaning aspect to another system, independent from Drupal, so that whenever we work with data in Drupal we are always adhering to a known and reliable contract for the format of the data, and it's also testable and reviewable as it goes into Drupal. Basically, what's in that format needs to exist in Drupal, and so on. But you can also perform these things in the plugins. Another aspect of these plugins, which is very cool, is that they are chainable. This means you can have multiple process plugins for the mapping of one single field. Imagine a pipeline of process plugins: in comes the source data, it goes to the first plugin, which performs its alteration and passes it to the next one, which receives the already-altered value. Like this, you can really alter your data however you want, and these pipelines are definable in the migration. We have many plugins already available in core, there's a whole list, and more in contrib; it's just a matter of looking at what exists before writing your own. Worst case, you can super easily write your own: there's one method you need to implement inside the plugin, it gets the value, and you return what you want that value to turn into. So it's actually really easy; you just have to be creative. The destination plugins, once the data has gone through the process plugins, are responsible for saving it. They are closely tied to the website, to the application; they know the specificities of how that should happen.
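To sketch how easy writing your own process plugin is: the one method mentioned here is `transform()`. The module name and plugin ID below are made up; the base class and method signature come from core's migrate module.

```php
namespace Drupal\my_module\Plugin\migrate\process;

use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\ProcessPluginBase;
use Drupal\migrate\Row;

/**
 * Uppercases the incoming value.
 *
 * @MigrateProcessPlugin(
 *   id = "my_uppercase"
 * )
 */
class MyUppercase extends ProcessPluginBase {

  /**
   * {@inheritdoc}
   */
  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    // Receive the value (possibly already altered by earlier plugins in
    // the chain) and return what it should turn into.
    return mb_strtoupper((string) $value);
  }

}
```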
In our case, we're mostly migrating to entities, content and maybe config entities, and the destination plugins know how to save them, and how to delete them when we roll back. That's what they're for. You probably won't have to write your own; you use the core ones and that's pretty much it. It just works. Another part I want to mention in this whole ecosystem: OK, we have the migration, we've got the orchestration of the three plugin types, but how does this thing actually run? In comes the MigrateExecutable, which does what it says: it executes the migration. Namely, it imports the stuff, or rolls it back. And that's really cool, because we also have the opportunity to roll back to a clean state, using that ID mapping I mentioned before. Essentially, what it does is ask the migration: give me the source plugin. It gets the source plugin, asks it for the data record by record, passes each record through the process plugins, and once that comes back, asks the destination plugin to save it. Rolling back is not quite the reverse, because it just needs to delete the end result. But that's pretty much it. It also sets statuses and messages and things like that in case we have problems at the individual record level. Now, a couple of words about the ecosystem, because this is very important; without it, we can't have even simple migrations, so to speak. There are three projects I'd like to mention here: Migrate Plus, Migrate Tools, and Migrate Source CSV. The latter because I like it, and because we're going to use it in the example going forward. The Migrate Plus module comes with a bunch of extra source and process plugins, for example a string replace and a skip-on-value plugin. I'm sure I don't have to explain what string replace does: you just say what you want to replace and with what, declaratively, in the YAML. Skip on value skips a record depending on whether it has certain values or not.
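The import loop just described can be sketched in pseudocode. This is a simplification, not the actual MigrateExecutable implementation; the method names are indicative only.

```php
// Simplified pseudocode of an import run, per record.
$source = $migration->getSourcePlugin();
$destination = $migration->getDestinationPlugin();
foreach ($source as $row) {
  // Run each destination property through its process plugin pipeline.
  $migration->processRow($row);
  // Save the transformed row, and remember the source ID -> destination
  // ID pair in the ID map; that map is what makes rollback possible.
  $destination_ids = $destination->import($row);
  $id_map->saveIdMapping($row, $destination_ids);
}
```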
I definitely urge you to check what's in there for more examples. Another important feature it comes with is a source plugin for URLs, basically for an endpoint, whatever that may be: a file, an HTTP endpoint, et cetera. You feed this source plugin an endpoint, and it provides two more plugin types in order to do something with it. The data fetchers are able to retrieve the data from the endpoint; for example, there's a file data fetcher and an HTTP one. And once the data has been brought in, we also have the parsers, which are able to understand what the fetcher has brought in: maybe it's JSON, maybe it's XML, maybe it's something else. With these two, we are able to fetch from whatever endpoints we want. Migrate Tools: we could maybe get away without using Migrate Plus, but without Migrate Tools, good luck, because it provides the Drush commands we need in order to run migrations. Even if you've written your nice little migration, you can't run it unless you write the code to run it programmatically, so this module is critical. In order to do this, it extends the MigrateExecutable and also provides some extra features, like the ability to import only certain IDs from the source, and things like that; I'm sure you're familiar with a similar capability from Drupal 7. Finally, Migrate Source CSV is an example of a very small module that does one thing: it provides a source plugin for CSV. We give it a path to a CSV file, and it is able to read it, iterate over the individual records, and provide the data. We will see how it actually works in practice. OK, any questions so far? Well, maybe at the end. Now let's see how this works in practice with a very simple migration of this data. So we have a CSV file.
As you can see, we have an ID column; it doesn't matter for our purposes what these IDs are, but if you figure out the pattern of those IDs, kudos. And we have two more columns: label_en and label_ro. I'm Romanian, that's why. Essentially, these are categories of stuff, of food products, whatever. In the first migration, what we're going to do is import these things as taxonomy terms in English. Forget the Romanian translations for the moment; we're going to get the English stuff in. That's the plan. We have a vocabulary, for now in English; we'll make it multilingual later. Check out the repository: three commands with Docker and you're up and running with the website, with the migrations in place, with all sorts of stuff in there as examples, including what we're doing now. You can immediately run the migration to see how it works. So this is what it looks like; I'm not sure if it's big enough. Migrations, as I mentioned, go into YAML files inside a module, under the path I put on the slide, so under the migrations folder. The file name needs to be namespaced with the module name, then "migration", then the ID of the migration, which needs to be unique. Inside, we proceed with the definition of the migration. You can see it's all declarative: there's no PHP involved. We start with metadata like the ID, the label, and the group, so that we can group migrations and later run multiple migrations by group, et cetera. Then we have three keys: source, process, destination. I should have put them in the right order. The very first one, as you can see, is the source. This is where we define the plugin to be used as the source: we say csv, which is the ID of the plugin, and what's underneath is the actual configuration of that particular plugin.
If we used another source plugin, all of the lines underneath would be different, because they are specific to CSV. As you can see, you have to give it a path to where the file is; maybe another plugin would also have a path key, but whatever. Then we have to say how many rows of the file are headers; you can have one or two or more, and we just declare that. We specify the main ID, which column is going to be the unique identifier for each record. And then we specify the column names: we have id, label_en, and label_ro. We basically tell it what the CSV file looks like. That's it. Then we have the destination definition, in which, very simply, we tell Drupal: OK, save these as taxonomy term entities. It's very simple and straightforward; you don't have to tinker with this much. And then we've got the process section, where we only have two very simple mappings to do. First, the name: we want to use the label_en column, the column in the CSV whose value we want to simply copy over. If we omit specifying a plugin, as we do here, we defer to the get plugin, which essentially just copies values over without any other alteration. Otherwise, we specify which plugin we want this mapping to use for this field. For the vid field, which is the vocabulary ID, we use the default_value plugin, which says: for all the records that you import, use this default value. And we want this value to be "categories", the machine name of the vocabulary, because we want the terms to go into the categories vocabulary. All the records will get this; it has no bearing on the source data. So this is pretty much it; as you can see, not rocket science. This can get more complex, for sure. We can have all sorts of plugins; for example, here we could put an array of plugin definitions to make it a chain, and the value goes down that pipeline.
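Pieced together from the description above, the categories migration might look roughly like this. The file path is made up, and the exact source keys (header_row_count, keys, column_names) depend on the Migrate Source CSV version in use:

```yaml
# Sketch of the categories migration; path and details are illustrative.
id: categories
label: 'Category terms'
migration_group: demo
source:
  plugin: csv
  path: 'modules/custom/demo/data/categories.csv'
  header_row_count: 1   # one header row in the file
  keys:
    - id                # column holding the unique record identifier
  column_names:
    0:
      id: 'Unique identifier'
    1:
      label_en: 'English label'
    2:
      label_ro: 'Romanian label'
process:
  name: label_en        # no plugin given, so "get" just copies the value
  vid:
    plugin: default_value
    default_value: categories   # machine name of the vocabulary
destination:
  plugin: 'entity:taxonomy_term'
```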
That's it. We can now run our Migrate Tools commands: clear the cache, enable the module, the normal stuff; then we can see what migrations are available with migrate status, and then import or roll back our migration by name. So yeah, that's pretty much the basic migration. No PHP code. OK, just checking the time. So we now have the migration in place, we ran it, and we've got terms with our labels in English. Now we want the other column: we want to translate the terms, to import the Romanian translations. We don't want new terms; we want the existing ones to be translated. In come the plugin derivatives. What are they? We know by now what a plugin is. A plugin derivative is an instance of the same plugin, meaning that we can have dynamically generated plugin instances of the same thing. They are defined statically, but then they are enriched dynamically, depending on the state of the application. It's kind of like the info hooks in Drupal 7, where we had foreach loops to create multiple things: in hook_block_info, for example, code would loop through existing entities, like menus, and create a block for each. But in Drupal 8 it's object-oriented, so we use a deriver class to generate them. OK, so now I have a question for the room: can anybody give me an example of the usage of plugin derivatives in Drupal 8? [An audience member suggests blocks provided by an integration.] In core, I mean, an example from core. [Audience: local tasks, for example.] Yes, yes, that's a good example, and menu links as well, indeed. OK, so come see me after and you can get a copy of my book. So why do we need derivatives for our migration? Why don't we just write another migration for this? Well, imagine we have 10 columns, 10 languages.
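The commands referred to here, in their Drush 9 spelling (Drush 8 used hyphenated forms like migrate-status), are roughly:

```shell
drush cr                          # clear caches so the new plugin is picked up
drush migrate:status              # list available migrations and their state
drush migrate:import categories   # run the import
drush migrate:rollback categories # undo it, using the ID map
```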
If we have 10 languages on the website, we'd have to copy the whole thing and write a migration for each. Not cool. Also, I don't know if it would even work; I didn't try. But it definitely would not be a fun exercise. What we want to achieve is to define one migration statically and then enrich it with the languages available on the site, so that we generate an instance of the same migration for each language. Each instance will then know which column in the CSV contains the data for its language; that's what we're going to do in this example, though there can be others. Then, when we add another language, a new migration instance is automatically generated for that language. So what will this look like? Now we're going to look at some code, because aside from the migration we also need PHP. Next to the migration we wrote first, we now have another one, called category translations, which is essentially very close in definition to the first one; that's why I'm omitting some things here and will just talk about the differences. The first thing you'll see is that we specify a deriver, which is going to be responsible for generating our migration instances: the categories language deriver. We'll see how that looks in a moment. Second, for the destination, and this has to do with multilingual aspects rather than with derivatives, we have to tell it to save translations; without this, it would save new entities instead of saving translations onto the existing entities. That's part one. Part two: we go to the process section, where the very first and most important thing you should notice is that we don't have the name mapping. We don't know which column holds the name for a specific language; that's the dynamic part.
But we do have some other important things to define, such as the tid, the taxonomy term ID, because we want the translations to be added to the existing terms, so we need to map them to the previous migration. For this we use the migration_lookup plugin, and we tell it: take the ID of the source, look it up in the ID map of the categories migration, and put the translation on the same term that was previously saved. Then, specific to multilingual entities, we have the content_translation_source field, which we need to fill in if we want translations; we put English for all records, because the source language was English when we first migrated. Finally, we set a dependency on the previous migration, because these migrations can't run before the categories one has run, so it's important to have that. Now we write the deriver. These are the bullet points of it: it goes in that namespace, as you can see; it extends DeriverBase, as all derivers do in Drupal; and since we want to inject dependencies, which we do because we need the language manager, we implement ContainerDeriverInterface. Essentially, there is one method we need to implement, getDerivativeDefinitions, which gets the base plugin definition as an argument. That base plugin definition is essentially an array representation of this whole YAML file: the whole thing gets transformed into an array and we receive it here, so that we can enrich it. This is what the method looks like; it's definitely not rocket science. We have the language manager already injected, and we loop through all the languages on the site, skipping English because we already migrated into English, and create one derivative for each of the other languages; we'll see the method that does that in a second. Then we return all the derivatives.
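Put together from this description, the translations migration might look like the sketch below. The IDs and the deriver class namespace are assumed; migration_lookup, default_value, and the translations destination flag are real core features.

```yaml
# Sketch of the category translations migration (IDs illustrative).
id: category_translations
label: 'Category term translations'
deriver: Drupal\demo\Plugin\migrate\CategoriesLanguageDeriver
source:
  plugin: csv
  # ...same CSV source configuration as the categories migration...
process:
  # "name" is intentionally absent; the deriver adds it per language.
  tid:
    plugin: migration_lookup
    migration: categories
    source: id            # look up the term saved for this source ID
  content_translation_source:
    plugin: default_value
    default_value: en     # the original language of every record
destination:
  plugin: 'entity:taxonomy_term'
  translations: true      # save as translations, not new entities
migration_dependencies:
  required:
    - categories
```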
This is where the dynamic part comes into play, because we don't know at any given moment what languages there are on the site; we loop through them and create the derivatives one by one. And this is the method that does the actual work, getDerivativeValues; we pass it the base plugin definition and the language for which we want to make a derivative. Here we enrich the migration definition. As you can see, the structure is the same as in the YAML file: we have process, and then the mapping for the name field. Now, in this context, we have a language, so we know which column we want to look at, and we put that under the source key of the configuration of that plugin: label underscore followed by the language code, so for Romanian it will be label_ro, and it will look in that column. We also use the skip_on_empty plugin, which, I don't remember whether it comes from core or Migrate Plus, is very important in this case: if we don't have a value in the Romanian column, if one of the records is missing it, we don't want the translation to be imported, one because it would have no name, and two because it would probably break. With this plugin we say: if it's empty, just skip the entire row, so that particular term will not have a translation, which is fine; not everything has a translation. Had we not done that, we would probably have had broken data at some point. The other thing we need to set dynamically, since we only focus on the dynamic parts, is the language ID of the translation. Here, again, we use the language: for all records of this migration instance, we set that language code using the default_value plugin we've seen before. We return this, and that's it. Now we can clear the cache and check the status.
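A sketch of what such a deriver class might look like: the class and module names are made up, while DeriverBase, ContainerDeriverInterface, and skip_on_empty come from core.

```php
namespace Drupal\demo\Plugin\migrate;

use Drupal\Component\Plugin\Derivative\DeriverBase;
use Drupal\Core\Language\LanguageManagerInterface;
use Drupal\Core\Plugin\Discovery\ContainerDeriverInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;

class CategoriesLanguageDeriver extends DeriverBase implements ContainerDeriverInterface {

  protected $languageManager;

  public function __construct(LanguageManagerInterface $language_manager) {
    $this->languageManager = $language_manager;
  }

  public static function create(ContainerInterface $container, $base_plugin_id) {
    return new static($container->get('language_manager'));
  }

  public function getDerivativeDefinitions($base_plugin_definition) {
    foreach ($this->languageManager->getLanguages() as $language) {
      // English was already imported by the base categories migration.
      if ($language->getId() === 'en') {
        continue;
      }
      $this->derivatives[$language->getId()] = $this->getDerivativeValues($base_plugin_definition, $language->getId());
    }
    return $this->derivatives;
  }

  protected function getDerivativeValues(array $base_plugin_definition, $langcode) {
    // Enrich the static definition with the language-specific parts.
    $base_plugin_definition['process']['name'] = [
      'plugin' => 'skip_on_empty',
      'method' => 'row',
      // Look at e.g. label_ro; skip the whole row if it's empty.
      'source' => 'label_' . $langcode,
    ];
    $base_plugin_definition['process']['langcode'] = [
      'plugin' => 'default_value',
      'default_value' => $langcode,
    ];
    return $base_plugin_definition;
  }

}
```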
If we run migrate status, we will see two migrations for our data source, because we have two languages. We have the original categories migration, and then we have category_translations:ro, with the ID of the language at the end. The derivative ID is made up of the main plugin ID, a colon, and the ID of the derivative. Now, if we add five more languages to the website, clear the cache, and run the migrate status command, we will have five more instances of the same plugin. Even if we don't have the data in the CSV, we have the migrations; they will just have no records, I think. And as we add columns to the CSV, translations for these terms, we can just import the migrations without having to do anything else. If we import everything, it will first import the original migration, because the derivatives depend on it, and then import all the derivatives. And I think that's it, apart from a couple of conclusions, and then maybe it's time for some questions. First, migrations are awesome. The power is unbelievable; it's just a matter of creativity, and we can do a lot of powerful stuff. And here I'm not talking about Drupal-to-Drupal migrations; I don't use those and don't care for them. Even for simple sites, I will export my data to a common, simple format, just CSV or JSON, and then import that into the Drupal 8 site, which is essentially a rebuild, not an upgrade. It's super fast to set up migrations. This one was simple, for sure, but even if you go bigger, it's still just as simple, because it's just a matter of more fields, and you have a lot of process plugins to do all sorts of stuff. And when it comes to dynamic migrations, this was one example: multilingual is very important for a lot of people.
It's important for me too: I work a lot with the European Commission, where we have 20-plus languages on the websites, so it's important to keep in mind. But there are other examples. For instance, I had the use case of importing Drupal Commerce product variations: based on attributes and prices for the various attributes, it would generate migrations. So it's there for the using. And that's it, I think, pretty much on time. So, any questions? [Housekeeping: you can take the survey and join the contribution sprint.] Any questions? Somebody just asked, so I'll start with that. [Audience question:] How do you deal with one-to-many migrations? For example, I once had a problem: I migrated a Drupal 6 site to Drupal 8, and the information architecture changed from big body fields to a paragraphs architecture. So I wrote a migration that tore apart the HTML from the body field, but ended up having several entities as destination entities. How do you deal with that? Well, first, I recommend doing the part about breaking up the body field into multiple values outside of Drupal 8, beforehand, to arrive at a data format that is nice, clean, and predictable. But then, once you have multiple values, you can explode them using process plugins to save multiple field values; it's just a matter of configuring plugins for that. [Another audience member:] I did the same thing a few weeks ago: from an old Drupal website to a new one using paragraphs, reading from the database. I just created two migrations, the first one creating the paragraphs, and then, using the same IDs, creating nodes and attaching the paragraphs that had been created a moment before. It's actually very fast. Yeah, there are all sorts of ways, and by looking up between multiple migrations, you can connect these things a lot, and you can break things down into multiple migrations, as long as you can map them in your source.
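The explode approach mentioned in this answer could look something like the pipeline below; the field name, source column, and delimiter are hypothetical, while explode and callback are real core process plugins.

```yaml
process:
  field_tags:
    # Split a comma-separated source value into multiple field values.
    - plugin: explode
      source: tags_string
      delimiter: ','
    # Then trim whitespace from each resulting item.
    - plugin: callback
      callable: trim
```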
It's not that difficult. Anything else? [Audience question:] My question is about migrating paragraphs. We have a kind of multilevel hierarchy, and from my point of view it is really hard; the only way I found is migrating the deepest level first and then going level by level, to migrate the references between them into Drupal 8. Do you have a more useful or convenient way of doing that? You're migrating from Drupal 7 to Drupal 8? Paragraphs, in a multilevel hierarchy. Are there references between the paragraphs in Drupal 7 that reflect this hierarchy? [Audience: yes.] OK, so then that's fine. The whole thing is that you need to get that data out of Drupal first of all, and keep those references. You can have all the paragraphs in one single set, as long as each can reference its parent. Then, in Drupal 8, you can reference them again using that. You can use stubs as well, in case the parent record has not been created yet, but as long as you map the parent using the same migration, so the migration itself, it should work. For example, if anybody recognized it, the IDs in my CSV represent a hierarchy between the terms, and in the repository I referenced there is a process plugin by which I import these things as hierarchical terms; it works on the same principle. I'm not 100% sure how the paragraph referencing actually works, but I'm guessing the principles are the same: in the process plugin, you can say, OK, look up the other one to mark as the parent. Yeah, I think we have time for one more question. Anyone? [Audience question:] In the migration YAML file, you see a lot of keywords you can use for the source, process, and destination plugins. Is there a cheat sheet of what you can use? Well, yes and no. The source plugin configuration is specific to the source plugin itself, right?
So the very first thing you need to do is look in the code of the source plugin. This is always true, for everything: first you look in the code, see how it is built, and see what it expects. Some of the source plugins, and other plugins as well, definitely the ones provided by core, have documentation above the class describing what they expect and how they can be configured. Contrib plugins should too; I don't remember this particular one, but typically it's in the doc block. Worst case, you just dig through the code to fish it out. I know the URL source and the data parsers from Migrate Plus didn't have great documentation, because nobody had the time to write it, so I had to fish out from the code how to navigate through the JSON array. But typically you'll have some of this information. [Audience: the doc blocks on the plugins can give examples.] Yes, they tend to give examples, especially the process and source plugins in core. [Audience: there is a cheat sheet for the process plugins of Drupal core and Migrate Plus, I think, on Drupal.org.] Oh, OK. To be honest, I personally don't rely on those, because they can be outdated or wrong. The best advice I can give, always and for everything like this, is to check the code in action. Not even the comments, but see how it is built and what it expects, because then you also get an understanding of the internals of the thing, and you'll be able to visualize better in your head how things work together. Then the next time you use it, you kind of know, rather than having to remember what that thing is. This is why I don't even know where that documentation is. So I think that's it; we're on until 11, right? OK, one more really quick one. [Audience question:] Does the CSV source plugin support multiple paths? I can imagine that putting all your translations in one CSV can become awkward at some point.
So having separate CSVs for each translation might be more convenient. Could your deriver solve that issue as well? To be honest, any part of the definition that you see here can also be omitted from the static declaration and filled in during the derivative generation. Basically, when you create the derivative, you can specify in which file the values for that language exist. So you can have files keyed by the language code and just point to the right file. You remove the path from the static definition, and in the deriver you add, in the base plugin definition, the source path: the path plus the language code. And then, of course, you also have to specify which column to use, et cetera. But yeah. OK, I think that's it, but I'm around, so we can talk more.
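For example (a sketch, assuming files named like categories.ro.csv), the deriver's enrichment method could set the path per derivative:

```php
// Inside the deriver's getDerivativeValues(); the file naming scheme
// and directory are assumptions for illustration.
$base_plugin_definition['source']['path'] =
  'modules/custom/demo/data/categories.' . $langcode . '.csv';
```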