Let me introduce myself quickly. My name is Mark Hatle. I've talked at ELCs before, usually around the Yocto Project, things like that. This one's a little bit different because I'm still using the Yocto Project, but it's very focused on a specific workflow: heterogeneous systems, and how we can use various tools to generate them using the Yocto Project itself. Just as a word of note, if you download the slides off the website, I'm going to be skipping some of them to make sure everything fits in the allotted time. If I go fast through a couple of things, it's pretty much just background material; it's not the meat that I'm skipping.

With that said, the first thing we need to do is define what a heterogeneous system is, at least in this context. Going back in my past, we've got some good examples: VME and CompactPCI. You can drop in cards of various types. They all share a common backplane and share some devices, but effectively every board is its own thing that just happens to share resources.

Getting a little more modern in the way we do things, you've got virtual machines and containers. Those are heterogeneous devices as well, because they're going to be running different operating systems, or possibly no operating system at all, bare metal, et cetera. So put that in context here: part of what I'm talking about is the ability to generate virtual machines and containers that work together to build that heterogeneous system.

Another example is something like the TI OMAP. You have a main Cortex processor with a DSP. They do not run the same instruction set, and they don't use the same compilers, build systems, or any of the rest of that. So how do I build, in one place, the things that would traditionally be a firmware component or something built externally?

And then the actual example I'm going to show, the previously Xilinx, now AMD, Zynq UltraScale+ MPSoC: it contains Cortex-A53 cores, a Cortex-R5, a MicroBlaze, a platform management unit, and, because it's an FPGA, somebody could implement a soft CPU in it as well. You get a very heterogeneous environment in a single system on chip, which is part of the reason I think this is a good example for how to do this in a more complex world.

As I mentioned, the complexity goes beyond just different types of processors. You run into cases where you have different build systems and different compilers, you're targeting different runtime environments, and each runtime environment has a different view of the shared resources of the system. Some of these things cannot be built in the Yocto Project at all; they're only available as binaries. How do you incorporate those? How do you bring them in? All of those items were part of the design of what I did on the ZynqMP.

But let's first talk about the system device tree. If you're not familiar with device trees, the main thing is that a device tree describes the hardware components from the view of one operating system, for instance Linux: how that operating system is going to talk to and manage those components.
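To make that concrete, a minimal OS-view device tree fragment might look something like this (a sketch; the address and node layout are illustrative, not taken from a real board):

    /dts-v1/;
    / {
            compatible = "xlnx,zynqmp";
            #address-cells = <2>;
            #size-cells = <2>;

            /* A serial port from Linux's point of view: where it lives
             * in the address map and which driver should bind to it. */
            serial@ff000000 {
                    compatible = "cdns,uart-r1p12";
                    reg = <0x0 0xff000000 0x0 0x1000>;
                    status = "okay";
            };
    };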
That means in a heterogeneous environment, we need multiple device trees, one for each operating environment. Then how do you keep those in sync? How do you make sure they match your current hardware design? You iterate your hardware, you iterate your software design, you have to keep changing these things and making sure they don't conflict, and this becomes difficult to manage long term. You can set up process and procedures around it, but it's still a manual process for the most part.

Earlier today there was a system device tree session, which actually covers some of these things, but effectively that's where we start to go here: let's describe the hardware once. And again, the hardware in this case could be a virtual machine configuration, it could be my Xilinx FPGA, it could even be a VME chassis with five boards in it. It's still one hardware description, one system device tree. It doesn't take the view of any individual operating system; it really describes it all. So somehow you have to transform that system device tree into the more traditional device trees. That's where the Lopper utility comes in, along with the lops that drive it.

Just a really quick example: on the left, we've got three different CPUs with all the corresponding hardware assigned to them. We run them through some lops and transformations, and on the right-hand side we generate CPU-specific views of the system for the individual operating systems. This can certainly be taken even further, into Xen configurations, VMware configurations, even container configurations where you only want certain hardware to be visible in a container. You can really start to limit things down from an application point of view if the applications are using system device trees.

So let's skip to the Yocto Project. Most people probably know the Yocto Project, so I'm not going to dig too much into this, but I wanted to give a reminder or a refresher to folks. The main thing is that the Yocto Project itself is not a Linux distribution. It is a distribution builder that usually focuses on Linux. What that really means is I can build FreeRTOS, I can build Zephyr, I can build Linux, I can build bare metal, you name it, but I don't have to build them in the Yocto Project system.

All of the configurations start with a machine. This defines your hardware capabilities. The machine itself internally defines a default tune, which describes the CPU you're targeting and the instruction set you're targeting. All of the components, the metadata, the instructions, your configuration are captured either in your build directory or in individual layers, and by using layers you can share them with other users. Inside the layers you have recipes: the individual instructions used to actually generate components like Bash, or glibc, or the compiler, or the kernel, that kind of thing.

When you run a build such as "bitbake core-image-minimal", it loads the system configuration and targets the core-image-minimal recipe. That recipe has a series of dependencies on other recipes and other components in the system, and the build puts them together, assembles them, and creates an image for your system. So again, the key configurations are your layers and your local.conf. Together you get a basic configuration directory with just two configuration files that point to the rest of the system. That's what most people are used to seeing.

But if we start to talk about multiconfig, and that's where we get into this hybrid, heterogeneous environment, you can actually specify multiple configurations that will work in a single build. Not only will they work in a single build, you can have dependencies between the various components of the multiconfig, and that's what starts to drive toward more of a system design. For the main multiconfigs, the general recommendation is that each multiconfig configuration file works similar to your local.conf: you specify a machine and a distro, and you also have to specify the TMPDIR where you're going to build the software. So three items is what most multiconfigs are going to have in them.
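As a minimal sketch of the mechanics (the configuration name and values here are illustrative), the local.conf side and one multiconfig file look roughly like this:

    # conf/local.conf: enable an additional build configuration
    BBMULTICONFIG = "r5-baremetal"

    # conf/multiconfig/r5-baremetal.conf: the three recommended items
    MACHINE = "zynqmp-generic"
    DISTRO = "xilinx-standalone"
    TMPDIR = "${TOPDIR}/tmp-r5-baremetal"

Each multiconfig gets its own TMPDIR so the builds don't collide with each other.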
But there's nothing actually in the multiconfig that limits you to those three items, and nothing that says you have to use those three items. You can put in different parameters, you can override whatever you want, and as long as you end up building the system with those components, you're good. This is where I deviate a little bit from the basic instructions you would have seen from the Yocto Project: I'm setting some different values here, but essentially I'm still following the pattern the Yocto Project designed.

As that example showed, BBMULTICONFIG is really the only thing that gets added to your local.conf to use multiconfig, and there's a multiconfig subdirectory underneath conf where you put your individual multiconfig files. I've got some examples here that you can look at later that show how the dependencies work, among other things. The key thing is that there are dependencies that you're going to want to be able to use.

So now let's take a look at what I actually did. This is really the meat of the presentation. We want to be able to generate a system from a system device tree and go all the way down to the image side, and have a total system where I can run one build command and actually get something to execute and run on, without having to worry about: how did I build this little piece of firmware over here? How do I build this thing? How do I configure this other thing?

It all starts with some of the ZynqMP tools. Again, this is very specific to this environment, but I'm going to mention some of these terms because I will probably use them later on, and I just want you to know what I'm talking about. Vivado is the software that actually does the HDL design and basically drives the hardware tools. From Vivado we get a hardware description that is saved in what we call an XSA file; that's really just the hardware description itself. There is a tool called DTG++, which is a device tree generator; the "++" part is that it actually generates system device trees, and it uses the XSA file as its input.

We then have these components called embedded software, or ESW, components. These are the bare metal components that run the platform management unit, or maybe they run bare metal on the R5 processor to do some real-time activity, something like that. The ESW software is all based on newlib, and newlib needs a hardware description library; that hardware description library is libxil. So we need a way to take the hardware description, compile it into libxil, and actually make it do something reasonable. From that, we generate our FSBL, our first stage boot loader. The first stage boot loader calls U-Boot, which most people are familiar with. And we have our PMU firmware, the platform management unit firmware.
This is what actually handles all of the underlying system behavior, including FPGA behaviors: loading, unloading, locking regions, things like that. And in the end, when we generate our images, anything that goes into the flash memory is assembled by a program called bootgen, using its BIF boot-image format, which puts these pieces together in a way that the hardware can come up from power off, load whatever's necessary, bring the system up, and eventually get to a standard operating system like Linux.

So everything begins with the hardware flow, which outputs an XSA file. Now comes the part that I'm interested in, because I'm a software engineer: how do I get from that hardware description to something I can use? That's DTG++. I pass in my XSA and run it through; it parses it, pulls files out, generates components, everything else, and you end up with a directory tree similar to what you see on the right. In that tree, you've got an include directory with a bunch of individual bindings that say: this is how I operate the clocks in the system, this is how I reset the board, this is how I talk to the GPIOs. Further down you can see a bunch of .dts and .dtsi files; those are the items that actually describe the system device tree, and it all starts with system-top.dts.

Then there are two additional files, psu_init and psu_init.h. If you've ever tried to bring a board up completely from power off, you know there's a bunch of magic parameters that have to be initialized. psu_init and psu_init.h describe all of those parameters. It's not really human readable; it's just "put this value here, put this value here, put this value here." But the fact is it's all generated by this DTG++ software in a way that later things, like the first stage boot loader or bootgen, can access and use.

So then we get into system configuration. I want to start with that system device tree and those header files and generate my system configuration. We developed a tool, very much at a proof-of-concept or prototype level, called dt-processor, the device tree processor. It's able to read the system device tree, it has the knowledge of what the Xilinx parts look like, how they operate, and how they need to be configured, and it generates a Yocto Project multiconfig. When I run dt-processor, I say: my configuration directory is here, my system-top.dts is here, and I want you to write the configuration in this other location.

It ends up writing a local.conf fragment like the one in the lower box. The main thing is that it requires a generated file called cortexa53-zynqmp-linux, and that's what defines all of your machine capabilities. Then we have our BBMULTICONFIG that defines: this is the whole list of everything that I generated; these are all heterogeneous targets that I'm capable of building and using as dependencies. And then there's just a bunch of override settings for the firmware and the operating system.
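Reconstructing that lower box, the generated fragment is along these lines (a sketch; the override variable names are illustrative, but the require plus BBMULTICONFIG shape is the essential part):

    # Appended to local.conf by dt-processor (reconstructed sketch)
    require conf/cortexa53-zynqmp-linux.conf

    # All of the heterogeneous targets that were generated
    BBMULTICONFIG += "cortexa53-zynqmp-fsbl-baremetal \
                      cortexr5-zynqmp-fsbl-baremetal \
                      microblaze-pmu-firmware"

    # Override settings wiring the firmware builds to those configurations
    FSBL_MCDEPENDS = "mc::cortexa53-zynqmp-fsbl-baremetal:fsbl-firmware:do_deploy"
    PMU_MCDEPENDS = "mc::microblaze-pmu-firmware:pmu-firmware:do_deploy"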
For a quick view, if you look on the right, you can see the map of everything that's generated. Let's focus on that first part, because that's really the meat of this: what the processor generator is doing.

The first thing it generates into that cortexa53-zynqmp-linux base configuration is: here's my Linux DTB. I need my Linux DTB in order to run U-Boot, I need it to run Linux, and Linux is the center of this design; everything spawns off from Linux doing something. But you don't have to do that. You could say that Zephyr is going to be the core of my design, and Linux is the multiconfig, the secondary operating system. We also define that this thing is a ZynqMP, and because it's a ZynqMP, we're going to use the zynqmp-generic machine. We know all of the system-on-chip parameters are going to be set, but we don't know any of the board-specific parameters. Then we just set a couple of overrides for other components in the system, really just to use the generated environment that we have.

If we look at what it's actually doing under the hood, it's running Lopper. As Bruce said earlier in his talk, Lopper is capable of converting device trees from one format to another, but it can do a whole lot more than that. We also use it to generate our configuration fragments. This particular invocation is used to generate the Linux device tree, and all it does is the Linux device tree, but we're using three different lops: we lop off the A53-specific items, we add the Linux domain information, and then we prune certain information that we know may have been in there and isn't necessary for our Linux configuration. So we start with system-top.dts, Lopper processes it, and the output is a DTB, a binary device tree.

But let's talk about some of the other things we can do here: MicroBlaze. MicroBlaze is a completely configurable CPU core. There are twelve different versions of the instruction set out there, and within each version there are variations that have multiplication or division, long words, short words, you name it. It's a very, very configurable CPU type. So we have to be able to tell the system: not only do you have a MicroBlaze, but it's this version with these capabilities, so that we can set the toolchain configuration properly. This is where that default tune I was talking about earlier comes in. We load in system-top.dts and run a special lop, lop-microblaze-yocto, and its output is the configuration you see below. We add an available tune, and it starts with cpu0; in this case, cpu0 is the PMU. It says: I'm a MicroBlaze, I'm version 9.2, I've got a barrel shifter, I've got pattern compare, I've got reorder, and I don't have any hardware math, so I'm soft-FPU. And the last item says: by the way, I'm also an alias to the microblaze tune. So if something says "I am the MicroBlaze PMU," it will find the correct tune, which is cpu0. That's all it generates, and it makes it available to the system.
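The generated tune fragment comes out roughly like this (a reconstruction; the exact feature names in the real metadata may differ):

    # microblaze.conf: tune generated by lop-microblaze-yocto (reconstructed sketch)
    AVAILTUNES += "cpu0"
    # cpu0 is the PMU: MicroBlaze v9.2, barrel shifter, pattern compare,
    # instruction reorder, no hardware FPU
    TUNE_FEATURES:tune-cpu0 = "microblaze v9.2 barrel-shift pattern-compare reorder fpu-soft"
    PACKAGE_EXTRA_ARCHS:tune-cpu0 = "${TUNE_PKGARCH}"

    # Alias: anything asking for the generic microblaze tune gets the same features
    AVAILTUNES += "microblaze"
    TUNE_FEATURES:tune-microblaze = "microblaze v9.2 barrel-shift pattern-compare reorder fpu-soft"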
So then we expand to the next level: what about the bare metal configuration? I was talking before about that libxil component. How do we generate libxil so that we get the hardware that's available on our system, but don't reference hardware we haven't enabled or never implemented, again, because it's an FPGA? How do we make sure the size stays small, because it's bare metal and maybe we're resource constrained on those bare metal components?

So again, we use Lopper to go through the system and transform things. The transformation in this particular example is, again, processing the A53 processor, but there would be transforms for the Cortex-R5 as well as for the MicroBlaze. We start by pulling out the A53-specific components, and we process that to generate a bare metal version of the toolchain configuration. It's all of the A53 in this case; we don't prune it, we don't modify it, just the A53-related components. We then use Lopper a second time to process that system device tree and ask: what hardware is available in the system that happens to be visible to the Cortex-A53? That feeds this embedded software component, which has a table of: these are all the pieces of software I understand, these are all the device nodes I understand, and it can actually do the mapping for configuration data.

Effectively, two configuration files are generated. One is a distro.conf fragment specific to that domain of the system, and in there it sets distro features: avbuf, axipmon, et cetera, et cetera. Then in the actual recipes themselves, we add PACKAGECONFIGs for the libxil package that say: if avbuf is specified in the distro configuration, I now also require the avbuf component of the system, which is just a fragment of the library. The final libxil links all of those fragments together, so we know libxil always matches the hardware one to one.

The other advantage of this: let's say you come back tomorrow, you reconfigure your processor, and you turn off avbuf. You don't need it anymore; it's not in your design. You don't have to rebuild the entire system from scratch. You regenerate these files, the build goes through and says: I don't have avbuf anymore, but I still have all the rest of the components, so I only need to relink the libxil library without avbuf. Now I have an updated version of the component. It becomes highly configurable based on the system device tree and the Yocto Project distribution configuration.

If we look at the actual bare metal configuration file, and again this is the MicroBlaze one, you'll see that we point to the device tree file specific to that domain. We point our embedded software at a specifically named machine, which is very Xilinx-specific and wouldn't be the general case. We incorporate that microblaze configuration file that describes what the CPU is, and we change the system default tune to be MicroBlaze. Notice there is no setting in here that says "this is a different machine"; you won't see MACHINE equals something else. All of our configurations use a single machine, zynqmp-generic. What they do is change the default tune: I'm still zynqmp-generic, I still have the same machine-wide settings, but I have a different configuration for my processor, so I change my DEFAULTTUNE to MicroBlaze. If I'm on the Cortex-R5, you'll see a DEFAULTTUNE of cortexr5; on the A53, it'll be a DEFAULTTUNE of cortexa53.

That's slightly different from the way the Yocto Project normally does it, but it really simplifies things when you have a single system on chip like this, and it's well within the parameters of the Yocto Project, because the default tune is considered one of the machine settings. We have some other things in here, and, like I said, TMPDIR has to be specified; each multiconfig needs its own TMPDIR. We have a special distribution called xilinx-standalone, just for the bare metal components. But if this were Zephyr, the distro would be Zephyr, or for Linux a Linux distro, or FreeRTOS, or whatever. Collectively, this overrides the default file that's Linux-specific with the specific things for bare metal, in this case, or any other operating system.
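Put together, a generated bare metal multiconfig looks something like this (a reconstruction; the file name, ESW machine name, and paths are illustrative):

    # conf/multiconfig/microblaze-pmu-firmware.conf (reconstructed sketch)
    CONFIG_DTFILE = "${TOPDIR}/conf/dtb/microblaze-pmu.dts"   # domain-specific device tree
    ESW_MACHINE = "psu_pmu_0"                                 # Xilinx-specific ESW target name
    require conf/microblaze.conf                              # the generated tune file
    DEFAULTTUNE = "cpu0"                                      # same machine, different tune
    DISTRO = "xilinx-standalone"                              # bare metal distribution
    TMPDIR = "${TOPDIR}/tmp-microblaze-pmu-firmware"          # each multiconfig needs its own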
So if we go back and look at what was originally generated, you can start to see on the right-hand side the actual files. You've got the cortexa53-zynqmp-linux.conf file; that's your system-wide definitions, but from the view of Linux. You then have the dtb directory, which has all of the DTBs for the various heterogeneous components in the system. You have your microblaze.conf, which lists all the MicroBlazes available, because it's a dynamic processor. You have all of your multiconfigs, which override the default values, and only the items that are overridden are in there. And then there's an include directory specifically for those bare metal components, because of the way we implemented our bare metal; it's, again, configurable on a dynamic basis.

So then we get into: how does it actually work? Now that we have the system configured, we need to be able to actually do something with it. There's a series of overrides that say: if I'm building my first stage boot loader, I depend on the following things. You'll see it doesn't depend on anything, but that's the current context. For the first stage boot loader, I actually depend on something else: building the FSBL from the bare metal configuration. This is where that mc::cortexa53-zynqmp-fsbl-baremetal:fsbl-firmware:do_deploy comes in. In other words, I have to run the do_deploy of the fsbl-firmware recipe in the cortexa53-zynqmp-fsbl-baremetal configuration. That runs first; that builds the software. I don't care how it's built; it just goes and does it. Then we have the deploy directory setting that says: I'm going to go to this directory, I know the component is there because my dependency has run, and I'm going to copy that binary into my Linux context and package it up as a Linux package, as well as use it to generate the system image. We do the same thing for the R5 version of the FSBL, as well as the PMU firmware. So roughly speaking, it's just a very, very simple dependency, and you don't have to use it across multiconfigs, but you can.

The way it was implemented, we wanted to make sure that not only the depends, but the mcdepends, the deploy directory, and the image name are all overrideable. Because maybe I come in and say: you know what, I don't want this thing building my FSBL, because it was given to me by my hardware vendor. You buy a commercial off-the-shelf board, and they say: you have to use this firmware or I will not support you. I don't want to build it in this heterogeneous system; what I want to do is put it in a known directory location.
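In recipe terms, the consumer side follows a pattern roughly like this (a sketch; the variable names and paths are illustrative, though mc:... dependencies and the [mcdepends] task flag are standard BitBake):

    # Consumer recipe fragment, running in the Linux configuration (sketch)
    FSBL_DEPENDS ?= ""
    FSBL_MCDEPENDS ?= "mc::cortexa53-zynqmp-fsbl-baremetal:fsbl-firmware:do_deploy"
    FSBL_DEPLOY_DIR ?= "${TOPDIR}/tmp-cortexa53-zynqmp-fsbl-baremetal/deploy/images/zynqmp-generic"
    FSBL_IMAGE_NAME ?= "fsbl-zynqmp-generic"

    DEPENDS += "${FSBL_DEPENDS}"
    do_install[mcdepends] += "${FSBL_MCDEPENDS}"

    do_install() {
        # The multiconfig dependency has already run, so the binary is
        # sitting in the other configuration's deploy directory.
        install -Dm 0644 ${FSBL_DEPLOY_DIR}/${FSBL_IMAGE_NAME}.elf \
            ${D}${base_libdir}/firmware/fsbl.elf
    }

Those four variables are exactly the knobs the vendor-binary case overrides.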
Then I can say: I don't depend on anything, I don't have a multiconfig dependency on anything, but my deploy directory is the magic directory I put it in, and the name of the magic file is this, and you can enter the specific name and override it. At that point your consumer is generic: it can build the component, or it can just use a binary. Or it can start you down the path of saying, hey, I don't have anything here, and generate an error message that says: you need to provide this thing, because I don't know how to build it. So you can start to become more user-centric with this. The rest of it is just the do_install, how do I create a package, and the do_deploy, how do I put it into the deploy directory so that when I create a RAM disk image or a firmware image, I can put it all together.

But let's say we are going to use the Yocto Project to actually build it. Then we have to have that provider; that's the other side of the multiconfig. The recipe there becomes pretty simple as well. The only coordination we have to have between both sides is: what is the name of this component? That's where this image name comes in. Then we have the standard do_configure and do_compile. There is no do_install in this case, because we're not going to create a package on the multiconfig side of the world. All we have to do is deploy the binary we generated into a known location, and that's what the do_deploy ends up doing. So the provider becomes very, very simple.
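A provider recipe in the bare metal configuration can be sketched like this (illustrative; the real fsbl-firmware recipe is more involved):

    # fsbl-firmware recipe, built in the bare metal multiconfig (sketch)
    SUMMARY = "First Stage Boot Loader for ZynqMP"
    LICENSE = "MIT"
    LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"

    DEPENDS = "libxil"                    # hardware description library, built to match the SDT
    IMAGE_NAME ?= "fsbl-zynqmp-generic"   # the one name both sides must agree on

    inherit deploy

    do_compile() {
        # Build against the generated libxil; the actual build steps are elided here
        :
    }

    # No do_install and no packaging: the only output is the deployed binary
    do_deploy() {
        install -Dm 0644 ${B}/fsbl.elf ${DEPLOYDIR}/${IMAGE_NAME}.elf
    }
    addtask deploy after do_compile before do_build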
So then you end up with a build that looks something like this, and this is a simplified view of a very complex build. To give you an idea of how complex it can be: if I'm doing a Linux build with the FSBL and the PMU firmware, and I've also got an R5 system in there, I could be building on the order of somewhere between 10,000 and 15,000 tasks, where the Linux side alone might only be 6,000 tasks. It dramatically increases the amount you're building, but it also centralizes it into one place where you can control it.

Just walking through this map: I, as the automated system, say I want core-image-minimal; I don't care how you build it. core-image-minimal says: I depend on all sorts of things, for instance Linux and Bash. Those depend on the toolchain components; the toolchain components depend on glibc; it all just builds out and I get those components. At some point it's going to say: I require the FSBL. The FSBL says: I'm configured for this multiconfig, so I'm going to depend on the multiconfig; I don't know how to build it, it's not Linux. The multiconfig then takes over, and just like the image recipe, the firmware says: I need a toolchain. Great, the toolchain builds. I need newlib, so it builds newlib. newlib says: I need a hardware description library, so it builds libxil. libxil says: I'm configured this way, I need all of these fragments, so I'm going to build avbuf, axipmon, canps, et cetera, et cetera. Then I link those together and make that available as my libxil, and now I can build my FSBL firmware. Once it's built, it's passed back to the Linux side as just a binary; the Linux side packages it, and it's ready to go. The PMU firmware comes in on a completely different processor again. The same thing goes on: it goes through the whole process, builds the toolchain, builds newlib, configures libxil, but this time from the viewpoint of the MicroBlaze.

It comes back, the final system builds everything with all the dependencies in place as part of core-image-minimal, constructs the items, puts them together, generates your firmware image, generates your disk image, and now you have a full system build out of the end of that.

What we end up with, then, is a system that's capable of building heterogeneously, and capable of building in multiple ways. But it may not have been the best design, and as the one who designed it, I can say there are problems with it. So let's talk about the lessons quickly.

Realistically, building a multiconfig system is really handy, especially if you have a small team of system engineers: the ones who are capable of system software, the ones who put the system together and have to support it. However, it requires additional machine resources. You suddenly go from a build where you're building one toolchain with one set of user space, and it works on a reasonable, say, 16-core machine with 8 GB of RAM or something like that, no big deal. Now you've added a second toolchain build, and a third, and maybe a fourth or a fifth. Your system gets more and more complex, and it takes longer to build.

One of the problems we did hit with this design, with just the three cases I've shown here, is that we roughly doubled the number of tasks being run. We're now building three toolchains, and that made the disk space required for the build process itself about two and a half times larger than it was for the Linux build alone. The actual build time was considerably longer as well: we went from an hour or hour-and-a-half build to about a two-and-a-half-hour build. And when something goes wrong in the middle of the build, now the entire system build fails. You have to put that in context.

The counter to that, though, is that I can now share my multiconfig build with my embedded software engineer. If I share it with them, I can say: you don't have to run core-image-minimal. You can directly build that FSBL firmware, skip all of the Linux side of things, build just your component, and make sure it works; then, as the system engineer, I'll make sure your component ends up in the Linux side the way it's supposed to. That actually shortcut quite a bit of back and forth in some of our debugging, because normally what happens is we do the integration, something doesn't work in the integration, and then I have to explain back to our firmware team: how did I try to build it, what was my environment like? Now I can actually share the exact environment I was on.
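For reference, letting the firmware engineer build just their piece is standard multiconfig syntax, something like:

    # Build only the firmware component, in its own configuration
    $ bitbake mc:cortexa53-zynqmp-fsbl-baremetal:fsbl-firmware

    # Or build the whole system, firmware and all, with one command
    $ bitbake core-image-minimal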
So that's the good part; the downside is that it requires more machine resources.

The other thing you may have noticed earlier is that the output of all of those Lopper steps was a binary device tree. Generally speaking, that's great: we want binary device trees because they're already compiled, and we don't have to take additional steps. However, as the firmware team was diagnosing things, and as the Linux team was trying to figure out why a device tree wasn't working properly while working on the Lopper pieces, we found a problem: when you compile a device tree from source to binary, you lose data. It's not a lot of data, but you lose a little bit of it, a little bit of the formatting, and that can make the debugging process much more difficult. So one of the things we're working on right now is that the output of Lopper, at least for the Linux side of things, is not going to be a DTB; we're actually going to output a DTS. Then during the Linux compilation we actually compile it, and that gives the system engineers the ability to say: I need to inject a couple more device nodes into this thing, or I want to see the source I started with versus the binary that came out at the end, to make sure there isn't, say, a compiler error. That was something unexpected that we definitely ran into.

Another lesson: using binaries from other build systems is absolutely mandatory in this system. It was part of the original design to be able to do that, but after we went through all this, it turns out that the Vivado component I was talking about actually has its own firmware build engine in it, and as somebody who has always built this stuff from source code, I didn't realize that. So we need to add a way into the system so that during the configuration of the machine you can say: no, no, no, I don't ever want you to build the firmware component; I always want you to use the version out of Vivado, or the version my vendor gave me, or whatever. This is more of a configuration detail than an implementation detail, because we did implement the ability; we just didn't expect that the default for some users is going to be "no, I always have to use binaries."

The other thing is that always generating configuration files can actually be very problematic. For example, I have to go to every user of the system and say: OK, run this command, get your dt-processor, run it, your output is going to be here; now make the following changes to these three lines because we're debugging something. That's very, very hard to work through. So instead of configuring in the project, and this is under the next steps, we actually want to start generating these files in a more Yocto Project-native way. That means instead of that cortexa53-zynqmp-linux file included from local.conf, we're going to generate a proper machine file, and that machine file will be specific to the particular piece of hardware the user has defined. By having a machine-specific file, all the users can just say "my machine is xyz"; they've been given a name, the layer can be shared with all the other users in their system, and the only people who have to run dt-processor are effectively the system engineering team, not every developer who uses the Yocto Project.

The same thing goes for the multiconfigs, because the multiconfigs can also be stored in a layer already. The device trees, again, we're generating them on the fly as part of the build configuration; let's instead generate them and store them in the layers, which again makes it easier to share with everybody else. Then we get into: what other things do we need to generate for this to work properly? This, again, goes back to moving to a machine.conf. I already know we need to generate better QEMU boot flags; I can run QEMU through this mechanism right now, but there are certain capabilities that are not enabled properly. And then it's just workflow cleanups.
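The generated machine file we're moving toward would look something like this (hypothetical; the machine name, layout, and QB_MACHINE value are all illustrative):

    # conf/machine/myboard-zynqmp.conf: generated once by the system team (sketch)
    require conf/machine/zynqmp-generic.conf

    # Everything dt-processor derived from this board's system device tree
    require conf/cortexa53-zynqmp-linux.conf

    # Better QEMU boot flags are one of the items still to be generated
    QB_MACHINE = "-machine xlnx-zcu102"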
A big chunk of this is that we need a better way not only to generate the files but also to share them with other users, things like that. So there's a lot of work to do from a Xilinx/AMD perspective. This is at a proof-of-concept level today: we can use it, you can build a system with it, but there are problems. We're hoping that within the next year, worst case two years, this is going to be the standard workflow, and you won't need the current components to build the system; everything will be system-device-tree based. But we'll see what actually gets implemented based on bugs.

I ended a few minutes early, so if there are any questions, I'm happy to take them. Otherwise, you can see me afterwards.

Yeah, so the question is how difficult it is to set up the first time for, say, a systems integration engineer. The initial design was that you could follow a five- or six-step process and just be able to do it; anybody could do it. The reality is it's not that simple with the way it's currently implemented; it's probably 20 or 30 minutes of trying to understand what you're doing, things like that. The goal is to get to the point, for the Xilinx processors, where it really will be a four- or five-step process: get the XSA from Vivado, or get the system device tree from Vivado, one of the two; then run your device tree processing script, which is really just a setup script; store the results off into a layer, or maybe it even generates the layer; and then share that layer with everybody doing the work, as opposed to everybody rerunning this stuff over and over.
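Concretely, the intended flow is something like this (the dt-processor flags shown are hypothetical; the real interface may differ):

    # 1. Export the XSA or the system device tree from Vivado
    # 2. Run the device tree processing script once, as the system team
    $ dt-processor --config conf --dts system-top.dts --output meta-myboard/conf
    # 3. Share the meta-myboard layer with every developer
    # 4. Everyone else just builds
    $ bitbake core-image-minimal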
So the next question is about parallelism. Yeah, I didn't mention that, but that's one of the big advantages of the Yocto Project: this is all parallelized automatically. We don't have to do anything special to say "build all the firmware at the same time" or anything like that. Just the fact that core-image-minimal depends on the FSBL and the PMU firmware means it's going to go out and build those things, and as long as there are machine resources and processing tasks available, it will build them automatically. In a case where, say, I'm doing more of a Xen configuration with multiple Linux systems building at the same time, if it turns out the compilers are identical, it's not going to build four copies of the compiler; it's going to build the compiler once and use that copy four times. So there are a lot of advantages to the Yocto Project around parallelization, as well as reuse of components that independent build systems wouldn't be able to reuse. I think that's a good thing.

Yes, let me pop back there quick; the question is about the default values in those recipes. Yeah, so there are defaults defined inside the consumer and the provider, but the defaults probably won't be used; it's really the items that were generated by that configuration tool. The idea is that we expect them to always be overridden unless you're doing Yocto Project development, in other words, developing the layer or recipe, versus being a system user. It's designed to be overridden from the beginning, but overridden so that they are the same values for all the components.

And I think with that, I'm out of time. Thank you, everybody, for attending. If you have any other questions, come and find me afterwards. If you're online and have any questions, you should be able to find me in the Yocto Project channels, and I'm happy to answer them there. Thank you.