Hello, welcome to Building Bare Metal Toolchains: crosstool-NG and the Yocto Project. My name is Mark Hatle. I'm a senior software engineer with Xilinx, and I've also been involved with a lot of Yocto Project work in the past. Recently I was tasked with creating a bare metal toolchain for one of our products. In the past these had always been done with crosstool-NG, or they'd been sourced from another location such as ARM. What we were finding is that, due to limited resources at times, we needed common bugs and common features across the Linux toolchains, the bare metal toolchains, and even the non-Linux, non-bare-metal toolchains. So the first step in trying to unify all the software was to see whether we could move from crosstool-NG to a Yocto Project style build, and whether this would actually help us.

What I'm going to quickly go over is: what crosstool-NG is, if you don't already know — if you do, just bear with me and we'll get into the meat of it a bit after that; what the Yocto Project SDK builder is and how it can work with bare metal toolchains; then, the part I think really matters, my experiences with doing this and how we arrived at the Yocto Project configurations for the bare metal toolchains; and finally, recommendations on whether you should do this at all, and if so, when the right time is to consider a switch.

First off is crosstool-NG. This is what a lot of people are using for bare metal toolchains. Just to be clear, when I talk about bare metal toolchains, these are non-operating-system-specific toolchains primarily used for bootloaders, firmware, things like that. crosstool-NG is an excellent way to get started building bare metal toolchains, as well as toolchains targeting Linux, such as for the Raspberry Pi. The latest version of crosstool-NG is 1.24. It's very easy to use: it has a menuconfig, so if you've ever configured a Linux kernel you already know what the menuconfig system looks like, and you invoke it using ct-ng menuconfig. It also has a very large list of example toolchains, which is great for beginners who just say "I need an ARM64 toolchain" or "I need a PowerPC toolchain" — they can look at the examples, choose one, and make any minor modifications. It's also very, very good at doing reproducible source builds of toolchains, so if you have a problem, or you're testing a feature, it's very easy to keep iterating, rebuilding toolchains, and knowing that you're getting a clean rebuild.
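As a rough sketch of that crosstool-NG workflow (the sample name arm-unknown-eabi is illustrative; check `ct-ng list-samples` on your version for the actual list):

```sh
# After installing crosstool-NG, in an empty working directory:
ct-ng list-samples        # show the shipped example toolchain configurations
ct-ng arm-unknown-eabi    # start from a bare metal ARM sample (name may vary by version)
ct-ng menuconfig          # kernel-style menu for making minor modifications
ct-ng build               # reproducible source build; installs under ~/x-tools by default
```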
The other very nice thing about crosstool-NG is that, because it's based on the GNU GCC framework, everything it builds is runtime relocatable. That means if I build my bare metal toolchain for /opt/baremetal and my user installs it into /opt/myproject, it's still going to work fine without any changes. The one downside of crosstool-NG, at least in my experience, has been that the binaries become very, very specific to the hosts they were built on. So what you end up doing is using an engineering practice called lowest common denominator: you find out what operating systems your users are going to be running, and build on the one that's most compatible with all of them. For instance, if I've got users on Red Hat 7, Red Hat 8, Ubuntu, and Debian, I would probably do my builds on Red Hat 7, because it's the oldest of the set; if I build there, I know it's going to work on all of them. The other issue with crosstool-NG: it absolutely can build for Cygwin and other host operating systems, but it requires that you already have a Cygwin compiler. It's not going to build one for you as part of the build process.

Here's an example of what the configuration looks like. Again, if you've ever done a Linux kernel configuration, you already know these windows and how to operate them. It's a very simple text configuration; you can use a graphical configuration if you want, but text configuration is the standard. The output of this is a series of feature settings — CT_ followed by some option: whether to use pipes, what flags you want to add to the system, what architecture you're targeting, and so on. A lot of these flags are specific to the architecture. There's a multilib flag: do I create only the one library I'm targeting, or do I need multiple versions? In the work we've done at Xilinx we have to create multiple versions, because we don't know exactly what our customers' configuration is going to be, so we always generate multilib toolchains and the customer can choose the right version for their targeted application.
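For reference, the generated .config is just a flat list of those CT_ settings. A hedged fragment — the exact symbol names vary between crosstool-NG versions, so treat these as illustrative:

```sh
# Fragment of a crosstool-NG .config (illustrative)
CT_ARCH="arm"                  # target architecture
CT_TARGET_VENDOR="xilinx"      # vendor string baked into the target tuple
CT_USE_PIPES=y                 # the "use pipes" option mentioned above
CT_MULTILIB=y                  # build multiple library variants, not just one
CT_ARCH_FLOAT="hard"           # an architecture-specific flag example
```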
When you actually run crosstool-NG and start the build process, you get output that looks something like this: it starts performing various sanity checks, various compilations, and everything else; it finally gets to the end, finishes, writes out the files, and tells you where it wrote them. From that point on you can just use the toolchain, or package it up and give it to somebody else.

So now let's switch over to the Yocto Project. How is this different? The Yocto Project is not just focused on toolchains. Toolchains are actually quite a small part of the Yocto Project, but a very necessary part. The Yocto Project itself is really a full distribution build environment. Everything starts with a distribution configuration; you have local project configurations; and then you have machine, or target, configurations. The combination of these configurations defines what you want to build and how you want to build it, and then it does it through a standard mechanism. The Yocto Project, while Linux is its main focus, is not actually Linux specific: you can absolutely build bare metal, FreeRTOS, OpenAMP — we've seen other, sometimes fairly interesting, things built with it. The outputs of the Yocto Project are primarily runtime images for users — disk images — but they don't have to be. You can build an SDK, which is the traditional toolchain that crosstool-NG would be creating, or you can build what's called an eSDK, which is not only a toolchain but also the full build environment, so you can make modifications to it, or to the applications built with it, and regenerate them. For the purposes of this presentation I'm really only talking about the SDK, because it's the mapping that matches closest to what crosstool-NG can do. So, as I said before, the Yocto Project SDK is that mapping.

There are really a couple of different ways the Yocto Project SDK itself can be defined. There's a targeted SDK: I say, I'm going to build a Linux operating system, I want all of these things in it — oh, and build me an SDK that will let my users create applications for that. You're not really defining the SDK; you're letting the system define it. Then there's a second kind, which is the one I'm talking about, called a defined SDK. This is where the SDK is specified component by component: I must have binutils, I must have GCC, I must have newlib, I must have libgloss, things like that. And of course the SDKs can be multilib enabled — the Yocto Project and crosstool-NG handle multilibs a little differently.

Actually, there was a question in the chat; let me answer it quickly, because this is a good time to do it. What's an example of modifications you can make with an eSDK? For bare metal, I'm really not sure there's a good use case. But in the general case, the eSDK would allow you to, say, change newlib and then regenerate your toolchain. Or if you're doing a Linux build of an application — say I want to make a configuration change to my security modules — you'd be able to make that change and move on.

The output of the Yocto Project SDK is self-extracting. It's a .sh file, but it's really a self-extracting shell script. The nice thing is that inside the extracted archive is an entirely self-contained SDK environment. When I say self-contained, that means it provides its own glibc, its own runtime components — everything required for those applications and libraries to work properly. The SDK builder can also build cross-compilers for Cygwin and other environments as needed. So if you're building an SDK to target Cygwin from your Linux machine, you don't have to provide a Cygwin compiler; all you have to say is "I want the output to be Cygwin", and it will say, oh, I also need to build a Cygwin compiler. Those two items eliminate the downfalls I saw in crosstool-NG: I don't have to focus on that lowest common denominator anymore, and I don't have to have a magic Cygwin compiler somewhere to point crosstool-NG at.

However, there is a downside to what the Yocto Project does compared to crosstool-NG, and that's the automatic relocation. The SDK itself, because it provides all of the environment components, has things referenced and configurations linked to the installation location. If you change that installation location, it can no longer find the glibc version it came with, or some of the other runtime components. So the sacrifice is that you lose automatic runtime relocation in the normal Yocto Project SDK. For a lot of users this really isn't a big deal, but I do know that some corporate users like to have IT install into an NFS share, and then the users mount that NFS share in random places. If everybody mounted that NFS share at the exact path of the original install, it would work just fine; it's really only when the path changes.
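To make that concrete, a minimal sketch of installing and using one of these self-extracting SDKs — the file and path names here are hypothetical examples, and the path given to -d is the location everything gets pinned to:

```sh
# Install the self-extracting SDK; -d selects the install directory
sh ./xilinx-standalone-newlib-x86_64-aarch64-toolchain-2020.1.sh -d /opt/xlnx-sdk

# Each shell sources the environment file; $CC etc. then point at the cross tools
. /opt/xlnx-sdk/environment-setup-aarch64-xilinx-elf
$CC -o hello.elf hello.c
```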
So, an example of the Yocto Project build environment. The first thing we always do is source oe-init-build-env; this sets up the build environment. We can then edit our configuration files under the conf directory, primarily conf/local.conf, or we can pass some of these things in on the command line. In the example here, I'm passing in that I want to build a xilinx-standalone distribution, which is something I defined; I want the machine to be my MicroBlaze toolchain definition; and then I'm going to build my meta-toolchain, which is that defined SDK. The result is what you see in the window on the right: a whole bunch of things building all at once. You may not be able to see it on your screen, but there are about 7,000 tasks it's going to execute. If you compare this against crosstool-NG doing exactly the same build, crosstool-NG does significantly less work. Some of this is because of the way the Yocto Project is defined; some of it is because there are additional safety systems built into the Yocto Project to prevent contamination between multilibs.

For that magic xilinx-standalone configuration, I pulled out some of the relevant configuration lines just to show you what they are. The first thing we want to do is define what this distribution is, and from that definition you can see that it's xilinx-standalone. We've got a version number so we can change it if necessary. We have a target vendor; that way we know this is tagged specifically for Xilinx — if you're another company, you just put your company name in, or if it's a product, your product name. We set our libc to newlib, because for bare metal that's generally what you end up using, even if you don't actually use newlib in your final application. And we wanted a very specific name: the SDK version is embedded in the name of the generated SDK file, and we wanted to be very clear this was called xilinx-standalone, so there wasn't any confusion with anything ARM might ship, or that we've shipped in the past. That's where those variables come from.

I also needed to say that we need specific static libraries. By default, the Yocto Project generally does not install static libraries into an SDK, primarily for license contamination reasons. On most full operating systems you don't really want to use static libraries, but that does not hold true for bare metal — that's where the libc dependencies come in. We also need to clear some normal defaults; I won't go over those too much. If you're really curious why we have to do that, ask me in Slack afterwards or on the Yocto Project mailing list, and I'm happy to explain it in more detail. And finally, for MinGW, we found an issue with bare metal toolchains where we have to very, very specifically say: I must have the pthreads library for this to work from MinGW. That's where the very last line comes in. It's really a workaround for what's probably a bug, and those bug workarounds will go back upstream as I continue my work.
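Putting those pieces together, a hedged sketch of the invocation and the distro configuration just described. The machine name and the staticdev package names are illustrative reconstructions, not the verbatim meta-xilinx-standalone content:

```sh
# Set up the build environment, then build the defined SDK
. ./oe-init-build-env build
DISTRO=xilinx-standalone MACHINE=microblaze-generic bitbake meta-toolchain
```

```sh
# conf/distro/xilinx-standalone.conf -- illustrative reconstruction
DISTRO         = "xilinx-standalone"
DISTRO_VERSION = "2020.1"
TARGET_VENDOR  = "-xilinx"
TCLIBC         = "newlib"
SDK_VERSION    = "${DISTRO_VERSION}"   # embedded in the generated SDK file name

# Bare metal needs the static archives that SDKs omit by default
TOOLCHAIN_TARGET_TASK_append = " newlib-staticdev libgloss-staticdev"
# (The MinGW pthreads workaround mentioned above would be one more line here.)
```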
The second piece of it, then, is the MicroBlaze toolchain configuration: how do I actually define a configuration that has all of the multilibs in it that I need? This configuration is very much an extreme case — not a typical Yocto Project thing — but it's something we needed, because for the Xilinx MicroBlaze I think we have 48 multilibs defined right now, and if you actually do the combinatorics on it, well over a hundred are possible to configure. That's all because the MicroBlaze is an FPGA soft processor — "soft" as in it runs on the FPGA fabric.

The first thing we have to do is set a variable called MULTILIB_GLOBAL_VARIANTS. These are all of the multilib variants that are allowed in the system. By default, MULTILIB_GLOBAL_VARIANTS is very simple: just the 32- and 64-bit variants. But that's not what we're defining in this case. What we're defining is things like mble, mbs, bs, mbp, and on and on. And you can see the last one there, which is quite complex: mb for MicroBlaze, le for little-endian, m64 for 64-bit, bs for the barrel shifter, mf for a hardware floating point unit — and I don't even remember what the pd part at the end is. So you can see there are quite a few things.

For each of the configurations, we then need to define exactly what our default tune is. If the tune is already defined within the Yocto Project, I can just set DEFAULTTUNE and leave it at that; I don't have to set anything else. MicroBlaze, though, is unique, because it is a configurable processor, and there really are no default tunes defined — so I have to manually define each one. In this example you can see the base one is simply microblaze, and all we're saying is: this is a big-endian MicroBlaze with no additional features, and that's it. Then we define a little-endian version; then a barrel-shift version, which is really big-endian with barrel shift; and if you keep looking down the file, there's a little-endian barrel shift, and on and on.
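A heavily trimmed, hedged sketch of what that tune and multilib setup looks like. The variant and feature names below are illustrative — the real MicroBlaze tune file defines far more — and the override syntax is the Zeus-era style:

```sh
# Illustrative MicroBlaze tune fragment -- not the verbatim meta-xilinx file
MULTILIB_GLOBAL_VARIANTS = "mble mbbs mblebs"   # default is just the 32/64-bit pair

DEFAULTTUNE ?= "microblaze"

# Base tune: big-endian MicroBlaze, no optional units
TUNE_FEATURES_tune-microblaze    = "microblaze bigendian"
# Little-endian variant
TUNE_FEATURES_tune-microblaze-le = "microblaze"
# Big-endian plus barrel shifter
TUNE_FEATURES_tune-microblaze-bs = "microblaze bigendian barrel-shift"

# The configuration then enables each variant and maps it to its tune:
MULTILIBS = "multilib:mble multilib:mbbs"
DEFAULTTUNE_virtclass-multilib-mble = "microblaze-le"
DEFAULTTUNE_virtclass-multilib-mbbs = "microblaze-bs"
```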
So with the Yocto Project, what actually changes compared to crosstool-NG, or compared to how we used to build this stuff? One of the biggest things we found is that crosstool-NG had a very different set of default arguments for binutils — and by default arguments I mean configure arguments, the way patches are applied, things like that. So we had a lot of work to do to reconcile this. It turns out in the end that this wasn't really a crosstool-NG issue; it was whoever defined that configuration for crosstool-NG. They set a whole bunch of things, and the reason they set them has kind of been lost to history: either they don't work for Xilinx anymore, or it was an open source person and it all made sense at the time, but it was never really tracked why a specific option was set. The Yocto Project has been very good at reevaluating these options over time and removing the ones that don't make sense anymore. And we had exactly the same behavior with GCC.

One of the things the Yocto Project typically does is build GCC multiple times, once for each multilib. For our bare metal we didn't want that behavior — for 48 multilibs, that's insane. So we actually had to put a patch into the system that builds GCC once, turns on the multilib configuration option in GCC itself, and creates symlinks for all the alternative names. That way we preserve the Yocto Project interfaces, but we only build GCC once. It doesn't so much save build time, but it definitely saves disk space; it's significantly smaller. We also had to re-enable some previous options in GCC for the newlib configuration. The default newlib configuration in the Yocto Project, for whatever reason, isn't compatible with some of the things we've done in the past, and we needed to keep that compatibility. I'm unfortunately not a newlib expert by any means, so I'm going by what other people have told me — they basically said, no, no, no, you have to have this option set or it's not going to work properly. And for newlib's libgloss, we didn't have to adjust the defaults.

We also had to add some code to the Yocto Project to deal with multilib conflicts. This is code I do intend to submit back to the Yocto Project; I'm working on updating to the current version right now, and as that work gets further along, I'll actually have patches that can go to master. There was also an issue where libgloss and newlib just assumed that, because they weren't multilib, a single dependency was fine — but the Yocto Project has multilib dependencies as well, so I had to teach libgloss and newlib about those multilib dependencies.

Let's take a quick look at the differences, based on everything up to this point. crosstool-NG is very easy to configure. It's functionally limited to toolchains. It has host operating system dependencies, but its runtime relocation is great; it does exactly what a lot of people want. MinGW is definitely there, but it requires that external compiler. On the Yocto Project side, there are sample configurations for Linux, but bare metal is very limited, so I had to figure most of this out myself — I would not call it easy. It's capable of doing a lot more than toolchains, but for the purposes of this comparison they're basically functionally equivalent. The Yocto Project does have that host operating system separation, but at the expense that the installed toolchains are no longer runtime relocatable — they're install-time relocatable, but not runtime relocatable. And finally, it will automatically generate anything needed for the MinGW output.

So let's get into some further experiences: transitioning from crosstool-NG to the Yocto Project SDK. I'll recap why we had to do this — it really was unifying the source code. A lot of people questioned it when I first brought it up inside the company: why do we have to do this? Bare metal is a very different use case from the Linux use case. But very soon after the project started, we ran into a really good example of why this helps. We had somebody trying to debug with the bare metal toolchain; the debugger hit a certain instruction and failed, and they really didn't know why. With the Yocto Project bare metal toolchain, the instruction did not fail. It turned out to be a combination of Yocto Project configuration and a patch in the Yocto Project that fixed a Linux bug. It just so happened that that Linux bug also affected bare metal, but for whatever reason it had never been backported to the Xilinx crosstool-NG version of GDB. So by ensuring we're using the same source code, generally speaking, we're both bug and feature compatible now; if somebody fixes one problem, they're probably fixing it in both systems.
It really proved the point that we saved a bunch of engineering time by unifying our source code base. But the transition was not painless. We anticipated it would probably take two, maybe three weeks; it took three months. Now, to be clear, this wasn't three months of me working on it constantly, but it was three months of iteration. It took two to three weeks to do the first version. Then I handed that off to the test team and the team doing bare metal, and it passed. Then people started to use it, and they found certain things that were not working properly. So we iterated, again and again and again, and by the time we had everything stable — or better than the previous version we'd started with — it was about three months of iteration.

Now that we've transitioned to the Yocto Project, I don't expect anything major from a maintenance standpoint, or anything more difficult than crosstool-NG. But I did track down one of the reasons why some of this transition and maintenance was difficult: people didn't understand why certain multilibs were enabled, and people didn't understand why certain arguments were set — in crosstool-NG primarily, and a little bit on the Yocto Project side. On the Yocto Project side, at least, I was able to ask the people who made those decisions: hey, why is this thing turned on? But on the crosstool-NG side — and I'm not talking about the crosstool-NG community, just to be clear; I'm talking about the people who did the integration five, six years ago at Xilinx — some of them were not there anymore, and some of them said, oh, because we were told to do it. They really didn't understand why those options were set. If the components I was using had come from the crosstool-NG community, I actually expect their support and help would have been a lot better; internally, though, it was very difficult.

There was also a belief that, because the toolchain worked, the only thing they had to do was upgrade the source code. They didn't upgrade crosstool-NG itself. In effect, the version of crosstool-NG we were using was about two and a half to three years old, and that added complications to the configurations and to the transition. These are all things you have to be aware of, and it's really easy — with crosstool-NG, or the Yocto Project, or even your own build system — to simply say: it works, I don't want to touch it anymore, I'm only using it for bare metal, I don't need the latest toolchain. And then three years later you do have to upgrade, for a new feature or a bug fix or something, and you find you don't have what you need for it to work properly. So definitely something to be aware of.

Again, the initial goal for ARM — I've mostly talked about MicroBlaze up to this point — was simply to make sure we were compatible. One of the things required for our ARM work, though, was a runtime-relocation-capable system. What we ended up doing was creating a script that wrapped all of the executables in the toolchain, found out the runtime location of the script, and then called the actual binary with all of the environment variables set properly, along with the other components. So runtime relocation is possible, but I don't recommend it; it really is a hack.
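A minimal sketch of the kind of wrapper just described, under the assumption that each real binary is renamed with a ".real" suffix and this script is installed under the original name; which environment variables need re-pointing depends on the SDK, so treat this as illustrative:

```sh
#!/bin/sh
# Relocation wrapper sketch -- not the actual Xilinx script.
# Discover where the toolchain is mounted *right now*, from this script's path.
HERE="$(cd "$(dirname "$0")" && pwd)"
TCROOT="$(dirname "$HERE")"                 # assumes a <root>/bin/<tool> layout

# Re-point the runtime environment at the current install location...
export LD_LIBRARY_PATH="$TCROOT/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export GCC_EXEC_PREFIX="$TCROOT/lib/gcc/"   # standard GCC relocation variable

# ...then run the real binary with the user's arguments.
exec "$HERE/$(basename "$0").real" "$@"
```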
I've got a link here so you can actually see the full version of our configurations — this is the ARM configuration I have listed. I just wanted to give you an idea of how insane, I'd call it, our configuration is, and why this is probably not the right way to do it long-term. But it was a way for us to do the transition, and now we can focus on what we actually need out of this and bring it down. What we started with was a configuration based on the work ARM provided us, with 17 multilibs defined inside GCC itself — and you can do a -print-multi-lib to find out what multilibs are available in any given toolchain. I'm not going to list them all here, but it's a big combination of standard 32-bit ARM — v5, v6, v7, floating point, no floating point — some 32-bit ARMv8, and some additional custom configurations that we found. What I went back to my team with was: OK, we have 17 things defined here. I'm a relatively new person at Xilinx, but I don't think anybody at Xilinx has ever released an ARMv5 part. Do we need the ARMv5 toolchains? And the answer was: nobody knows. So this again is the "somebody originally said it works, I'm just going to leave the configuration alone" problem. ARM provided it to us, and ARM says, well, there are still ARMv5s and v6s and v7s and everything else out there. From ARM's perspective they're needed; from Xilinx's perspective, I don't think they actually are. So the next iteration of this is that I'll work with the team to actually start filtering those 17 multilibs down. My goal is to get down to about 10 multilibs; we'll see if that happens.

The other piece was that the ARM configuration comes with switches that are simply not applicable to Xilinx products — primarily workarounds for errata, things like that. When I investigated the errata, it was very clear they don't apply to Xilinx products, or at least not to anything modern. So I went and asked: can we turn these things off? And after quite a long discussion, the answer was finally: yes, we can. It turns out I wasn't really turning them off; I was refusing to turn them on, because the Yocto Project did not use those errata switches — as far as the Yocto Project knows, all modern ARM processors don't have those errata. It's really old stuff or pre-production stuff.

The next toolchain that came in was the ARM R and M profiles — the real-time and microcontroller profiles. There were 22 multilibs defined here, and if you look at the datasheets, at least for the products I was working on, it's really simply an ARM R — I remember now, I think R5F is the one I needed to target. But why do we have all the rest of these things listed here? Again, it goes back to the same thing: do I actually need these? Probably not, but our first goal was simply compatibility, and then we'll work on bringing these down. So my goal is, within the next year, hopefully to bring these 22 multilibs down to probably five or six. One thing we did have to do was define a custom tune called arm-rm, because the Yocto Project did not already have an ARM R or M profile configuration — it had all the rest of them.
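That -print-multi-lib query mentioned above is how you audit any of these toolchains. A short sketch — the output format is GCC's real `directory;@flags` form, but the entries are illustrative, not our 17-entry list:

```sh
$ arm-none-eabi-gcc -print-multi-lib
.;
thumb;@mthumb
fpu;@mfloat-abi=hard
# Each line is <directory>;@<flags that select it>. To see which directory
# a particular set of flags would pick:
$ arm-none-eabi-gcc -mthumb -print-multi-directory
thumb
```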
ARM64 — this was easy, very easy. We wanted two multilibs defined: a regular 64-bit and a 64-bit ILP32 variant. With the way our processors are defined and everything else, the big.LITTLE A72/A53 combination tune was actually the right tune for us. This one was as simple as me saying, yep, this is what we're going to do, and everybody else saying, yeah, that makes perfect sense. Because we only had two multilibs, it's really easy to verify, really easy to do — and it doesn't have that 10-, 15-year legacy of people having made changes and configurations. So this was the easy one.

And then we've got MicroBlaze. This is where, as I said before, there are 48 multilibs defined, but actually more than 48 permutations possible in this configuration. MicroBlaze, if you're not aware of it, is functionally similar to a lot of the RISC-V stuff, in my opinion: it's primarily a software-defined processor where you can turn instructions on and off, but those instructions are defined in the compiler. All you have to do is tell the compiler: I have a barrel shift unit, or I don't have a barrel shift unit. And that's what leads to these permutations. In this case, all of the MicroBlaze stuff was Xilinx-specific. Some of it was upstream, but from a crosstool-NG standpoint — a toolchain standpoint — it really is a Xilinx thing.

So we move over to the bare metal configurations. What we defined as part of our 2020.1 release was this meta-xilinx-standalone layer, and inside of that we actually have patches. One of the patches was for binutils: we had to disable gold as the linker; we had to disable gprof and shared libraries; we did have to enable link-time optimization, because some of the MicroBlaze systems we do are very memory limited; and we had to enable static configurations as well as multilibs. For ARM, we had to make sure --enable-interwork was defined, and for MicroBlaze we had to disable the init/fini arrays with --disable-initfini-array. Both of those — the --enable-interwork and the --disable-initfini-array — actually came from the crosstool-NG configurations. The GCC side is very similar: enable newlib, turn some things off, enable other things, set some defaults. I can't say there's anything terribly important there, except the MicroBlaze one — and this was part of what took us three months to figure out: we missed the --disable-initfini-array in the GCC configuration. It was defined properly in binutils, but not in GCC; we just missed it. What we were finding is that software would run, and then when it got to exit, all of a sudden it might crash — not always, but it might. It turned out in the end it was because we had missed that one argument. It took a lot of back and forth, looking at the crosstool-NG configuration and the Yocto Project configuration, and really synchronizing them.

And then newlib and libgloss. This is probably one of the places where this work is very Xilinx specific. Xilinx implements our hardware drivers not as part of libgloss, but in something called libxil — xil for Xilinx. So we needed to build newlib and libgloss in order to build the toolchain, and then we replace parts of libgloss with libxil later on. But in order to be able to do that replacement, we had to make sure libgloss and newlib were both compiled with exactly the same options.
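Gathering the binutils and GCC switches from this section into one place, a hedged sketch of how they might land in recipe appends — the actual layer carries some of this as patches, and the exact recipe split may differ:

```sh
# binutils .bbappend -- illustrative reconstruction
EXTRA_OECONF_append = " --disable-gold --disable-gprof --disable-shared \
                        --enable-lto --enable-static --enable-multilib"
EXTRA_OECONF_append_arm        = " --enable-interwork"
EXTRA_OECONF_append_microblaze = " --disable-initfini-array"

# The gcc recipes need the same MicroBlaze switch -- the one we missed:
#   EXTRA_OECONF_append_microblaze = " --disable-initfini-array"
```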
And the one causing significant problems — because, again, I'm not a newlib expert and didn't realize it — was --disable-newlib-supplied-syscalls. Without that set, newlib will supply the basic syscalls, even though libgloss (or something like libgloss) is really supposed to supply them. So you run into a clash where things sometimes won't link, or if they do link, the wrong version of the library — the wrong version of a function — will actually be used. We learned the hard way that it's --disable-newlib-supplied-syscalls. And finally, that multilib configuration is a very simple workaround to add the dependencies. It's something I think needs to go back upstream, but until I get everything working on master I don't have a specific patch to send up, because it's possible this has already been fixed on master in the Yocto Project.

So let's take a look at the lessons learned. The more multilibs, the longer the project parse time. I tried to provide a table here showing parse time, compilation time, things like that, compared against crosstool-NG. This is a very, very simple configuration I'm comparing against; if you don't have a simple configuration, the parse time can blow up exponentially — just as a warning. MicroBlaze, which is by far the very worst configuration with its 48 multilibs, took eight minutes to parse the simple configuration, and crosstool-NG doesn't have that parsing time at all, so there really wasn't a cost there for it. The compilation time is also a lot higher, because the Yocto Project is doing more work to ensure each of the multilibs is completely separate from the others and then combining them at the end. So from a time and resource perspective, the Yocto Project is a lot more expensive than crosstool-NG if we're doing something like this. If I turn on other components of the system — components absolutely not used by any of these configurations — my parse time exploded to 48, 50 minutes, and that was before the compilation started. And this is all on a reasonably fast, modern machine: 16 cores, 32 threads, with 128 GB of RAM. This isn't a small machine. So you can see that the trade-off in time to build and parse and everything else has to be justified by having either one common Yocto Project interface, one common user interface, or one common set of source code. If you can't justify that, this work is probably not worth it.

So my recommendations: for a quick toolchain — firmware users, BIOS developers, "I want to build U-Boot", all of those things — crosstool-NG is far quicker and far easier to use. Just keep that in mind. If anybody ever asks me, should I use crosstool-NG or the Yocto Project, my first question is: what are you doing with it? And I'll probably say crosstool-NG, unless they have a very specific Yocto Project use case.

One of the other nice things you can do with the Yocto Project — and we started to look at this — is use the exact same source code in both: just use the same patches and integrate them. But what we found was that, between the configuration switches and the desire to have a common set of source code within the Yocto Project, we ended up basically dismissing the crosstool-NG source code completely, and we focused on the configuration switches to make sure they were configured properly.
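As a concrete example of that switch-level work, here's a hedged sketch of keeping newlib and libgloss configured identically. In OE-core both recipes already build from a shared newlib.inc, which is the natural home for this:

```sh
# Shared newlib/libgloss fragment -- illustrative. Because libgloss (or, in
# our case, libxil) supplies the syscalls, newlib must not supply its own:
EXTRA_OECONF_append = " --disable-newlib-supplied-syscalls"
# Since both the newlib and libgloss recipes pull in this same include, the
# two libraries end up compiled with exactly the same options.
```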
Over time we're going to get those configuration switches closer to the Yocto Project than to what the previous crosstool-NG configurations were — but it's a transition strategy. For the Yocto Project itself, I found it was the easiest route if I have to create a Cygwin toolchain. Since Windows is moving toward more of that Linux environment for Windows, hopefully Cygwin won't be necessary much longer — that's my hope. And if that's the case, then that particular advantage of the Yocto Project goes away, which I'm perfectly fine with. And finally, as I've said multiple times, the Yocto Project absolutely takes more time and more effort, especially if you are not familiar with some of these switches and configurations. But once you have it figured out, it becomes very reproducible, and you have a common set of source code and configuration switches with Linux. That may simplify your defect handling, your feature fixes, creating new features for your toolchains, integrating those new features, and even testing them. So just be aware that the penalty of the more complex system may be worth it for your use case. I believe that for my use case it is worth it; but if I were just going to build a toolchain, I would absolutely use crosstool-NG. So that's kind of where I stand, and with that, I'm open for questions. I've already answered one; let's take a look at another.

Somebody asked about Vivado and Eclipse SDKs using Yocto Project components. I am not involved at all with the Vivado side, but the bare metal toolchains I was creating here are what Vivado and our Vitis products use — primarily for building firmware, just to be clear. The Linux toolchains are also provided by my group, and they're loaded into Vivado and Vitis and come from the Yocto Project, but I don't know much about the rest of it, how it's integrated, sorry.

Question: what methods did you use to determine what to enable and disable in the configurations? We used two different methods. One, we actually queried GCC on both sides and used the various print options — the spec file print options, things like that — and started comparing. Our initial goal was to identify the configurations and see what was different on the Yocto Project side. For multilibs that was easy: it's -print-multi-lib. For the configuration options, the version output will print the configure settings, and the spec file options will print some of the spec file; we could start there and start looking at that. Two, we also looked at the configure logs from the crosstool-NG builds. By looking at those logs we could see: oh, it passed the following arguments into configure for binutils or GCC. Then I looked at the same configure logs on the Yocto Project side and did the same comparison — and back and forth we went.
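To make that comparison method concrete, a minimal sketch; the paths and target tuples are hypothetical, and the process-substitution form assumes bash:

```sh
# Compare what two toolchains were built with (paths are hypothetical)
CTNG=/opt/ctng/bin/microblazeel-xilinx-elf-gcc
YP=/opt/xlnx-sdk/sysroots/x86_64-host/usr/bin/microblazeel-xilinx-elf-gcc

$CTNG -v 2>&1 | grep '^Configured with' > ctng-configure.txt
$YP   -v 2>&1 | grep '^Configured with' > yp-configure.txt
diff -u ctng-configure.txt yp-configure.txt             # configure argument diffs

diff <($CTNG -print-multi-lib) <($YP -print-multi-lib)  # multilib diffs
diff <($CTNG -dumpspecs)       <($YP -dumpspecs)        # spec file diffs
```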
Richard — good question, I forgot to mention this — asked which Yocto Project release all of this was done against. This was against the Zeus release, the 3.0 release from last fall, the October timeframe. We are working toward master right now, but I do not yet have the toolchain components upgraded to the latest master version, so I don't know if the parsing or the compilation is any faster. I think the parsing will be faster, just from other Yocto Project work I've done, but I don't have any numbers to actually back that up; it might be the same. I don't expect it to be any slower, just to be clear.

Next question: what about Buildroot, have we used it? I have not used Buildroot in many years, and because Xilinx is focused on the Yocto Project — primarily for our embedded Linux offerings, PetaLinux — I focused on the Yocto Project. I would not be surprised if Buildroot can do a lot of what I've talked about, especially with bare metal toolchains, but I just don't have the experience to say either way whether it's usable.

Next question: describe how the bare metal toolchains can work with OpenAMP. Again, I'm not an OpenAMP expert, though I know what it is. I actually work with people I've given toolchains to who work on OpenAMP, and they said it just worked. As far as debugging goes, I don't know how they're doing it — I'm guessing it's with JTAG, but I really don't know. Most of the bare metal work I've done in the past, on both ARM processors and other CPUs, has almost always been JTAG based. I've never done any actual bare metal application development here at Xilinx, so I'm not familiar with their tooling.

Comment: the performance specs on the previous slide seem to be mostly disk bound. So, the machine I ran those numbers on has two disks in RAID 1, and it is spinning media, so it very well could be disk bound. What I was finding, though, is that even if it's disk bound, the amount of work the Yocto Project was doing in comparison with crosstool-NG was significantly more. Even if the Yocto Project was 84 minutes on MicroBlaze — in fact, I'll go back to that slide — and it was disk bound, with crosstool-NG at 32, I would not expect even the fastest RAM disks to bring the Yocto Project build much under about 50 minutes. I've done things in the past with RAM disks compared to hard drives in RAID 1, and I found that RAM is a much bigger deal. With 128 GB of RAM, I never got close to a RAM threshold on the system — I watched it the whole time. I did get load averages in the 400 to 500 range, but my RAM usage never went above about 80 GB, so I wasn't RAM bound; I was never swapping, anything like that. So I think these numbers are representative, though obviously not perfect.

Next question: why did I choose multilib over multiconfig? The plain answer was that the previous configurations in the Vitis and Vivado products were expecting a multilib configuration: one GCC binary to execute, which would then pick the correct library and the correct configuration arguments and just run. The current integration does not know how the Yocto Project environment files work; it does not know how the Yocto Project multiconfig components work. It's focused purely on: I want to run GCC, and if I pass in these options, I know the right library is going to get selected by GCC and linked in. And so that's why multilib was used over multiconfig.
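That "one binary picks the correct library" behavior is easy to demonstrate. A sketch — the target tuple and the reported directory names are illustrative, though -mxl-barrel-shift is a real MicroBlaze GCC option:

```sh
# GCC maps command-line flags to a multilib directory and links the matching
# library variant automatically.
$ microblazeel-xilinx-elf-gcc -print-multi-directory
.          # default variant
$ microblazeel-xilinx-elf-gcc -mxl-barrel-shift -print-multi-directory
bs         # barrel-shift variant (directory name illustrative)
```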
But even in the future, I don't know that I'm necessarily going to change this. Like I said, my main goal is to clean up the multilibs — 48 is insane, 22 is insane, 17 is insane; there's really no reason for that many of them. What we need to do is identify the ones that are common, the ones people actually use, and bring those numbers down. Not only is that going to make things easier from a compilation standpoint, it's going to make them easier from a support standpoint — and that's actually what I'm more worried about. If somebody files a bug, do I have to spend an hour reproducing a binary toolchain just so I can say, yeah, there's a bug there, and then pass it off to a toolchain expert or somebody like that? If I only have to build two multilibs, I can spend 15-20 minutes building a toolchain to do the same thing, and then I don't feel like I'm wasting my time. And of course, the more multilibs, the more chance you have for bugs in those multilibs. So there are trade-offs we have to worry about.

For the multiconfig side, we do plan on using that for multiple operating systems. Depending on whether multiconfig can be used for SDKs and eSDKs, what we may do is provide a single toolchain that works for both bare metal and Linux, and you would then choose: I want the bare metal toolchain, or I want the Linux toolchain. That could be a very good use for multiconfig. But up to this point, everything has been bare metal based, because the Linux toolchains were already Yocto Project based and they just worked, so we didn't make any changes to them.

Any other questions? That's all the questions I've gotten so far. Does anybody else have any? OK, I don't see any other questions coming in. I will be on the Slack embedded track channel, and I'm on the Yocto channel as well. You can certainly reach me at mark.hatle at kernel.crashing.org if you've got any questions, or on the Yocto Project IRC, where I'm fray — F-R-A-Y. So thank you very much, thank you for attending. Again, any questions, please let me know. Thank you.