Okay, let's talk about external pre-built binary toolchains in the Yocto Project. My name is Denys Dmytriyenko, and I'm with Texas Instruments, LCPD and the Arago Project.

Let's start with some definitions. What is an external pre-built binary toolchain? Basically, it's a cross-compilation toolchain, meaning the compiler, assembler, linker, even the libc system library, that is acquired from a third party in binary, pre-built form, so executables, host libraries and target libraries, and is not built by the Yocto Project from sources, which is the default method. If you are familiar with the Yocto Project, you know that the toolchain gets built when you start building target content; that is the standard process for several reasons, the first being complete control over the toolchain. But here we're going to be talking about how to use a third-party pre-built toolchain.

Why so many qualifiers: external, pre-built and binary? Using just one of them wouldn't be sufficient. External is versus the Yocto Project, or for that matter OpenEmbedded, default method of building one ourselves internally as part of the build process; it comes from an external source. Pre-built is obviously not building from sources, and binary, again, means it comes in binary form rather than as sources.

And on the title slide you saw me being part of LCPD. That's not the Liberty City Police Department from Grand Theft Auto, unfortunately; sometimes it feels like that, exciting and somewhat dangerous. It stands for Linux Core Product Development within Texas Instruments. So that's it for the definitions.

What third-party binary toolchains are out there? There are a few popular ones that people are familiar with and like to use. CodeSourcery Sourcery G++ Lite is a very popular choice; these days it's Mentor Graphics Sourcery CodeBench Lite, since Mentor Graphics acquired CodeSourcery, and there is an updated URL on the slide to get the toolchain from. The Lite edition comes with no support from Mentor Graphics, but they have other editions with commercial support that you can buy if you need that.

There are also the Linaro toolchain binaries. Linaro works on the toolchain for ARM platforms and releases the sources, so you can obviously build the toolchain from sources, but we've been asking them to produce binary releases as well, and they started doing that about a year ago. There's a link on the slide for their toolchain.

There are also less known pre-built binary toolchains out there, like the Ångström toolchain, which is kind of old by now; the toolchains published at that link are from 2011, maybe early 2012. We also have our own toolchain, the Arago toolchain, that we build ourselves for our own needs and purposes. It's also a little bit old by now, from late 2011. We don't have a short vanity URL for our toolchain because it's not really meant to be consumed by the public; it's for our internal consumption, but it is publicly available, just not recommended.

Question from the audience: have you tried the ARM toolchain?

The ARM toolchain, but is it GCC-based? No, it's not. No, I haven't tried that, and here we're talking about how to plug GCC-based pre-built toolchains into the Yocto flow. I'm not sure that would be possible with the ARM toolchain.
I mean, it's not GCC-based, it's completely their own compiler and everything, so it might be too much trouble really to go this way.

Another question: are you going to go through the disadvantages, maybe why you have your own? Yeah, I can mention a couple of reasons for having our own toolchain, but basically it's not part of this talk; it's up to you to decide which toolchain to use, and there are some alternatives, so you can pick and choose.

Our toolchains, for example, are for ARM platforms. They support the Cortex family of ARM processors, so Cortex-A8, A9, A15 and so forth, but they don't provide support for older ARM platforms. For that reason you might need to rebuild their toolchain from sources or roll your own. The reason we made our own toolchain back in the day: we used to use the CodeSourcery pre-built toolchain, and the Lite edition, like I said, doesn't come with any commercial support, which could be okay for some people, but it also comes with target libraries optimized for older ARM platforms, while we needed Cortex-optimized libraries. It had ARMv4, v5 and v6 optimizations in it, but not v7. We were trying to squeeze every bit of performance, and we needed the system libraries to be optimized for Cortex machines. That was one of the reasons we built our own toolchain; there were some other small issues here and there, but that's the biggest one.

So what is the existing support in the Yocto Project for using external toolchains? There is the TCMODE (toolchain mode) variable, which basically points to an include file that sets preferred providers for the toolchain components. It sets preferred providers for GCC, binutils, compiler libs, including libc, and GDB, basically instructing BitBake to prefer your recipe for the external toolchain instead of building those from sources. Most of the magic is done by the eglibc-package include file, which takes care of packaging the eglibc pieces, including things like the locale and gconv libraries and so on. That handles a lot of the packaging work for the external toolchain, but there is still a need for a recipe to handle the rest: scraping the toolchain, installing into sysroots, and the packaging work for the remainder of the toolchain. That's why you need to provide your own external-<name>-toolchain recipe, or use one of the existing external toolchain recipes in the Yocto Project that I'll talk about.

A little bit of history. I was working on integrating the CodeSourcery toolchain with Classic OpenEmbedded about five years ago. At the time the support was severely broken in Classic OE. There was some work in Poky, which at the time was a fork of Classic OpenEmbedded, and basically I had to fix that up, clean it up and enable it in Classic OE. Eventually the CodeSourcery support became part of Classic OE, and since then it has been in OpenEmbedded, including OpenEmbedded-Core, part of the Yocto Project. Like I was saying, there were some pieces I took from Poky and had to fix up; I was working within the Arago Project, which is our own distribution. I came up with those CSL version variables, which are dynamically generated and picked up from the toolchain, things like the GCC version, GDB version, libc version and so on.
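To make the TCMODE mechanism concrete, here is a minimal sketch of what such an include file boils down to. The file and recipe name ("external-mytoolchain") are hypothetical placeholders, and the exact set of providers varies between releases:

```
# conf/distro/include/tcmode-external-mytoolchain.inc (hypothetical name),
# selected by setting TCMODE = "external-mytoolchain". It redirects the
# virtual toolchain providers from the internal gcc-cross/binutils-cross
# builds to a single external-toolchain recipe:
PREFERRED_PROVIDER_virtual/${TARGET_PREFIX}gcc = "external-mytoolchain"
PREFERRED_PROVIDER_virtual/${TARGET_PREFIX}g++ = "external-mytoolchain"
PREFERRED_PROVIDER_virtual/${TARGET_PREFIX}binutils = "external-mytoolchain"
PREFERRED_PROVIDER_virtual/${TARGET_PREFIX}compilerlibs = "external-mytoolchain"
PREFERRED_PROVIDER_virtual/libc = "external-mytoolchain"
PREFERRED_PROVIDER_virtual/linux-libc-headers = "external-mytoolchain"
```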
So, the individual components of the toolchain. It went into OpenEmbedded Classic at the time, and now it's part of OE-Core.

Using it is very simple. You set TCMODE = "external-sourcery"; the old name for that was "external-csl", where CSL stands for CodeSourcery Lite, but these days it's the Mentor Graphics Sourcery toolchain. You also set the EXTERNAL_TOOLCHAIN variable to point to that external toolchain on your file system, basically the path to CSL. And that's pretty much it; that's how you use CodeSourcery in the Yocto Project. It is version agnostic: it provides Python code that dynamically figures out the versions of the individual components of the toolchain, so you can use pretty much any version of CodeSourcery you get. It also supports multiple platforms: ARM, MIPS and PowerPC. And it supports multilib. Like I mentioned before, it comes with target libraries optimized for ARMv4, v5 and v6, but unfortunately no v7, which means no Cortex-optimized target libraries. Which is okay; you can use one of the older ones, ARMv4, v5 or v6, but performance may be slightly lower.

Down there I mentioned some names. Richard Purdie was the one originally working on this in Poky, and I picked up some of his work and ported it to Classic OpenEmbedded. Then Tom Rini, when he was working for Mentor Graphics, maintained those recipes, and now Christopher Larson is doing that; he's the main guy behind this recipe and the CodeSourcery support in the Yocto Project right now.

The Linaro toolchain support is not part of the OpenEmbedded-Core main layer; it comes in its own layer that is Yocto compatible. There's a URL on the slide for the meta-linaro layer. The meta-linaro layer has some other Linaro pieces in it, like the Linaro kernel and the Linaro toolchain built from sources, but it also provides the packaging recipe for using the external pre-built toolchain. Again, very easy to use: besides plugging the new layer into your layer stack in the bblayers.conf file, you just set TCMODE = "external-linaro" and EXTERNAL_TOOLCHAIN to the path of the Linaro toolchain on your host file system.

There was a switch from soft floating point to hard floating point, hardfp. The Linaro binaries these days come with the hardfp ABI, so by default the ELT target system is set to arm-linux-gnueabihf, but if you get an older Linaro binary toolchain which supports softfp, then you need to change that to arm-linux-gnueabi; the "hf" suffix there signifies the hardfp support. Again, it's version agnostic: it provides its own set of Python functions to populate the ELT version variables, so those are the ELT versions of GCC, GDB, eglibc and so on. Initially Ken Werner worked on this recipe and the support, and now it's Marcin Juszkiewicz, with some help from Khem Raj, who is our toolchain guy in the OpenEmbedded community, so he's a good help.

As I mentioned, we have our own toolchain, and I'll be using it as an example, first of all of how to use it, but also of how to create a way of using your own toolchain. It could be a toolchain you built yourself or acquired from someone else, but you want to use it within the Yocto Project, so how do you plug it in? With this example of using our own toolchain, first of all, our toolchain recipes are in the meta-arago layer; there's a URL on the slide.
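Before diving into the Arago example, here is what using the two toolchains just covered boils down to in local.conf. The install paths are made-up examples, and ELT_TARGET_SYS is the variable name as I understand it from the meta-linaro recipe:

```
# conf/local.conf: Mentor Graphics Sourcery CodeBench Lite
TCMODE = "external-sourcery"
EXTERNAL_TOOLCHAIN = "/opt/codesourcery/arm-2012.09"

# Or the Linaro binary toolchain (after adding meta-linaro to bblayers.conf):
#TCMODE = "external-linaro"
#EXTERNAL_TOOLCHAIN = "/opt/linaro/gcc-linaro-arm-linux-gnueabihf-4.7"
# For an older softfp Linaro release, drop the "hf" suffix from the triplet:
#ELT_TARGET_SYS = "arm-linux-gnueabi"
```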
Again, you set TCMODE = "external-arago", and basically that points to an external-arago .inc file somewhere in your layer that sets the preferred providers, like I mentioned several slides back. That's pretty much all you need to do to point the preferred providers at your toolchain.

It is also version agnostic; we have those version variables too, but we extended it further with licenses, so I have Python code that dynamically extracts license information from the toolchain. Obviously there is some knowledge encoded there about which version of GCC was GPLv2 versus when it became GPLv3 and so on, so it's based on the version. For example, GDB 6.6 was the last GPLv2 release, so all the newer versions are GPLv3; that matters when you care about GPLv3 and what you have to distribute. So besides the version variables there are also license variables.

EXTERNAL_TOOLCHAIN, like on the previous slides and with the other external toolchains, is how you point to where your toolchain is so you can use it. You can do that with Arago as well, but what we do differently is that we expect the toolchain to already be in your PATH environment variable, and we dynamically locate it and populate EXTERNAL_TOOLCHAIN automatically for you. So it doesn't matter where you install it on your file system; you don't need to change recipes or configuration. As long as it's in your PATH, it will be found.

And now for the scraping recipe that packages the target content: there is a recipe called external-arago-toolchain that we will look at. Like I mentioned before, all you need to do is include that eglibc-package.inc file at the top, which will take care of most of the eglibc packaging tasks for you. But you also need to provide some help to it and some additional packaging.

So this is the Yocto Project recipe; if you are familiar with those, you will find it easy to understand, but if not, I'll briefly explain. PROVIDES basically tells BitBake and the build environment that this recipe provides those components. Those are mostly virtual components: virtual GCC for the target, which in our case would be arm-linux-gnueabi-gcc. What we're saying is that our recipe provides all the toolchain components, GCC, binutils, compiler libs and so on, as well as the libc library and all the other libc components, the Linux libc headers, gdbserver and so forth. PACKAGES basically says that the output of this recipe will be those packages. Besides eglibc, which eglibc-package.inc takes care of, we also want to package other target content: things like libgcc with its headers, libstdc++ with its headers, the kernel headers for user space, some eglibc utils and so on. The list is not complete; I just listed the most important packages. And this is the second half of the recipe; again, it's not complete, I'm just showing you the main parts.
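In condensed form, the top of such a recipe might look like this. This is a sketch modeled on what I just described about external-arago-toolchain, with a placeholder name and an include path that may differ between releases, not the verbatim recipe:

```
# external-mytoolchain_1.0.bb (hypothetical): packages the target content
# of a pre-built external toolchain.
require recipes-core/eglibc/eglibc-package.inc   # handles most eglibc packaging

# Tell BitBake this recipe satisfies the virtual toolchain components:
PROVIDES = "\
    virtual/${TARGET_PREFIX}gcc \
    virtual/${TARGET_PREFIX}g++ \
    virtual/${TARGET_PREFIX}binutils \
    virtual/${TARGET_PREFIX}compilerlibs \
    virtual/libc \
    virtual/linux-libc-headers \
"

# Output packages: eglibc pieces plus the other target content (abbreviated):
PACKAGES += "\
    libgcc libgcc-dev \
    libstdc++ libstdc++-dev \
    linux-libc-headers-dev \
    eglibc-utils \
    gdbserver \
"
```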
So, the FILES variable basically lists the files that you need to package, shown here for glibc, but all the other packages that I mentioned on the previous slide, libgcc, libstdc++ and so forth, need to have that FILES variable listing the actual files that go into each package. As you can see, there is /lib/libc* there, so all the libc libraries will be picked up, libm, ld, libpthread, libresolv, librt, libutil and so on; all the target libraries will be picked up and packaged into glibc. Then you set descriptions, package names, package versions and licenses for those packages; the package versions and licenses are picked up directly from those automatic variables that we populate with the Python code.

And do_install there basically tells BitBake how to scrape those toolchain pieces, install them into the sysroot and eventually package them into binary packages, IPK or RPM or debs; that part is Yocto Project magic behind the scenes, part of the framework. What you're saying is that in the destination directory you make sure the bin, lib and include directories are present, and then you start copying pieces from the external toolchain, from the installation of your toolchain somewhere on your host file system, into ${D}, the destination, which eventually gets installed into the sysroot. So you copy everything from the libdir of the toolchain into the sysroot's libdir, the includedir as well, and the "..." there basically means you keep copying all the other pieces. As you noticed, we only take care of the libc target content here, as well as libgcc: the libc libraries are part of glibc or eglibc, while libgcc and libstdc++ are part of the GCC component, but all of them are usually part of the toolchain. So here we take care of most of the target content, the libraries and header files for development.

So what are the issues and limitations of that approach? The first one is this LIBC_DEPENDENCIES global variable defined there, which is used by several other recipes and basically pulls the libc stack, the system libraries, into the build. Like I said, that variable is set, referenced and pulled into several other recipes in the Yocto Project. Unfortunately, all the existing external-<name>-toolchain recipes, external-csl or the Sourcery toolchain, the external Linaro toolchain, including the external Arago toolchain, while they do grab those eglibc libraries from the toolchain and package them, they provide glibc, not eglibc; a slight change in the name, glibc being the more generic name for the system libraries. There is a virtual/libc target defined specifically for this, but it's not being used; like I mentioned, LIBC_DEPENDENCIES just lists eglibc specifically, and that is the default setting. I'll show you the ways to solve that issue.

The other limitation is that by default those external toolchain recipes package the target content, the libc libraries and headers, libgcc and so on, but they do not package the actual binaries: the compiler, linker, assembler, debugger and so forth. Those are used during the build directly from your installation on the host file system. But when you want to produce an SDK for your customers to use for their cross-compilation work, you also need to provide those binaries as part of your SDK; in most cases your SDK needs to include the toolchain binaries as well.
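Going back to the recipe for a second, here is a compressed sketch of those FILES and do_install parts. The toolchain-internal paths follow the Sourcery-style <triplet>/libc layout and are purely illustrative; adjust them to your toolchain's actual layout:

```
# Abbreviated packaging/"scraping" sketch for the target content:
FILES_glibc = "${base_libdir}/libc* ${base_libdir}/libm* ${base_libdir}/ld* \
               ${base_libdir}/libpthread* ${base_libdir}/libresolv* \
               ${base_libdir}/librt* ${base_libdir}/libutil*"

do_install() {
    install -d ${D}${bindir} ${D}${base_libdir} ${D}${includedir}
    # Scrape target libraries and headers out of the external toolchain
    # into ${D}; from there BitBake stages them into the sysroot and
    # packages them into ipk/rpm/deb as usual.
    cp -a ${EXTERNAL_TOOLCHAIN}/${TARGET_SYS}/libc/lib/. ${D}${base_libdir}/
    cp -a ${EXTERNAL_TOOLCHAIN}/${TARGET_SYS}/libc/usr/include/. ${D}${includedir}/
    # ... same idea for libgcc, libstdc++, locale data and so on.
}
```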
So how do we solve those limitations and issues? First of all, LIBC_DEPENDENCIES, like I mentioned, by default depends on all the specific eglibc pieces, but all of the external toolchain recipes out there provide glibc instead of eglibc. What happens is that once you start using an external toolchain and you build one of those recipes that depend on LIBC_DEPENDENCIES, you'll notice that it tries to pull eglibc into the build and build it from sources, even though your external binary toolchain already provides it; you don't really need to rebuild eglibc from sources. To solve that, we either need to change all the existing external toolchain recipes out there to provide not just glibc but eglibc as well, which would satisfy the dependencies, or there is another option: another variable, called TCLIBC. It's similar to the previous one, TCMODE, the toolchain mode; this one sets the toolchain libc, and again it just points to an include file that sets some things. The way I fixed it is that I provided my own include file, set TCLIBC = "external-arago-toolchain", and it picks up that include file and redefines LIBC_DEPENDENCIES. The only difference is that LIBC_DEPENDENCIES there lists glibc packages instead of eglibc ones; since our external toolchain provides all those glibc pieces, that shortcuts it and doesn't bring building eglibc from sources into the picture.

Now, the second problem is packaging the SDK, or rather packaging the external toolchain components, namely the binaries, into the SDK. Like I said, the default external toolchain recipes package the target content, libc, libgcc, libstdc++, and their headers, which go into -dev packages. Here is how we configure it to also package the binaries: we configure preferred providers for things like gcc-cross-canadian for the architecture, so gcc-cross-canadian-arm, gdb-cross-canadian-arm and binutils-cross-canadian-arm, and we set those preferred providers to point to another recipe that we provide, this time called external-<name>-sdk-toolchain. Again, I'll show you an example of how I do it with Arago; in this case the recipe is external-arago-sdk-toolchain, slightly different from the original recipe that only packages target content.
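Pulling those two fixes together, the configuration side might look like this. The file and recipe names are again placeholders, and the LIBC_DEPENDENCIES list is abbreviated:

```
# conf/local.conf:
TCLIBC = "external-mytoolchain"

# conf/distro/include/tclibc-external-mytoolchain.inc (picked up via TCLIBC):
# list glibc package names instead of the default eglibc ones, so the
# external toolchain recipe satisfies them and eglibc isn't built from source.
LIBC_DEPENDENCIES = "glibc glibc-dev glibc-dbg libstdc++ libstdc++-dev"

# And for the SDK side, route the cross-canadian packages to the recipe
# that wraps the external toolchain's host binaries:
PREFERRED_PROVIDER_gcc-cross-canadian-arm = "external-mytoolchain-sdk"
PREFERRED_PROVIDER_gdb-cross-canadian-arm = "external-mytoolchain-sdk"
PREFERRED_PROVIDER_binutils-cross-canadian-arm = "external-mytoolchain-sdk"
```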
The recipe is slightly long, three slides to cover it, but there's a lot of repetition; once you understand the basics it's very easy. This is the content of external-arago-sdk-toolchain, but it can be adapted to your own toolchain and its name. First of all, you inherit the cross-canadian class, basically saying that the output of your recipe is those cross-canadian packages. The recipe provides just the basic components, and the output is the same three packages: those components packaged into gcc, gdb and binutils packages. TRANSLATED_TARGET_ARCH is basically your architecture, in our case ARM, but it could be your own architecture. And then we just list all the files for those packages that we generate.

For gcc-cross-canadian, those are the GCC pieces, the binaries, libraries and symlinks; for example, bin/cpp and bin/cc, which are short-name symlinks to the actual binaries, while the actual binaries go into bindir with the target prefix, in our case arm-linux-gnueabi-gcc, arm-linux-gnueabi-g++, arm-linux-gnueabi-cpp and so on. We also package the entire libexecdir in there. Again, the list is not complete, but it packages all the basics for GCC. For GDB it's very simple: just the binary and the entire content of usr/share/gdb. For binutils, again, we package all the binutils pieces: the linker, assembler, readelf, objcopy, objdump, nm, all those supporting binaries, including header files, the ldscripts and the libiberty library needed for binutils. And in the final part of the recipe we set the versions, again those automatic ones, and the licenses, and do_install is the important piece: we start copying those pieces from the location where the toolchain is installed on our file system into the sysroot. There you can see we copy all the binaries into the corresponding directory in the sysroot, all the libraries and so on. It's not complete, obviously, but that's how you generally do it.

A few words on how you roll your own binary toolchain. I keep saying that those recipes have a generic name, external-<name>-toolchain, which means you can use one of the existing third-party toolchains, the Mentor Graphics Sourcery toolchain or the Linaro toolchain or ours from Arago, and there are others, but you can also build your own from sources using the Yocto Project. To do that, you use one of the existing meta-toolchain recipes, or extend one the way you need. First of all, make sure you configure everything to use the internal toolchain, since obviously you want to build it from sources: don't set TCMODE, TCLIBC or the EXTERNAL_TOOLCHAIN path, and just rely on the toolchain being built normally from sources, the toolchain and the target libraries as well. The output of that is an SDK, a toolchain packaged into a tarball for older versions or a shell-wrapped installer for newer versions.

There is some work on making it multilib; support for multilib toolchains was added after the danny release, the Yocto Project 1.3 release, so it's not part of danny, it's in the master branch and will be part of the next, 1.4, release of the Yocto Project. What it gives you is multiple architecture-optimized target libraries in the same SDK, so you would have an SDK or toolchain with target libraries, libc, libgcc, libstdc++ and all the others, optimized for your architecture. The cross compiler itself knows how to optimize for a specific architecture, but the libraries come pre-compiled for a specific architecture. If you want to use them on an older platform, you need those libraries built for the older architecture to be compatible, but you won't get the performance optimizations; if you want to use them on a newer version of the same architecture, then you want them optimized for it. I already gave examples with ARM: that would be the ARMv4, v5, v6 and v7 instruction sets, optimized for Cortex-A8 for example; due to pipeline length and other specifics, the code is faster when optimized for Cortex, but it won't run on older ARM platforms. The same applies to x86, for example, where x86 is just the generic architecture and instruction set, but you can optimize your target libraries for i386, i486, i586 or i686. Those are backward compatible, but not the other way around: binaries built for 686 won't run on a 386, for example. That's why you would want multilib in your SDK.
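Before moving on, here is the external-<name>-sdk-toolchain walkthrough from a few slides back condensed into one sketch. It's abbreviated, with placeholder names and an illustrative do_install; cross-canadian and TRANSLATED_TARGET_ARCH are the real class and variable:

```
# external-mytoolchain-sdk_1.0.bb (hypothetical): packages the toolchain's
# host binaries as cross-canadian packages for the SDK.
inherit cross-canadian

PROVIDES = "gcc-cross-canadian-${TRANSLATED_TARGET_ARCH} \
            gdb-cross-canadian-${TRANSLATED_TARGET_ARCH} \
            binutils-cross-canadian-${TRANSLATED_TARGET_ARCH}"
PACKAGES = "${PROVIDES}"   # same three packages as output

FILES_gcc-cross-canadian-${TRANSLATED_TARGET_ARCH} = "\
    ${bindir}/${TARGET_PREFIX}gcc* ${bindir}/${TARGET_PREFIX}g++* \
    ${bindir}/${TARGET_PREFIX}cpp* ${bindir}/cc ${bindir}/cpp ${libexecdir}"
FILES_gdb-cross-canadian-${TRANSLATED_TARGET_ARCH} = "\
    ${bindir}/${TARGET_PREFIX}gdb ${datadir}/gdb"
FILES_binutils-cross-canadian-${TRANSLATED_TARGET_ARCH} = "\
    ${bindir}/${TARGET_PREFIX}ld* ${bindir}/${TARGET_PREFIX}as* \
    ${bindir}/${TARGET_PREFIX}objcopy ${bindir}/${TARGET_PREFIX}objdump \
    ${bindir}/${TARGET_PREFIX}readelf ${bindir}/${TARGET_PREFIX}nm \
    ${libdir}/ldscripts"

do_install() {
    install -d ${D}${bindir}
    # Copy the compiler, debugger and binutils binaries from the external
    # toolchain installation into the packaging area:
    cp -a ${EXTERNAL_TOOLCHAIN}/bin/${TARGET_PREFIX}* ${D}${bindir}/
}
```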
So, how do you reuse the output? Basically, once you roll your own toolchain, you want to reuse it for Yocto Project builds. Again, very easy: you provide the recipe that packages the glibc target library content, you set TCMODE and TCLIBC correspondingly, and depending on the directory structure of the toolchain or SDK, you may need to adjust things a little. The reason is that it used to be that the host binaries sit in the root directory of the toolchain, while the target content goes into the target sys, the target prefix subdirectory, like arm-linux-gnueabi for example. In newer Yocto-produced SDKs there are basically two sysroots, a native sysroot and a target sysroot, sitting side by side. So you may need to adjust the directory structure slightly: either you adjust how you produce the toolchain to use the old directory structure, or, if you use the new one, you adjust the existing external toolchain recipe for the new structure. And again, similar to what I already explained, you just use that SDK toolchain recipe to package the actual binaries; refer to those slides, the configuration and the actual recipe. The toolchain binaries will be packaged on the host side of your SDK, and the target libraries and headers will be packaged on the target side of the SDK, if done properly. And no matter whether your SDK is just a toolchain or a toolchain plus additional target content, it can still be used later for further Yocto builds, again using the same external toolchain recipes, completely identically. So it can be reused and reused again.

A few words about the toolchain-less SDK. I talked about producing SDKs that contain the toolchain binaries, the cross compiler, linker, assembler and so forth, but sometimes you need to release an SDK that should not contain the toolchain. In that case your SDK would be just the target content, the additional libraries and header files and the tools needed to do cross development, but it won't provide the toolchain itself. That is sometimes useful: you may not want to distribute the toolchain you acquired from a third party, and instead want your SDK customers to acquire their own copy of the same toolchain. To use your SDK, the customers get the toolchain on their own, combine the two, and then they can use the toolchain. It's a slightly difficult case, but sometimes you need to do that.

For that, you need to clean up mostly the glibc components from the SDK output. It's easy to drop the binaries: I showed you how to explicitly specify that you want GCC, GDB and binutils to be part of your SDK output, and it's just as easy to drop them. But glibc, unfortunately, is not that easy to drop, because it's the major dependency; anything you build for the target will bring glibc pieces in. So you just let it install into the SDK, and right at the end of SDK assembly you clean it up: in this case you call opkg-cl and remove the packages in the toolchain target exclude list, where you list all the glibc libraries. This code is specific to IPK, to opkg; the populate_sdk code is kind of generic, but there are versions for IPK, RPM and DEB, so you would need to provide all three. I came up with this code and committed it to Classic OE back then, but it's not part of OE-Core as of now, so you may need to clean it up and try submitting it to OE-Core and the Yocto Project.
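As a rough sketch of that cleanup step, in the spirit of the IPK-specific code just described: the variable and function names here are made up for illustration, not the actual Classic OE or OE-Core ones, and the opkg options would depend on your SDK's opkg configuration:

```
# Hypothetical sketch of dropping libc packages from the finished SDK
# (IPK/opkg flavor only; RPM and DEB variants would be needed too).
TOOLCHAIN_TARGET_EXCLUDE = "glibc glibc-dev glibc-dbg libstdc++ libstdc++-dev"

strip_sdk_target_libc() {
    # Run at the very end of SDK assembly: let the packages install
    # normally, then remove the excluded ones from the SDK's target sysroot.
    # IPKG_ARGS stands in for your opkg config/offline-root options.
    for pkg in ${TOOLCHAIN_TARGET_EXCLUDE}; do
        opkg-cl ${IPKG_ARGS} --force-depends remove $pkg
    done
}
```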
Now let's talk about Canadian Cross. This is a very generic overview that explains what Canadian Cross is about. Basically, it's how you build a cross compiler using three machines, called A, B and C in the example, in very generic terms. Machine A is your development machine, which in many cases would be an x86-based, 64-bit, multi-core machine running a modern Linux distribution; you want the beefier build server there. Machine B is your customer's machine, which will use the SDK; in many cases you want to target some lowest common denominator, let's say a slightly older Linux distribution on a 32-bit x86 machine. But it's not limited to x86; it could be PowerPC, for example, so you could use x86 as your development machine while your customers use PowerPC for their SDK work. And machine C is the embedded system they want to target, let's say an ARM-based platform running a Yocto Project based distribution such as Poky, Ångström or Arago. The term Canadian Cross is slightly confusing, but the main point is that there are three machines involved in the process.

The Yocto Project supports Canadian Cross, and there is a cross-canadian class provided that takes care of all the magic behind it. The main configuration point: you already have MACHINE, where you specify what target platform you are building for, and SDKMACHINE specifies the host machine the SDK will be running on. In many cases you set MACHINE to one of the ARM targets, while SDKMACHINE you want to set to an i686 machine. The output is gcc, gdb and binutils cross-canadian packages for the target architecture, in our case ARM. A crosssdk toolchain is also built during the process; that is the toolchain used to build those cross-canadian pieces. So the cross-canadian packages are binaries meant to run on the i686 32-bit x86 machine and produce ARM output, while the crosssdk toolchain would be a 64-bit x86 binary producing 32-bit x86 binaries. Or, like I said, another example would be running on x86 and producing PowerPC output, while the cross-canadian tools run on PowerPC and produce ARM. That's a little bit of a weird situation, but in the most common scenario it's 64-bit x86, 32-bit x86, and then your embedded platform.

Then there are the nativesdk tools, the additional tools like libtool, autotools and so forth, that you may want to package into the SDK and ship with it; again, those are binaries for your SDK machine, in this case 32-bit x86 binaries.

And then there are self-contained binaries; that's an interesting topic in the Yocto Project. The binaries produced for the SDK are self-contained: they come with all the system libraries they need, so basically our SDK comes with its own set of libc libraries and everything else, including the dynamic loader, the interpreter, which is ld-linux.so. Everything comes as part of the SDK, and all the ELF headers are preset to use those instead of the ones from the host system. The reason is that we want total control over what gets linked and loaded on the SDK machine: on the host we may be building with the latest GCC against the latest glibc, but the SDK needs to be able to work on some older machine that customers might have, and we cannot rely on the libc provided by that customer's machine.
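Going back to the configuration side for a moment, the Canadian Cross setup in local.conf is just two variables; the values here are examples:

```
# conf/local.conf: Canadian Cross setup
MACHINE = "beagleboard"   # machine C: the embedded target the SDK builds for
SDKMACHINE = "i686"       # machine B: the host the SDK itself will run on
```

Building something like meta-toolchain on machine A then produces the gcc, gdb and binutils cross-canadian packages described above.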
So, the ELF headers: the PT_INTERP section is set to point to our own dynamic loader, the interpreter, ld-linux.so, and RPATH and RUNPATH are set there too, to be able to load our own libraries, including libc but other libraries as well. Nowadays we use the $ORIGIN directive to do relative dynamic linking. chrpath is the tool used to update those ELF headers; we don't build it ourselves, we rely on the host system to provide one, and it's a quite common tool available in pretty much all of the distributions out there. Its limitation is that it cannot grow ELF header fields and sections, so the existing field has to be long enough to be updated in place. There is the patchelf tool, which is a better alternative: it can grow fields and sections, it can basically rewrite the ELF header, but it's not commonly available on host systems, so you would need to build it from sources, and right now that's not implemented, as it's seen as an extra build dependency, one more native package.

And now we come to the relocatability problem with those self-contained binaries. Like I said, in the ELF header we hard-code the path to our dynamic loader, the interpreter, ld-linux.so, that we provide ourselves, and to all the other dynamic libraries as well. Back in denzil, which is a couple of releases back in the Yocto Project, there was no relocatability, so the SDK path was hard-coded into all the binaries. You would build your SDK assuming it would be installed in one generic place on the customer's system, and that would be the only place where you could install it. For example, with the Arago SDK the output would go under /usr/local/arago-something, and you would expect everyone to install that SDK, that toolchain, into the same location for it to work; otherwise it wouldn't be able to find the dynamic loader or the libraries.

We had our own custom solution to that problem. I came up with a shell stub solution for Arago when we were using denzil as a base. Basically, we rename all the binaries: there is a .real suffix used for the actual binaries, and we install shell stubs under the original names, which then alter the library path and run the real binary through the ld-linux.so that we provide. The stub calls ld-linux.so with the name of the binary and passes along all the parameters. That way we solved the relocatability issue: you can actually move your SDK around on the host system, and it will work regardless of where it's installed.
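As a sketch, such a stub, installed under the original binary name with the real binary renamed to <name>.real, might look like this; the directory layout and loader name are illustrative, not the exact Arago implementation:

```
#!/bin/sh
# Hypothetical relocation stub in the spirit of the Arago/denzil solution:
# locate the SDK root relative to this script, point the loader at the
# SDK's own libraries, and run the renamed real binary through the SDK's
# own dynamic loader, regardless of where the SDK was unpacked.
HERE=$(dirname "$(readlink -f "$0")")
SDKROOT=$(readlink -f "$HERE/..")
exec "$SDKROOT/lib/ld-linux.so.2" \
    --library-path "$SDKROOT/lib:$SDKROOT/usr/lib" \
    "$HERE/$(basename "$0").real" "$@"
```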