Welcome to my talk on using the Yocto Autobuilder for build and release management. First, a little about myself. I started out designing a schematic and PCB for the ColdFire 5235, then brought the board up: I ported U-Boot to our specific board, used uClinux as our distribution and build system, brought up Linux kernel drivers, and used all the good open source tools we all use. For the last couple of years I've been working with the Yocto Project tools for one of our new products.

This talk is not based on Dora the Explorer; it's based on the Dora release of Poky and OpenEmbedded-Core. We've been stuck on that release, as many of us are, because of other management priorities, so we do have some technical debt to pay off; basically, if we find a bug that's been fixed upstream, we pull the fix back.

So, build and release management in general: why do you need something like the Autobuilder? Simply because of the complexity of the code. You have your local.conf file, which you have to modify to get things working for various recipes. Then you have multiple layers, starting with meta from OpenEmbedded-Core, and then you start importing others; we actually use the Freescale layers, so we have four or five of those. On top of those, you write .bbappend files to customize the functionality of the upstream recipes. As you develop, you make a lot of changes, and you might start working on one item and then switch to another because you find out you have to work on something else first. So you end up with changes scattered around; I've often forgotten about a change, never checked it in, and then it goes missing and causes a bug.
So you need something official that pulls cleanly from source and then does a build, so you can avoid those kinds of errors. We decided to use the Yocto Project's Autobuilder because it was there and it's open source. Other tools are open source as well, but this one seemed relatively easy to use, so we grabbed it. It's based on Buildbot, a Python library that makes continuous integration easier. The Autobuilder code adds support for layer retrieval, so it can fetch all the layers you need, plus custom build steps for the things you'd otherwise do by hand as an OpenEmbedded-Core developer: creating configuration files, building different BitBake targets, and publishing artifacts. There are nice build steps already created that you can use.

This is a screenshot of the primary interface you'll be using: a web interface that controls all of your builds. The jobs go across the top, and as you click the links you drill into a job, trigger builds, and see log output; if anything went wrong, the failed build shows up in red. This is actually a screenshot from last week, so things are always happening on the Autobuilder.

The Autobuilder is organized the way Buildbot is organized. I grabbed this diagram from buildbot.net; Buildbot likes to refer to a master and workers, but with the Yocto Autobuilder you have a controller and workers. The official Autobuilder has one controller and nine workers, because it runs such a large range of nightly jobs; the Yocto Project is so big that there are lots of machines and lots of different targets to build for. Our project only needs a one-to-one ratio, one controller and one worker, because our problem set is so much smaller. So, what is a job?
A job is the configuration file you create to run a particular build. The general sections are: the title, which is used to refer to the job in the rest of your configuration files; the repos section, which lists the layers you want to retrieve, each with the URL of the Git repository where it lives (you can use files as well; I haven't experimented with any other fetchers); the build steps, which are the actual steps you would perform as a developer to build your images; and finally a scheduler. The scheduler is usually time-based, like a nightly build, or it can be based on monitoring a Git repository.

Going back to the official Autobuilder: looking at the official website, they have 23 jobs. The ones in yellow are generally images, covering the different machine targets and the different package formats: deb, ipk, or RPM. There's some x86 in there as well, plus the tiny target. The jobs in blue are QA jobs; I looked at a few of these, and the logrotate one, for example, builds a base image, adds logrotate to it, and runs some QA checks on that. In red are the build tools that get built on the Autobuilder, like the Eclipse plug-in and probably the SDK package; I'm not entirely sure, because I didn't delve into that.

Question? Okay, the question is: does everybody use the official Autobuilder, or do you set up your own? The answer is that you'll usually set up your own. The URL shown is just the Yocto Project's official Autobuilder; once you set one up on your local server, it'll be available for you to control. My project's jobs are a much smaller subset of that.
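As a rough sketch of how those sections fit together in one job file, modeled loosely on the example files shipped in the yocto-autobuilder Git repository — the exact step names, option keys, and scheduler syntax vary between Autobuilder versions, so treat every identifier below as an assumption to check against your own checkout:

```
# buildset-config/nightly-example.conf -- illustrative only
[nightly-example]
builders: 'example-worker'
repos: [{'poky':
            {'repourl': 'git://git.yoctoproject.org/poky',
             'branch': 'dora'}}]
steps: [{'CheckOutLayers': {}},
        {'RunPreamble': {}},
        {'CreateAutoConf': {'machine': 'qemux86', 'distro': 'poky'}},
        {'CreateBBLayersConf': {}},
        {'BuildImages': {'images': 'core-image-minimal'}},
        {'PublishArtifacts': {'artifacts': ['qemux86']}}]
```

The section name in brackets is the job title, and a scheduler (nightly or repository-triggered) is attached to the job in the same family of configuration files.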
One point: when creating your own jobs, you'll definitely want to go back to the example configuration files for the jobs in the Yocto Autobuilder Git repository. The first job we have is a nightly job, which mirrors the official nightly job. It just runs a bitbake -c fetchall, so it downloads all the sources. It actually doesn't do much for us, since we're not adding many new recipes during development, so I don't think we download much nightly. I also added a qemux86 job as a sanity check, taking our images and building them for that machine. Then we have two more jobs: one for the master branch, which is our main development branch, and one for a stable branch, which doesn't change as much and is used for releases. And we recently added a new job that builds a single RPM used by one of our product teams.

Now, layer repositories. Going back to the repos section of the job configuration file, you'll have two types of layer repositories. The first type is your upstream repositories, like Poky and meta-fsl-arm. Those are usually pinned to a fixed source revision: you pick a release from those projects, fix it for your build, and develop your own layers on top using the OE-Core layering system. The ones in blue in the diagram are our local layers. Those change much more often, and they account for most of the change in our nightly builds. You can see the first one is version 1.0. Basically, the Autobuilder builds from the head of our code on top of those fixed upstream repositories; we take the artifacts, and if we want to use them for a release, we tack our version number on top. As our code progresses, we get more builds.
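That fixed-versus-floating split might look roughly like this in a job's repos section. The URLs and hashes are placeholders, and the exact key for pinning a revision ('commit' here) is an assumption — check the checkout build step in your Autobuilder version for the keys it actually honors:

```
repos: [{'poky':
            {'repourl': 'git://git.yoctoproject.org/poky',
             'commit': '0123456789abcdef0123456789abcdef01234567'}},
        {'meta-fsl-arm':
            {'repourl': 'git://git.yoctoproject.org/meta-fsl-arm',
             'commit': 'fedcba9876543210fedcba9876543210fedcba98'}},
        {'meta-ourproduct':
            {'repourl': 'git://gitserver.local/meta-ourproduct',
             'branch': 'master'}}]
```

The first two entries are pinned upstream layers; the last is an in-house layer that always builds the head of master.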
In this example, the first build is February 10th, and a couple of commits later version 1.1 is February 15th, and so forth. Translating those Git diagrams into our configuration file: the first entry is meta-fsl-arm, and we actually specify the source revision hash for it; that's our fixed version. For certain projects you can use a branch instead; if you're using Poky, you could say the dora branch. But you really want to control it more tightly, so for all of our upstream layers we use the hash. Our in-house layers use the master branch, because they're always moving; when you specify master, the Autobuilder just goes to your local Git repository and grabs the head.

As a side note, part of our release process is a set of checkout scripts that check out the layers for our developers, because it's just easier for them to run a script that checks everything out. It matches the repos section of the Autobuilder job, with the same fixed upstream revisions and the local master heads. Another script we use is a release script. That gets generated whenever a version is released, and it specifies the exact source revision of each layer that was used in that build. So if you found a bug in a released image, you could use that script to check out the same set of layers and go back and debug the build if necessary.

Now I'll start talking about some of the custom build steps we created, and the first one is part of our release process. The generation of that release script happens automatically in a build step; it actually uses the output of the published layer tarballs, which is a directory whose file names contain the layer names and the source revisions.
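The generated release script is essentially a list of clone-and-checkout commands, one per layer. Here is a minimal sketch of that shape; all repository names, URLs, and revisions are made up, since the real script records the exact hashes from the published tarball names:

```shell
#!/bin/sh
# Sketch of a generated release checkout script (illustrative names only).
set -e

checkout_layer() {
    # $1 = target directory, $2 = git URL, $3 = revision (hash or branch)
    if [ ! -d "$1/.git" ]; then
        git clone -q "$2" "$1"
    fi
    git -C "$1" checkout -q "$3"
}

# Upstream layers: pinned to the fixed hashes the build used, e.g.
#   checkout_layer poky git://git.yoctoproject.org/poky <hash-from-build>
# In-house layers: the release script pins the built hash as well, whereas
# the everyday developer script would pass "master" here instead, e.g.
#   checkout_layer meta-ourproduct git://gitserver.local/meta-ourproduct master
```

The per-layer calls are left as comments because the real URLs and hashes come out of the build; the developer checkout script is the same shape with branch names in place of hashes.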
I had that output for a while without really using it, but once I realized I needed to generate this release script, I basically just parsed those file names and converted them to generate the script. Then, whenever you do a release, you need to take certain actions: create tags in the Git repositories for your layers, so you can go back and reference them when analyzing your repo history for bugs, and commit the release script. We also have a recipe specifically for tracking the version of our images (not our layers), and there's a little automation there as well: we bump that image version for our release candidates.

These actions occur within the work directory of the specific build job; there's actually a copy of the Git repositories there that you can use. The commits for the release script and the image version bump happen there, and we also tag our local layers. So when you actually perform the release, you grab that particular state of your layer repositories, and if you accept the Autobuilder output, you perform the Git pushes from there to your actual local repositories.

Along the same idea of controlling more things in source control, there's the TEMPLATECONF variable. One of the first things you do when setting up a build environment for Poky/OpenEmbedded is source the aptly named oe-init-build-env script. By default it pulls some template files, I believe from meta-yocto, and puts them into your build directory. What TEMPLATECONF lets you do is specify a different directory within your own layers, which is nice for keeping those files under source control.
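For example — the layer name and path here are made up; the mechanism is just that the directory named by TEMPLATECONF supplies the local.conf.sample and bblayers.conf.sample templates:

```shell
# Keep the build-directory templates in your own layer, e.g.:
#   meta-ourproduct/conf/local.conf.sample
#   meta-ourproduct/conf/bblayers.conf.sample
#
# Then point the setup script at that directory when initializing a build:
TEMPLATECONF=meta-ourproduct/conf source ./oe-init-build-env build
```

On the first run, the sample files from that directory are copied into build/conf as your local.conf and bblayers.conf.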
You specify TEMPLATECONF as an environment variable when sourcing that script, and it will pull from that directory and install your own custom local.conf.sample and bblayers.conf.sample into your conf directory. To use this within your Autobuilder job, there's a build step called RunPreamble; if you look at its Python code, all it does is source that build environment setup script. Initially, since as I said I'm working on Dora, I created a second RunPreamble build step just to add the TEMPLATECONF directory; then last week I found out that later branches have added an "alt command" property you can specify as an argument to the build step, and with that you could set your TEMPLATECONF there.

As a side note on setting up configuration files: there's a build step that creates an auto.conf configuration file, which in the order of precedence comes before local.conf. If there's anything you need to do specifically for your Autobuilder job, use that build step and put the extra Poky configuration entries there; for example, if you wanted to override a package version, you'd put it into auto.conf.

Another custom build step you might need is publishing artifacts. If you look at the code for this build step, there's a big if statement: you specify artifacts, and the if statement goes through and takes various actions when publishing them. But the majority of the cases in that build step are for the Yocto Project itself, so when you're creating your own custom job there's a lot of clutter. Another thing the default publish-artifacts step does is copy everything in the deploy/images directory; that includes, for each image name, a symbolic link to the actual image plus a build stamp. That's for when you're building multiple images in one instance, so you can keep track and go back if you need to, but you don't really need it for a release. So I wrote my own publish-artifacts step, and we publish just four artifacts: the U-Boot binary, the device tree, the Linux kernel, and a specific file system image format.

I have some coding tips for working with build steps. One thing you'll need to do is map values from your environment. A lot of Autobuilder configuration happens through environment variables that are available to the servers; they're written in the file autobuilder.conf. You set environment variables there, and then buildset.py, which does the instantiation of the build steps as objects, reads values from the environment, sets Python variables, and uses them in each build step's initialization; whatever action the step performs is where you use those variables. One thing you might run into when searching the code is that there's a second place for environment variable configuration, yocto-autobuilder-setup, and that's all placeholders. I wasted a bit of time setting variables there and seeing nothing happen, so that's a nice tip.

Another part of creating your build steps is that a lot of the functionality comes from shell commands. Within the build step you construct a command string, which is just shell commands that you keep appending to, and it can get pretty big. When you go to the Autobuilder log output for such a build step, you get a really long string that runs way off the screen, and you have to scroll over; it's not good when you're trying to see whether the command string you created was sane. What I found is that you can replace those semicolons with line continuations, just carriage returns, and it makes the output a lot better. Eventually I had some complicated if statements, and since those run way off the screen and are impossible to read if they're not indented, adding the carriage returns was perfect for that.

Moving away from build steps a little, to using premirrors. Within Poky/OpenEmbedded-Core you can use something called a premirror to speed up your builds and optimize space. A premirror consists of your sstate cache directory and your downloads directory. As in the diagram, your premirror is read by the developer builds: as a developer builds, their BitBake instance generates a hash and checks the premirror first to see whether a matching artifact is available in the sstate cache. If there is, the developer's BitBake instance creates a symbolic link in its local sstate cache directory, and whenever the build process needs that artifact, it just follows the link, which is a bit more space efficient. The downloads directory is treated the same way: BitBake looks for the blob in the premirror and creates a symbolic link if it's available. If anything needs to be built separately, because there's no cache hit, it just populates the local sstate cache or downloads directory as BitBake normally does. To use this with your Autobuilder, all you do is set the sstate cache and DL_DIR values there, and the instance of BitBake run by the Autobuilder will use the premirror and be the only one writing to it.

Another thing that happens when you run an Autobuilder instance is that your disk space gets chewed up as you build nightly, or at whatever frequency. I had to create a cron job to clean it. It's pretty simple: it goes to the publish-artifacts directory, looks for anything older than five days or whatever time period you want, and then executes the very safe command rm -rf. You don't want to run this as root; bad idea. Another thing you might need to do is run the sstate cache management script, which is available within the core. I believe it just reads your sstate cache directory; you run it from the same environment as your Autobuilder, and it goes to the premirror directory and cleans it out if required. I actually haven't had to run it often, because lately our code delta has been pretty small.

Another thing I ran into: we have a hybrid Windows and Linux environment, so I tried a new thing where I published directly into our Windows share, which is where our official releases live. Windows does not support symbolic links, so each of your 500-megabyte image files gets duplicated in there. That doesn't make your IT people happy.

As for storing your actual configuration: I just created another directory within the Yocto Autobuilder source tree, in the buildset-config style, and put all of our jobs there. We have our own copy of the Yocto Autobuilder Git repository with our own branch. Another thing you have to set up is autobuilder.cfg, which includes the login credentials for your web GUI accounts; you don't want to put that onto GitHub. I believe there have been security holes from people publishing their secret credentials there. And of course autobuilder.conf, containing your configuration; we also put that into our local Yocto Autobuilder repository.

Now the tips and tricks section. Going back to that one job, the RPM build for one of our teams: our nightly image builds didn't happen frequently enough for them, and there was a bit of overhead in the actual release process. So they found a scheduler called GitPoller. What it does is monitor a Git repository and look for changes; there's a cooling-off period, 60 seconds or something, and once that passes it triggers a build. At its core, this job just runs bitbake on the recipe for their project. And there's one item you put into the autobuilder.conf configuration file for this; it's actually documented in the project manual. In a configuration file you can override the source revision: ordinarily, for a recipe at a given version, you would fix the source revision at a Git hash, but in your autobuilder.conf for this job you override that with AUTOREV, which tells the BitBake fetcher to grab the head of the branch that's specified. You also need to override the package version so it carries a nice git string plus part of the hash, so you can identify it.

One thing that has bitten me a couple of times when appending within that top-level configuration file is the order of the append. With this append flag right here, I always forget that you can't put it at the end; every single time, it's like plugging in a USB cable, you're always going to get it wrong the first time. So, as a nice tip: the append goes first, before any package name specification.

Another thing you'll want to use with the Autobuilder is build history. It's something you can put into your auto.conf, and what it does is make the image recipe inherit a bbclass; when an image is built, it puts a bunch of metadata into a Git repository that you specify. It tracks every single file in the file system, with its permissions, owner, and group, and it also outputs data about each package: the versions and the files contained within. Then, each time you build, or if you tag that output with the version numbers of your releases, you can make a comparison between nightly builds, or from released version to released version, and see whether anything changed. If a file disappears that you weren't expecting to, or something that's supposed to be really big is really small, you can use that to identify bugs early, before they get to the field.

Now, to the section about future tasks. One thing that happens often enough with our Autobuilder is external layer outages. A lot of your upstream layers come from GitHub, git.freescale.com, or the Yocto Project itself, and you can run into network problems; say some kind of botnet attacks Dyn, which provides DNS for GitHub. That happened earlier this year, and that day I was trying to build something with the Autobuilder and it just didn't work; it couldn't reach GitHub. And of course you might have local IT issues as well. There is a feature within the Autobuilder to use a mirror for the external layers. It's not super well documented, and it's a bit of a work in progress; I was just trying to get it done before the conference. The one variable you set is the OGIT mirror directory, and it gets passed through into the build steps; the checkout-layers build step will try to use it as a mirror. I did have to add some code to the resolve-layers build step, which, for any branch-based layer repository, tries to retrieve the source revision; I also had to pass that environment variable into the build step when resolve-layers was instantiated. I hacked checkout-layers a little and changed the behavior of the underlying checkout; this is very specific to the code, so I don't know how useful it is to you, but right before I left I think I figured out that in checkout-layers you want to use method "fresh" and mode "full". Once you get into the code, there isn't any documentation on the checkout-layers mode specifically, but that's what I think will work so far, and I think that if you specify this mode, it will always go to the mirror directory when it's available. I haven't fully figured out the best way to do this yet.

Combining the repos section and the checkout scripts: this goes back to the idea of controlling things within source control. Once you have your Autobuilder job and your checkout scripts, they contain the same metadata, and that's not good, because if you make a change in one place, it doesn't automatically go to the other place, and you're bound to run into bugs; I've definitely introduced some that way. So one outstanding problem is converting between the two, using the Autobuilder configuration file as the actual source for the checkout script. I do know that BitBake provides the downloads directory for build source retrieval, so it might be nice to include those libraries; maybe you could also use BitBake's mirror code somehow. I know there's other work on automating layer retrieval as well, because it's such a big problem.

Another thing the Autobuilder is good for is the PR service. That's something you use to help track your packages: any time there's a change in your recipe, it automatically bumps a revision number, the last number in your package version. The documentation for this specifies that your official package feed needs to use the service so that your released packages can be consistent, and the Autobuilder is perfect for that. You start the BitBake PR server as a service and put the PRSERV_HOST value into your auto.conf file. There are a couple more variables which I haven't added to this slide, but they're in the Mega Manual.
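A hedged sketch of that PR service setup, with a placeholder host and port (check the Mega Manual for the additional variables alluded to above):

```shell
# On the Autobuilder host, start the PR server as a standalone service:
bitbake-prserv --host localhost --port 8585 --start

# Then point the Autobuilder's builds at it via auto.conf:
#   PRSERV_HOST = "localhost:8585"
```

With that in place, every build the Autobuilder runs draws its package revisions from the same database, which is what keeps the official package feed consistent.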
Another thing I'll have to do whenever I implement this is use the bitbake-prserv-tool to export that database, so it can be backed up in case our build server goes poof.

The last thing I want to get done sometime is automated runtime testing. The Yocto Project uses this extensively. You create tests within your meta layer, add them to your build configuration file, and then run a specific target, testimage, for the image that gets built. What it does is spin up an instance of QEMU with your image and issue commands to that instance via SSH. You basically write something to check the last change you made: if you develop a feature, you go and write some tests in shell script, it runs them, and BitBake reports whether they passed or failed, in a nice report. This goes back to that first slide with all those QA jobs; it would be perfect for continuous integration and for cutting down on the bugs in your images.

So that's pretty much it. Thanks for coming. Are there any questions?

[Answering a question] I worked on a couple; when I found out about the automated tests, I did write some as I was working on features, and the tests worked pretty well, but I just haven't had a chance to put them into the Autobuilder infrastructure. I have my own layer, called meta-syntec test of all things, that has a couple of tests, but I haven't had a chance to fully integrate it into the Autobuilder.

[Answering a question] Yeah, it's just a checkout; it's a shell script that clones the layers and then uses some Git commands to pick the specific revisions.

Any other questions? So the question is: what's the general purpose of the Autobuilder; or maybe the question is more why the Autobuilder versus other tools like Jenkins. It was simply something that was in the Yocto Project, so I decided to grab it. Also, we didn't have continuous integration in any other groups in our department; I was kind of the first person to do it, and since we didn't have anything, I figured the Autobuilder seemed okay, so I grabbed it and started using it. That was another pro. And as Darcy said, there's also the integration with BitBake and the Yocto Project tools, which was another benefit: I didn't have to write more glue code for the actual build steps, like configuration file generation and publishing the artifacts; some of that was already there, along with a template to work from.

[Comment from the audience] The statement is that it would be nice if there was more integration between the Yocto Autobuilder and the other continuous integration tools that corporate entities are more aware of, which makes sense. I'm actually not a developer of the Yocto Autobuilder, I just use it, but there are some people in the room who actually work on it. Do you have any other questions? I should give you this microphone; I was told to give it to question askers.

[Question] The sets of people with Yocto Autobuilder experience and other CI experience might not overlap, but we use Jenkins, quite successfully; what advantages would you see in the Yocto Autobuilder over Jenkins?

[Answer from an Autobuilder developer] That's a common question. Obviously, do the setup that works for you. The work that's been done on the Autobuilder is essentially just a layer on top of Buildbot; if the community looks at that and says this is of value, and can see ways to use it to their advantage, contact us and we can move things forward. The reason we use it is very, I want to say, internally greedy: it's for our own purposes. We have our own tests baked into the Autobuilder, and it would take a lot of work for us to move them to a different system, so we're currently using it to our benefit. But if you look at the code and say, you know, we could use this at our corporation, it just needs this feature and that feature — for example, somebody mentioned plugins, which are definitely something we'd be interested in — then the more people from the community who speak up, on the mailing list or just in general, and say here's what we'd love to see and this tool looks promising, the more work we can do on it. So as of today, to answer your question: yes, there are plans to go forward and do that sort of thing, add plugins and make it more useful for the community. But I don't think the use cases we have for it are going to be shared with everybody else, so we just need to know which direction to go.

All right, everybody, I'm told there's only one minute left, so I guess that's the end of my talk. Thanks for coming.