 My name is Sean Hudson. I'm an embedded Linux architect at Mentor Graphics. I also represent Mentor as a member of the Yocto Project Advisory Board and, as of a month or so ago, as a member of the OpenEmbedded Board. Before I jump in, the challenge for this presentation is that building an embedded Linux distribution from source is a pretty complex task. So the tools that drive that process are also pretty complex, and the Yocto Project is certainly a fairly complex set of tools. I also want to take a moment to thank the people whose work this really is a complete rip-off of. The documentation for the Yocto Project is some of the best that you'll find for an open source project. It's not perfect, but it is really, really good. A lot of people contribute to it, and we have at least one full-time doc writer, Scott Rifenbark, who does a tremendous job of helping to transform engineering drivel into something that's actually pretty useful. Other people whose work I'm leveraging are Chris Hallinan, who also works at Mentor Graphics, and Khem Raj, who a lot of you know; he works at Juniper but has also been very, very instrumental in the success of OpenEmbedded. All of that said, the complexity that goes into building an embedded distribution makes it impossible for me to cover all the topics that might be related to the Yocto Project and using it to build one in a 50-minute presentation. So I'm going to try to do a quick survey of some of the basics. In some ways this has turned into something a little more introductory than I intended, but we'll try to tune that based on the feedback you give me. I ask that you just raise your hand if you have a question, though I may be about to cover something if you jump in too quickly, so we'll try to strike that balance. So initially, I'm going to give you just a very quick overview.
Really, I'm going to use the Quick Start guide as a framework that will let us explore a little more about some of the pieces inside the Yocto Project. I've done my best to make sure I get all the way through to the end, and these slides will be posted to the Linux Foundation afterward. After I take a look at some of the big pieces, I'll also look at a few of the tools I've found fairly handy for finding the pieces I need to work with, and then at the end I'll try to talk a little about some of the basic how-to items we seem to get a lot of questions about. Again, I can't stress enough that the Yocto Project itself has a great deal of documentation. I know you get a lot of RTFM from people, but realistically, it's worth your time. So, in order: I'll do a quick overview, take a look at what the initial download gives you and what a build tree looks like, then talk a little about some of the concepts, layers, recipes, bbappends, and such, highlight those tools, and then go to the how-to. If we have enough time, I'll try to save at least a little time at the end for Q&A. Unlike Kuhn, who did about 15 minutes of slides and then opened it up for questions, I took the other tack; given the level of work that went into slide preparation, I probably should have gone his way. So now, this is what I consider to be my vanity slide. You may notice me sitting there in the middle; this is from ELCE Barcelona. This is the front page of the Yocto Project. You'll notice the big tagline there: it's not an embedded distribution, it builds one for you, or it creates a custom one for you. And then in smaller print, "The Yocto Project is an open source collaboration," blah, blah, blah. Okay, so to put it a different way, the Yocto Project tries to provide the basic pieces for building an embedded Linux distribution.
These pieces add little value for companies but are necessary to build an end product. In short, the project should allow developers to focus on the features that matter to their customers, and that's at all levels. I think of four different tiers generally: a platform builder, who builds the basic distribution; an application developer, who builds applications that fit into that distribution; their customer, an OEM that builds a final product; and the OEM's customer, the person who buys one. Those customers, again, can be internal or external; that's not a hard and fast rule. But the point is that you add little value if you're doing the same thing everybody else is. So by getting rid of some of the common pieces, you can accelerate what makes your product different. So I get this question fairly frequently (I realize I didn't mute my phone): why not just use an existing distribution? You can use Debian, you can use Fedora. These are certainly valid choices, but you're losing a lot in terms of flexibility. You're very dependent on what's going on upstream, and you're dependent on the cadence of releases and on developers who are outside of your control. Building from source is going to give you a lot more flexibility and control over your embedded image. And that holds regardless of whether or not you're using one of the reference distributions around the Yocto Project, of which there are several; Angstrom and Poky (that's the correct pronunciation) are two good examples. However, even those have specific goals that you need to keep in mind. For instance, Angstrom is really focused on enabling hardware hobbyists to get access to their boards and move on, so it's primarily a binary package feed. Poky is really more focused on validating the build system itself, so as a result it tends to be a lot less stable. There are also commercial ones.
This is where I get to plug Mentor Embedded Linux, which I'm the architect for. Our value-add is that we take the Yocto Project and add stability to it. So regardless of anything else, building from source is really going to help you enhance the security of your product, and that's, again, at all four of those tiers, or the three intermediate ones. It's going to help you improve the timeliness of your product: when you have a specific issue or you need to address a specific feature, you can do it right there. It also allows you to greatly customize your image size. This has become less important as time has gone on; in the early days of embedded, image size obviously drove a lot of decisions, but these days the footprints are staggering when you think about them in historical terms. Licensing continues to be a big focus and a big problem for companies. I'd describe it as an allergy: a lot of corporations refuse to absorb GPLv3 into their code base, so tools that help make sure that doesn't happen are very, very useful. The Yocto Project incorporates such tools, and you can verify that they're correct by looking at the source. So it goes back to being able to build from source. Building your own distro allows you to support those different customers I talked about, both internal and external, in a very useful way. So that's all the preface. Any questions? So in most organizations, well, hopefully most organizations, you've got somebody monitoring CVEs and making sure that as exploits are discovered, they're dealt with in your product. If you find an exploit in a binary package that you're getting from an outside source, you're then dependent on that outside source to provide you with the update that addresses that security fix. If, on the other hand, you have the source package, you get the fix and you can apply it yourself.
And in most cases, the patch is already available by the time the CVE is published. Or you can go find and fix your own issues from reports from customers, and maybe you're the one who actually publishes that patch back out. Good question. Anybody else? All right. Okay, this gets into lots of screenshots. The Yocto Project, as I said, has some great documentation. My template here was to take the Quick Start guide that's out there, which is very much a recipe, follow these instructions and you'll end up with something, and pull it apart a little to use it as a framework to explain some of the pieces of the Yocto Project. So, moving on, this is what your initial download will look like. As you can tell from the name of the directory, this is danny-8.0; I'm going to completely sidestep the whole versioning aspect of things, you can ask me about that at the end. This is what the basic download looks like if you extract it onto your local system. There are several pieces worth looking at here. Obviously, bitbake at the top there is critical. That is the engine that drives the compilation process: it interprets the recipes, puts together all the metadata, and actually generates your image at the end. The documentation I've already talked about. And then you'll see these meta directories; all of these meta directories are metadata, and they describe how to build things. I'll touch on that a little more. You also see, nicely color-highlighted in my slide here, a script, which is your starting point. Then a typical README, and the scripts directory is kind of the glue. So: lots of layers that contain all this metadata, but we'll get to that in a minute. Let's go ahead and run that script and see what happens. There's a lot of stuff on the screen here.
Essentially, what this boils down to is that it's going to create a directory with two files in it. You'll also notice, down there at the bottom, that it gives you a set of targets. BitBake is actually built around targets; the definition of a target is generally an image, but it's somewhat more flexible than that. So here you see a set of stock images that the Yocto Project provides. A good starting point would be core-image-minimal, which is just that: a minimal image. Not that I know of, though that's a good question. Usually I just look in the metadata directories. Right, and that's exactly what I do. But there isn't, as far as I know, a good wrapper for that, although it would be fairly trivial to write a script to do it; this is one of those things that would be useful to have. Hob essentially gives you a look at that, because the difference between the command line, which is my focus here, and a GUI is, of course, that one emits data and the other is definitely a query-response kind of format. So yes, the GUI actually highlights images much more easily than the command line does. But essentially, this is just a quick run; it doesn't really do that much. When you look at it, it's creating, like I said, two files. If you look closely, this is the poky directory; below that, it has now created a build directory. Inside that build directory, there's one subdirectory and then two files. And that's all that script is really doing, except that it's also setting an environment variable (actually, I think it's two) and updating your path to point to some of the pieces inside this tree. For the most part, you don't have to worry about it; it doesn't really affect you initially. Understanding what it's doing can help later on if you, for instance, need to pass variables from your environment into the build system.
Knowing that the script helps control that process, you can take a look at it and tailor it to your own specific needs. If you run it a second time, it detects that the default directory is already there. I didn't mention that there's an argument you can give to this script, which is the name of the directory it will create; by default it creates a directory called build, but you can call it whatever you want and it will do essentially the same thing. So on the second run, all it's really doing is setting the environment up for you. Again, the path is set up, and there's something called BB_ENV_EXTRAWHITE, I think that's the full name, that passes environment variables in. So there's been no change to the tree at this point. Let's go ahead and run a build. This is just a snapshot; I actually ran a Sato build. So core-image-minimal is just a very minimal build. core-image-sato is an example UI based on, I think, the GNOME UI. It's a good image to build because it touches a lot of different packages, a lot of different recipes. I don't generally use most of the other targets, but if you need to, they're there for reference. So, running a build, and here's where I inject a little humor: an initial build is dependent on two things, the speed of your machine and the speed of your network connection. You can pre-download a large chunk of source, and that's one of the things that Mentor Embedded Linux, or MEL, preloads for you. But it can typically take between one and two hours; that's actually very common for a Sato build. This beast that I'm presenting from is a workstation, or baby server, really, and it takes about 88 minutes for an initial Sato build. So be prepared: the first time takes a substantial amount of time.
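To put the commands from this walkthrough in one place, here's the sequence as I understand it for a danny-era poky checkout (the setup script in that tree is oe-init-build-env; the build directory name is your choice):

```shell
# From the top of the extracted poky tree: source the setup script.
# This creates (or reuses) the named build directory, writes its two
# conf files, and adjusts PATH and a couple of environment variables.
source oe-init-build-env build

# Then kick off a build from inside that directory. The first run
# fetches and compiles everything, so expect it to take an hour or two.
bitbake core-image-minimal    # or: bitbake core-image-sato
```

Running the script a second time against an existing directory just re-exports the environment, as described above; it won't clobber your configuration.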
The good news is that subsequent builds take a lot less. So let's take another look at what our tree looks like now. You're going to notice, and I debated with myself about hiding this, that there are two links there pointing off to another location on my drive. That's an optimization I've done, and I felt it was useful enough to show. This is what the directory looks like after the build is complete. You'll notice that really only three new directories were created: downloads, sstate-cache, and tmp. downloads is where all of the source packages that BitBake has acquired from the network are stored; if they're there the next time it runs, it will use them. It also maintains what's called shared state: as it compiles packages, it runs a hashing algorithm that identifies when there have been changes. It's actually very robust at this point; it was introduced in 1.3, right? So 1.3, and it's matured quite rapidly. What I've done here by symlinking is made sure these are shared. By default, the behavior is to create them as separate directories, which makes the build nice and self-contained, but especially for something like source downloads, that doesn't make sense. This is one way to do it; there are others. In local.conf, which I wasn't going to get into in much detail, there are ways to configure this behavior without symlinks. I just have a nice little script that does this for me as an automation. And then the tmp directory is the build output: that's where everything goes that isn't a source cache or shared state. So let's take a look in there. The shared state is actually reusable, so when you go through, no, no, okay, maybe I shouldn't have shown the symlink. This is pointing to a separate directory on my machine, so it's shared among the builds on my machine.
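As a rough mental model of what shared state buys you, here's a toy sketch; this is my simplification, not BitBake's actual signature code. The idea is just: hash a task's inputs, and if a cached result already exists under that hash, reuse it instead of rebuilding.

```shell
#!/bin/sh
# Toy illustration of the shared-state idea (a simplification, not
# BitBake's real implementation): hash the task's inputs; if a cached
# result exists under that hash, skip the "rebuild".
sstate_run() {
    inputs="$1"
    sig=$(printf '%s' "$inputs" | sha256sum | cut -d' ' -f1)
    cache="sstate-cache/$sig"
    if [ -e "$cache" ]; then
        echo "reused $cache"
    else
        mkdir -p sstate-cache
        : > "$cache"    # stand-in for the packaged task output
        echo "built $cache"
    fi
}

sstate_run "busybox-1.20.2 defconfig"   # first run: built
sstate_run "busybox-1.20.2 defconfig"   # same inputs again: reused
```

Change the input string (a new revision, a new configuration) and the hash changes, so the task "rebuilds"; that's why an identical configuration in a fresh build tree can reuse a shared sstate-cache.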
There are actually different levels of sharing that you can do. For instance, you can publish the shared state so that all machines on the network can see it; that involves a little more tweaking of your local.conf, which is inside that conf directory I showed a little while back, but I don't want to go that far back. The point here is that when I create a new build structure, I don't want it to always have to download the same BusyBox source file that it did last time, and I don't want it to have to rebuild it if the configuration is the same. So if I'm building the exact same configuration in a new build structure, then the shared state is reusable. It's also helpful, and really more useful, at a team level; this is just how I manage my personal machine, but if you've got a whole bunch of people, that's where it's most useful. And in fact, that's one of those things that it does; I'll look at it a little more, but shared state is actually pulled in and expanded as part of that process, so it's checked. Okay, we can come back to that. So I showed you the tmp directory at the top level; here I've gone two levels deep, and there's a whole mess of stuff. There are a couple of directories in here I want to look at real quickly. buildstats holds useful build statistics, as the name would imply; I'm not going to go into any more detail on that, but there's some useful information there later on, after you've been running builds. deploy is one that's important to know, because that's where your actual images go. So keep that in mind if you're hunting around, trying to figure out: where did my root filesystem actually end up? Where did my kernel image end up? It's underneath there. Was there another question? No? Okay. pkgdata I've found pretty useful because it describes a whole lot of information about the packages. But work is really where the action is.
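Instead of symlinks, the cleaner way to share these caches across build trees is a couple of lines in conf/local.conf. The variable names are the real BitBake ones; the paths are just examples, point them wherever you keep your shared caches:

```bitbake
# conf/local.conf -- share downloads and shared state across build trees
# (paths are illustrative)
DL_DIR = "/srv/yocto/downloads"
SSTATE_DIR = "/srv/yocto/sstate-cache"
```

With those set, every build directory on the machine pulls from, and contributes to, the same download and sstate pools.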
This is where most of the stuff you'd be looking at is going to be. This is where source archives get extracted, where shared state actually gets expanded, where logs get generated, and where the scripts that actually get run get put. So this is dialing down into what's in the work directory. You'll notice that things are now separated by target: you're looking at all-poky-linux, i586-poky-linux, qemux86, and so on. Some of these are targeted toward the native host environment; those tools are generally going to be under x86_64-linux, since this is a 64-bit machine. The other ones are, as they'd indicate, more for a specific architecture. So how far down do I need to go to find something useful? If you look up there at the top, you're now dialed down about five or six directories, and inside that directory is where stuff is really going on. This one is for BusyBox. There's some useful information you can extract just from looking at this: it's BusyBox version 1.20.2, and r2 is actually related to the revision of the recipe that was used to build it. Inside this directory you're going to see busybox-1.20.2, and that is where the source archive is extracted to. There's deploy-rpms, image, and so on. The packages-split directory I find pretty interesting, because you can actually see how your packages get spread out into installable pieces; I'll talk about that a little more in a minute. And the temp directory, again, is of interest because that's where the logs go and that's where the run scripts go. So I'm flying through this; are there any other questions before I move on? Okay, so this is inbound versus outbound. The packaging of the source archive coming in depends on whatever the upstream is providing. So tar.gz is fairly common, and you can have other forms here: if it's a Git recipe, for instance, it'll pull straight from the repository.
Outbound, you have choices about which package format you want to use. The default for Poky and the Yocto Project is RPM, which is a little different from OpenEmbedded, which uses the opkg (ipk) format. But it's tunable; you can choose. That's a good question. It's kind of jumping ahead, but it's a good time to talk about it since we're here. There are different workflow models. The thing about this directory is that the source is going to get replaced if, for instance, the revision changes: this directory gets left behind, and what gets built is the next revision up. So what you can do, and many people do, is build repeatedly out of this directory, get your patches where you like them, and then commit them as patches to another recipe. In the process of doing that, you've now captured your changes. There are a bunch of different workflows; that's one. Other workflows involve actually having BitBake track the source repository, like Git. That requires the repository to have a set of canonical revisions that's trackable. So Git works, Subversion works, Bazaar I think works; there are a few out there that are a little wacky and don't quite work for that. It depends on what you want to do. This is one of those areas that, in my opinion, is poorly addressed today, and we actually talked a little about this earlier. This is the part of the cycle where: okay, great, I've got my platform build; the application developer wants to come in and do their work. One mechanism you can use, and this is nowhere in this presentation, is to build an SDK. You give the SDK to the application developers, they work against that, they get their source code where they want it, and then they submit it back to the platform builder. The platform builder then writes the recipe and integrates it back in. Yeah, so for kernel work there's actually a somewhat different workflow, and I don't touch the kernel to save my life.
That one's a pretty complex animal. I think most kernel developers work with a Git tree, get it to where they're happy with it, and then worry about integrating it back in. As for the toolchain: yes, it is built, by default. You can use an external pre-built toolchain if you want; I didn't cover that here just because of time. We sell one, so feel free. But yes, the toolchain by default is actually built as part of your initial build. Okay, that's probably a deeper question than I want to get into right now; I'll get with you after. I will show you in this slide, though, a little bit about where you can find some of it. One of the most useful learning tools, and one of the most useful features of BitBake initially, is dumping the environment. This gives you an idea of what's going to happen when you, for instance, run a BitBake build of BusyBox. The -e option just dumps the environment to standard out; you probably want to capture that someplace. It essentially gives you the environment that BitBake is going to run the build in. And the run scripts, which are located deeper down, are actually the answer to your question, Michael. I find this very useful: instead of cd-ing down into the directory structure and trying to find which directory it was, was it under all-poky-linux, or was it under x86_64-linux, or wherever it was actually located, you can dump the environment for the target and grep for the S variable. I hate for this to be magic, but essentially S is the source directory. That will give you a fully expanded source directory path that gets you where you want to look. There are a lot of variables like that, and I have an example recipe where I'll try to point out how some of these are generated. But for the most part, this is probably the best way to look at it. And then the working directory is the same sort of thing.
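The environment-dump trick described here looks like this on the command line (run from inside the build directory; S and WORKDIR are the real variable names for the source and working directories):

```shell
# Dump the full environment for a recipe and capture it for browsing:
bitbake -e busybox > busybox-env.txt

# Or pull out just the variables you care about, e.g. where the source
# was extracted and where the recipe's work happens:
bitbake -e busybox | grep '^S='
bitbake -e busybox | grep '^WORKDIR='
```

The same pattern works for any variable in the dump, which is why walking through the -e output of a simple recipe is such a good way to learn the taxonomy.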
In fact, they're generally going to be rooted in the same place. So these are three different ways you can use it. It's very instructive; I found it very instructive to just dump one of the simpler recipes using bitbake -e and then look through the variables that are set and what they're set to. It helped me a lot with the taxonomy of the tree and the way the pieces are laid out. It begins to make a little more sense, and you'll get a feel for what gets inherited by default. I will touch on that again. How are we doing? Whoa. Okay, I've got to speed up. So what are layers, then? BitBake layers are basically a way to collect different recipes together. If you do a good job of creating modular recipes, it makes your reuse and your maintenance a lot easier going forward. I think of it in terms of the typical layer cake; it's just a way to aggregate pieces together. This is an example of one of the layers included with the Yocto Project by default: the meta-yocto-bsp layer. You'll notice there's a conf directory and, underneath that, since this is a BSP layer, a machine directory. I've taken the files out because otherwise it was too big. I don't know if you guys can see that; is it too small? Does that work? You'll notice there's also a standard naming convention applied here: recipes-bsp, recipes-core, and so on. Below those are the names of the recipes that are going to be built, alsa-state for instance, and then, dialing down, there's some more information there. I'm just trying to give you a feel for how these things are laid out. All these recipes are the metadata. So if a layer is just a collection of recipes, how do we explore them? Given the number of layers and the number of recipes that can go into a build, this is a pretty non-trivial task, so a useful way to track these down is a tool called bitbake-layers. Not everybody uses it.
Unfortunately, I just don't know why people don't use it more, but these are the options that you have. It's extremely useful, and I didn't get a good screenshot of this; if I have time, which I don't think I will, I'll show it to you live. It'll show you the layers and the priority in which they're applied. It will also let you look at the recipes available from the entire collection of metadata at one point in time. I find that extremely useful when I'm trying to track down a specific piece of functionality, a specific recipe. So if we have all these layers, and they're made of recipes, then what are these recipes? Well, this is just an example; this one happens to be from poky-tiny, the init one. There are a lot of fields in here. The best advice I can give you is, when you start writing recipes, start from a template; there's a skeleton example out there that will help you out, though again it's not as widely used as it probably should be. There are some key things to look at here. LICENSE is something you're going to fail QA checks on if you don't provide good information in it. This is the license of the source package that you're going to build, not of the recipe; that's apparently something a lot of people get a little confused about initially. PR stands for package revision. Unfortunately, packages, recipes, and source packages are sort of munged together; there's a definitional problem there that people get a little hung up on. So one thing I can recommend, and in fact I think I covered it, is taking a look at a blog post from Chris Hallinan on the Mentor page that talks about some of this terminology and clears things up for people. One of the most important things to look at here is SRC_URI, which defines where you're going to get your source. In this case, it's trivial.
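To make those fields concrete, here's a hypothetical minimal recipe along the lines of the poky-tiny init example; the recipe name, script name, and install path are invented for illustration, and the MIT checksum shown is the common-licenses one from that era, so verify it against your own tree:

```bitbake
# hello-init_1.0.bb -- a hypothetical minimal recipe (names are made up)
SUMMARY = "Install a trivial init script"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
PR = "r0"

# Fetched from the files/ subdirectory next to the recipe
SRC_URI = "file://init"

# Nothing to configure or compile for a plain script, so blank out
# the default tasks
do_configure() {
    :
}
do_compile() {
    :
}

do_install() {
    install -d ${D}${sysconfdir}
    install -m 0755 ${WORKDIR}/init ${D}${sysconfdir}/init
}

# What actually lands in the binary package (and so the root filesystem)
FILES_${PN} = "${sysconfdir}/init"
```

Note that LICENSE describes the source being packaged, LIC_FILES_CHKSUM is what the QA check verifies, and PR bumps the package revision when the recipe changes without a version change.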
These are all locally attached inside the actual recipe subdirectory: there's a file named init and an rc.local sample. The recipe then blanks out some of the tasks, do_configure, do_compile, and do_install, which are things that, in a basic recipe, happen by default. In this case, the recipe is so trivial that it was important to override those defaults, because configuring didn't really make sense; it's essentially copying in an init file. The last thing here is the FILES directive. This is actually what defines the files that go into the binary packages, which are what end up in your root filesystem. I'll touch on that. So then the question is: what the heck are packages? This is where we talk a lot about some of the distinctions. If a recipe is the directions that BitBake uses to take a source file, or a set of sources, and output something, then a package is what that something is. It can be a binary package, a set of headers, a set of documentation. In the case of the previous example, it really was just one standard file copied into the root filesystem. One thing to keep in mind, and this one hangs up a lot of people initially: the name of a package is not necessarily the same as the recipe's. They're usually related somehow, but there are special cases where they're not alike. So distinguish between a source package, which is an archive you bring in; the recipe name; and the packages that get put into the root filesystem. It's important to know that multiple packages can, and in almost all cases do, come from the same recipe. So I talked a little bit about a binary package; that's the actual compiled output. The standard four are the binary output, a dev package, a debug package, and a doc package.
And those just collect files with a standard wildcard search over the tree and put them into the package format you chose, either RPM or ipk. This is controlled by the FILES variable. So if you wanted to change, for instance, the way files go into the dev package, you would modify FILES_${PN}-dev (FILES_${PN} being the default binary package) and add in an additional wildcard. Now, this slide is a little munged because I didn't get all my line breaks correct; there are actually two files in that FILES_${PN} there. This is critically important: when you go to add something to an image, you need to add the package, not the recipe. For instance, if you want to add in the dev pieces, you would generally give the package name with the -dev suffix. Again, just pay attention to this. When you first start doing it, it seems a little counterintuitive, but as you do it a little, you'll figure out it's really not that bad; it just takes a fair amount to explain. So what, then, is a bbappend? Well, the recipes, and I completely forgot to say this, a recipe is actually captured in a .bb file, which stands for BitBake. A .bbappend file is a way to add customizations without completely throwing away the original recipe. This is used with layers, and in particular, this is used to allow for customization that can track against an upstream. So if you have a specific patch you want to apply, but you want to continue to track an upstream package, this is the way to do it. So now you can see how these pieces interact. One thing to keep in mind is that this is the core of the reason why you want to be careful about how you segregate your layers.
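A sketch of both ideas, using a made-up recipe name, myprog; the variable names and the danny-era _append/_${PN} syntax are the real conventions:

```bitbake
# In the recipe: add an extra wildcard to what lands in the -dev package
FILES_${PN}-dev += "${datadir}/myprog/*.cmake"

# In your image recipe or local.conf: install binary package names,
# not the recipe name -- note the -dev suffix and the leading space
IMAGE_INSTALL_append = " myprog myprog-dev"
```

If you asked for the recipe name where a package name is expected, the image build would only pick up the default binary package, which is exactly the confusion described above.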
More often than not, the first thing you're going to do when building your own distro is create your own layer for your distro, or for your hardware, and a lot of the time it's good to keep those separate from one another so that you can, again, modify them somewhat independently of each other. bbappends, as the name implies, are really additive: they add something on top of, or to, an existing recipe. Unfortunately, subtractive operations are technically difficult, to the point where some operators will probably never exist; it's been a popular question, we got it yesterday. So when you get to a point where you really want to pull something out, more often than not what you're going to end up doing is overriding the recipe entirely. That means you're no longer tracking upstream, so there's a little more maintenance cost there, but that's sort of the state of the art as of right now. Okay, so before I close, I want to introduce something. Chris Larson works at Mentor Graphics; he was one of the core developers of OpenEmbedded for many years and contributes heavily to BitBake. He's been working on a new tool called bb. It's based on, I can't remember what the actual model is, but it's a command-line-driven model where you have a bunch of subcommands; I think the library is called subcommand. It's still kind of alpha, but it's an extremely useful tool. I would suggest taking a look at it, because it gives you the ability to track down dependencies. One of the more common questions that we see is: what is bringing something into my build? Why is it that this particular file is showing up, or why is this package showing up? It's difficult to track a specific file to a particular package as it stands right now; that's one of our biggest asks. But certainly at the package level, this is something that will allow you to track dependencies.
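The patch-on-top-of-upstream pattern mentioned here looks like this in practice; the layer name and patch file are hypothetical, but FILESEXTRAPATHS and the ${THISDIR}/${PN} idiom are the standard way to do it:

```bitbake
# meta-mylayer/recipes-core/busybox/busybox_1.20.2.bbappend
# (layer name and patch file are made up for illustration)

# Let the build find patches in a directory next to this .bbappend:
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"

# Apply our local patch on top of the unchanged upstream recipe
SRC_URI += "file://fix-widget.patch"
```

Because the original .bb is untouched, you keep tracking upstream: when the base layer updates the recipe, your append, and your patch, ride along (or conflict loudly, which is also useful).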
I kind of flew through that recipe that I threw up there. One of the things that you're gonna document very carefully in that recipe is what it depends on for build and what it depends on for run. Those are two very important pieces that you need to get right, or else it's not gonna build correctly, or after it's built correctly, it won't run correctly. So those DEPENDS and RDEPENDS are tracked by the bb tool to show you, and I can't believe I didn't put any of this here. The subcommands for bb are show, which dumps the BitBake metadata; show depends; show provides; what depends; what provides; and what runtime-provides. So I don't think I have a good example for you, but this is something that afterwards I can certainly show a couple of you. It's very, very useful to be able to help you track down where things are coming from in your final image. Tim, you can, it's just this one is a better query tool than the package graph. So there's a bitbake -g, and I debated putting that one in here, and I actually didn't. That will generate a dependency graph, and you can use a visualization tool to actually view it. But what I found is that when you get anything more than a fairly trivial recipe, it gets somewhat problematic to see what that is because the graph gets so big that the screen isn't big enough. Now if you had a projection wall like this, then you could track it, maybe. So this one is nice because it does accept some wildcards, like bitbake-layers does. And it's an evolving tool that I think is gonna be very useful for people to be able to track things down. The intent is to eventually add in the ability to isolate a file out of the root file system and say, where did this silly thing come from? And if you don't want it, then you know where to go and modify that. Okay, we're running close on time I think, but it is not on git.yoctoproject.org. I didn't put the URL up, did I? It's on GitHub. Look under kergoth, which is Chris Larson's handle, bb.
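As an illustrative sketch of the two query paths described above (the bb tool was alpha at the time, so the exact subcommand spellings may differ; check its help output, and busybox is just an example target):

```shell
# bb tool: query the parsed BitBake metadata for a recipe,
# its build-time dependencies, and what depends on it.
bb show busybox
bb showdepends busybox
bb whatdepends busybox

# Stock BitBake alternative: emit dependency graphs as Graphviz
# .dot files (file names vary a bit by release, e.g. pn-depends.dot,
# task-depends.dot), then render one for viewing.
bitbake -g core-image-minimal
dot -Tpng task-depends.dot -o task-depends.png
```

For anything past a trivial recipe, the rendered graph gets unwieldy, which is exactly why a wildcard-capable query tool like bb is the more practical option.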
I can give you guys that URL afterwards. K-E-R-G-O-T-H. So github.com/kergoth/bb should get you there. Okay, so we're kind of winding down. I was trying to, sorry about that, I was trying to show something of a useful example. I didn't put everything in here that I meant to. This is an example of trying to track down busybox. I like busybox because everybody knows it. I use bitbake-layers to show the recipes that were available in my metadata. You'll notice that I'm sitting in the build directory. That's how it knows what metadata to take a look at. And you can pass in a wildcard, so if you look at it, that busybox star there. It's going to, first of all, give me a warning because I'm using Ubuntu 12.04.2; I think 12.04.1 was what was supported as of Poky danny, but it works. You see that it actually parses the recipes. So BitBake goes through initially and it actually builds all of the metadata into a database that it can then query and take a look at. And you notice that it says that there's available recipes, and it's in meta, and it's busybox 1.20.2. For those who are paying attention, that's the example that I showed you before. That work directory corresponds to this version. So okay, that's great. That tells me that it exists. It tells me a little bit of information about where I can find it, but what about where the output went? And so you'll notice that the next line down, I did that environment command that I showed you already, and I did a grep for the source. And now I can actually figure out, okay, look, here's the directory that the output actually is in, or this is the one that the source is coming from. So when it got extracted, it got extracted into this subdirectory, which is that horribly long path there after the S. And then the work directory, you'll notice, is basically one directory up from that.
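The busybox hunt described above boils down to two commands, sketched here from inside the build directory (the exact output format varies by release, so treat this as illustrative):

```shell
# List recipes matching a wildcard across the configured layers.
bitbake-layers show-recipes "busybox*"
# Reports something like:  busybox   meta   1.20.2

# Dump the recipe's fully expanded environment and pick out where
# the source was extracted (S) and the work directory (WORKDIR).
bitbake -e busybox | grep -E '^(S|WORKDIR)='
```

S is the long extracted-source path, and WORKDIR sits one directory up from it, exactly as described in the walkthrough.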
So all of that information is then collected under tmp/work; in this case it happens to be in the i586-poky-linux busybox directory. So that's where, if you wanna look at and work with the busybox recipe output, that's where you would go look. So when you're tweaking for your distro, if you decide, for instance, that you wanna play a little bit with busybox and reduce down some of the pieces that busybox provides for size or anything else, this is one of the places that you would go and look. So how do I add something into my image that didn't exist? This is a little tongue-in-cheek, but it's really actually not as tongue-in-cheek as you might think. First thing you gotta do is develop whatever your new application is. I call it application, but it could apply to just about any module in the system, kernel or otherwise. There's a lot of workflows; this goes to your question earlier. You can do that in the temporary directory. You can do that as a completely different source repository manually populated in. I've personally, quite successfully, used a local extraction of the root file system, NFS-mounted to my target, and then I can poke my binary in directly, and when I'm happy with it, then I commit it to source control and update my recipe. It's kind of a chicken-and-egg problem, then. If you already have your platform build running, then great, now you have a way to poke things in. You can either have an SDK, or you can build, like I said, with a root file system using NFS, but that sort of presupposes that you already have that. Well, if you don't start with that, then you gotta build that first. So it really depends on what your workflow is. For application developers, if they are wanting to just get started, honestly, I would tell them to start working against their host first until the platform builders at that bottom layer of those four can provide them with an SDK or with a platform build in turnkey form.
Once you've got your application, you can create the recipe. I showed you that sample. Use the templates that are provided; there is a skeleton recipe out there. I would like to point out, though, that one of the things that's really cool about this project is the number of people and active developers that are willing to answer questions. We've got a couple of them in the room. You happen to be sitting in the front, so I'm gonna pick on you, Saul. These guys sort of live and breathe and don't seem to do anything else outside of IRC. So if you have a question, ask it on IRC. If for some reason you can't get an answer, try the mailing list. There's actually sort of a stupid amount of email that goes on the mailing list, but there's also a lot of really good answers to questions and the like. So take a look at the skeleton recipe. Once you've done that, add that recipe to your layer. Again, I would suggest taking a look at some of the recommendations in the development manual and the reference manual as to how to partition these, but a general rule of thumb is have one layer for your BSP and one for your applications, or if you have to, put another one in for distro, and then add that into the image. And I think I actually show as the next one how to add that. So in order to add it into the image, you inherit from a specific image. There are, again, multiple ways. One of the cool things about it, and one of the worst things about it, is that you can do just about anything that you want in about six different ways, and there's enough rope to hang yourself, and there's some subtle differences. Now, hopefully I'm not scaring you with that. The point is, if you get a workflow that works for you, stick with it unless you have a reason to change. Identify one that seems to match and move forward. So this one is one that works for me.
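For reference, here is roughly what such a recipe looks like, in the style of the era's skeleton examples. Everything here is hypothetical (the name helloapp, the dependencies, the checksum placeholder), but the variables and tasks are the standard ones, including the DEPENDS/RDEPENDS split discussed earlier.

```bitbake
# recipes-apps/helloapp/helloapp_1.0.bb  (hypothetical recipe)
SUMMARY = "Example application recipe"
LICENSE = "MIT"
# Point this at your real license file and its md5sum.
LIC_FILES_CHKSUM = "file://LICENSE;md5=<md5-of-license-file>"

SRC_URI = "file://helloapp.c file://LICENSE"
S = "${WORKDIR}"

# Build-time and runtime dependencies, respectively.
DEPENDS = "zlib"
RDEPENDS_${PN} = "busybox"

do_compile() {
    ${CC} ${CFLAGS} ${LDFLAGS} helloapp.c -o helloapp
}

do_install() {
    install -d ${D}${bindir}
    install -m 0755 helloapp ${D}${bindir}
}
```

Getting DEPENDS and RDEPENDS right is exactly the point made above: the first wrong means it won't build, the second wrong means it builds but won't run.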
I inherit from an image, and then whatever the package, again, package, not recipe, that I want to add to the image, I add to this IMAGE_INSTALL variable. I am just about out of time, so this is how to actually add files and packages into a specific recipe. So there's those four default ones. This assumes that you have a working recipe. Add the package names; you can see that in that second bullet. And then for each of those package names, you can add a set of files, and you can see that there's a wildcard specification there. The implication here is everything under foo files will be created. It's a regex expression, so it'll match whatever. I think I'm not gonna have time for a lot of questions. Okay, so a few final thoughts. Most of these are common sense; I'm just gonna run through them real quick. Building a distribution from scratch is actually kind of a daunting task. The good news is that the Yocto Project gives you a tremendous running start. I would really strongly suggest that you take something that's existing first. So a BSP: the semiconductor vendors that are on board with the Yocto Project are starting to adopt that more and more. Intel is providing some good ones, Freescale is also providing some good ones, TI is providing some good ones. This gives you a great place to start from, and then you're not starting at zero. So again, it goes back to that whole, don't work a whole lot on the stuff that doesn't add value for your customer. Get comfortable with the process. Dip your toes in. Make sure that you understand the roles and the workflows in your organization. That's very important, because you can try and mismatch, you can try and force something that isn't gonna work. Play around and explore. Using that bitbake -e command is extremely useful, like I said, to understand the way pieces play together. These tools are not perfect; no tool is. There's actually some pretty significant gaps, but that's where we can use a lot of help.
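A sketch of the two pieces just described, with hypothetical names throughout (my-image, helloapp, the -extras package); IMAGE_INSTALL, PACKAGES, and FILES are the real variables:

```bitbake
# images/my-image.bb: extend a stock image rather than starting from zero.
require recipes-core/images/core-image-minimal.bb

# Add package names here, not recipe names; helloapp-dev pulls in
# the dev package split out of the helloapp recipe.
IMAGE_INSTALL += "helloapp helloapp-dev"

# --- and inside a working recipe, to split files into an extra package ---

# Declare the additional package name...
PACKAGES += "${PN}-extras"

# ...then say which installed files it collects, wildcard allowed.
FILES_${PN}-extras = "${datadir}/helloapp/*"
```

The package-not-recipe distinction trips people up at first, but it falls out naturally once you remember that one recipe commonly produces several binary packages.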
And honestly, one of the places where I think we really could use the most help, and I'm looking at you, Tim, is from that third layer, from the guys who are trying to build a product to give to a customer. Because we have a lot of OS vendors, I represent one, and silicon guys on the Yocto Project, but not a whole lot of people that represent companies that are really building a product to give to an outside customer. Okay, so that's all I have. I think I'm pretty much out of time, but we maybe have time for, say, two questions.