So, hello everybody, I am Kim, I am up again. Today I will be sharing some of the experiences I had taking Poky, the reference distribution that comes with the Yocto Project, and customizing it into a distro of your own. The features I will cover are not the only ones, there is a lot more to it, but I will cover the ones I customized. So Poky is the reference distribution that comes with the Yocto Project, built with the build system that Dave also mentioned in the morning. There are certain variables available which define the various distro policies. So the first thing you could do is take one of the standard distro configurations that are part of Yocto, essentially poky.conf. There is the Poky reference distro, poky-lsb for LSB compliance, and poky-tiny for small-footprint projects. Depending on the project you would like to work on, you could pick one of these, or base on more than one, and then we will go through what we change to make it our own. So these are the variables that define or identify your distribution. DISTRO is the variable that defines your distro name. It's a weak assignment in the Poky distro, so once you take Poky and define your own distro, you can rename it to something of your own. DISTRO_NAME appears in various places, like motd messages and so on. And MAINTAINER goes into the package feeds and the metadata that goes into packages. You can put a lot of information into MAINTAINER, which is very helpful in pointing your end users to the right mailing list, person, or website you want them to look at. TARGET_VENDOR goes into the middle part, the vendor part, of the GNU triplet.
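As a rough sketch of what such a distro configuration might look like (the "foo" names and values are placeholders, not from the talk), a minimal conf/distro file in your own layer could set the variables just described:

```
# conf/distro/foo.conf -- hypothetical custom distro derived from Poky
require conf/distro/poky.conf

DISTRO = "foo"
DISTRO_NAME = "Foo Embedded Linux"
DISTRO_VERSION = "1.0"
MAINTAINER = "Foo Distro Team <distro@example.com>"

# Vendor string that ends up in the middle of the GNU triplet,
# e.g. powerpc-foo-linux
TARGET_VENDOR = "-foo"
SDK_VENDOR = "-foosdk"

# Distro-specific override name usable in later metadata
DISTROOVERRIDES = "foo"
```

Because poky.conf uses weak (`?=`-style) assignments for these, your stronger assignments here take precedence.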
So when you build your SDK, for example, and you want to identify it with your distro, you could use the target vendor string. SDK_VENDOR is very similar to TARGET_VENDOR. And DISTRO_OVERRIDES: you can add your own overrides that are particular to your distro. You might have certain distro-specific features that you want some metadata you define later on to acknowledge, so you would add those new overrides to DISTRO_OVERRIDES. And there are many more variables; if you go through poky.conf, there are more variables defined in there, and I would certainly suggest you look at them and customize them to your needs. The next thing is adding your own layer. There is actually a sample file, bblayers.conf.sample, which has the standard layers that are part of the Yocto reference distribution. You would just append your layer to it. I just call it meta-foo; it could be something different. It doesn't have to start with "meta", it can be anything, it's just a name. Appending it there tells BitBake to parse this layer for configuration and metadata. Then, if you have to customize, say, your local.conf: for example, suppose for some reason you don't want to use the prelinker, and by default Poky's distro configuration uses prelink. You would define your own USER_CLASSES in your local.conf and omit image-prelink from it, which avoids the prelink step when your image is being generated. On some architectures prelink is not supported, or prelink has certain bugs, for example. This doesn't apply only to prelink, I just picked an example; there can be more situations like that, where there is a feature you really don't want in your distro.
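Those two tweaks might be sketched like this (the layer path placeholder follows the style of the sample files; exact placeholders vary between releases):

```
# In bblayers.conf (or the .sample that generates it):
# append your own layer to the standard list
BBLAYERS += "/path/to/meta-foo"

# In local.conf (or local.conf.sample): Poky's default is roughly
#   USER_CLASSES ?= "buildstats image-mklibs image-prelink"
# so to skip the prelink step, redefine it without image-prelink:
USER_CLASSES = "buildstats image-mklibs"
```

Since the Poky default uses a weak assignment, the plain `=` in local.conf wins and prelinking is simply never run for your images.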
You could define those in your local.conf.sample as well; when you use Poky's setup scripts, which start by generating your local.conf and so on, they will pick up your changes from local.conf.sample. Moving forward, suppose you want to build multiple BSPs, for example a PowerPC part and an Intel part. There are a lot of BSPs already available; I just picked ones that are not in Yocto by default when you check it out. Say you want to add a machine that's available in Freescale's BSP layer, and another one from meta-intel, which is a collection of all the Intel BSPs, say Crystal Forest, for example. We just mentioned that you add layers to the BBLAYERS variable, so you add these three layers to your bblayers.conf, and that sets up BitBake to look into these layers for additional metadata. Yes? [Audience] So COREBASE is defined in bitbake.conf, correct? Yes, that's right. That's why you are modifying the sample file, because the sample file will be used to generate your eventual bblayers.conf, and during that phase the placeholder will be replaced with the relative path where you have set up your Yocto tree. So you have your BSPs, and without any other changes you can now build for these machines once you add those layers. For example, you can choose your machine to be crystalforest and build the standard images that are provided, because we haven't modified images yet; you can build one of the standard images like core-image-minimal or core-image-sato or any of the LSB images that are out there. The next thing is tweaking a recipe. You might want to add certain bits but not really overhaul the whole recipe; you want to use it mostly as it is, but there is an additional bit you want. That's what bbappends are generally for. bbappends are applied by BitBake as it parses.
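Sketching that BSP setup (the checkout paths are assumptions about where you cloned the layers):

```
# bblayers.conf: add the BSP layers alongside the defaults
BBLAYERS += " \
  /path/to/meta-fsl-ppc \
  /path/to/meta-intel \
  /path/to/meta-intel/meta-crystalforest \
"

# Then pick a machine from those layers and build an unmodified
# standard image, e.g.:
#   MACHINE=crystalforest bitbake core-image-minimal
```

Setting MACHINE on the command line like this is just for trying things out; normally you would set it in local.conf.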
The recipe is parsed, then the bbappends are applied to it, and that makes your complete recipe in the end. There is a mechanism to bump the revision to show that you have a bbappend: the PRINC variable. This magic here essentially takes whatever the base PR of the recipe was and bumps it by one. These bits were important back when we managed PR by hand quite a bit; now there is an automated PR service as well. So likewise, you can do these little tweaks in your own layer, and since your layer is parsed, the append gets applied to the main recipe and acted upon when BitBake parses it. Many times you want to change things like the file layout or something like that. You can also override configuration data like file permissions: there is a file in the core metadata (fs-perms.txt) for creating directories and associating permissions with them, and you can adapt that to your own file structure and change those too. It's very flexible from that point of view. However, when you override configuration metadata, you always have to keep it in sync: you update to the next version of Poky, it has a few additional bits in there, and you have to sync your version of the configuration data. If you have a bbappend, the bbappend just adds to whatever changes are made in the proper recipe, so you get those changes for free; but if you override a file completely, then that part can come back to bite you. So just be careful. [Audience] So the question is, is this intended for modifying both SoCs and machines? Essentially it's not tied to any machine or anything; I just took machine as an example. It could be anything. But the SOC_FAMILY override is one override that you could use.
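A minimal bbappend along those lines might look like this (the recipe name and patch are hypothetical; the PRINC idiom follows the pre-PR-service style the talk describes):

```
# meta-foo/recipes-example/busybox/busybox_1.20.2.bbappend

# Let BitBake find files shipped next to this append
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"

# Add a local patch on top of the unmodified upstream recipe
SRC_URI += "file://my-tweak.patch"

# Bump the package revision so package feeds see the change
PRINC := "${@int(PRINC) + 1}"
```

The base recipe stays untouched in its original layer; BitBake merges this append in at parse time.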
Yes. Say in a practical case you have a change that applies to many machines based on the same SoC family, and you happen to have an SoC override available for those. Then you would use the SoC override to make that kind of change. But it's a case-by-case decision, because you're doing the tweaking and you know best what you're tweaking for. [Audience] Yeah, sure, it is possible. Say you're trying to change do_install: you want to install a particular file in addition to whatever is installed. You would write a do_install append and give it an override of the machine name, whatever machine you want it for, and that will apply just for that machine. But be aware of what it also does: it makes the recipe machine-specific. Say you have three different machines now; if it was a common recipe before, it will now be built separately for that machine. So it depends what your priority is. [Audience] bbappends are applied one after another; they're not overridden per se. It's not like you've got five different layers doing bbappends and only the last one holds or something like that; it will apply all of them, just adding one after another to the end, think of it that way. The order depends on how the layers are parsed, their priorities. And you could use BBMASK to mask things: say you don't want a certain append or recipe picked up, you define your BBMASK for it and BitBake will ignore it. [Audience] No, I think they stack on each other. It should always be like that. Yes, yes. Oh yeah, I remember; I think that implementation was totally different from what really went upstream into BitBake. Yes. So basically Richard is saying the same thing about the bbappends.
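The machine-specific append described above can be sketched like this (machine name and installed file are made up for illustration):

```
# In a bbappend in your own layer; the _crystalforest override means
# this function only runs when MACHINE = "crystalforest"
do_install_append_crystalforest () {
    install -d ${D}${sysconfdir}
    install -m 0644 ${WORKDIR}/board-tuning.conf ${D}${sysconfdir}/
}

# Caveat from the talk: adding a machine override like this makes
# the recipe machine-specific, so it is built (and packaged) per
# machine instead of being shared across all of them.
```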
The question was that there was a case where, with bbappends, it only took one bbappend and you had to specifically include the others; that was MontaVista's case. Richard was clarifying that that implementation was different: what went upstream always appends all the bbappends it finds. So, SDKs. We talked about the workflow, and there is a standard SDK provided in Yocto: if you just do bitbake meta-toolchain, it will build a basic C/C++ SDK for you, and sometimes that's enough. Sometimes you want to add your own packages to it, and you can customize that through bbappends too. That's what I've done here: I wanted, say, Boost in my SDK for some reason, so I went ahead and changed TOOLCHAIN_TARGET_TASK, which is one of the variables; you add the development package to it and it gets added to the toolchain. The append I have done here only adds one or two packages, so it's fine to go with a bbappend, because I want to reuse most of what's in meta-toolchain and just add a few packages on top. You can entirely go ahead and inherit this and do a bunch of things depending on your own needs; if you want to add more, you can write a new recipe, include this recipe in it, and add stuff there if you don't want appends in your own layer. I've used overrides here because there were certain parts I only wanted for a specific architecture, for example. And then I also needed some host tools as part of the SDK, which will be used by the end developers to get their job done. So you can easily customize the SDK offering. There is also a new mechanism, the image-based SDK, and that's a very good use case.
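The SDK customization above might be sketched as a meta-toolchain bbappend (the architecture override and the extra packages are illustrative, not the talk's exact list):

```
# meta-foo/recipes-core/meta/meta-toolchain.bbappend

# Ship Boost headers and libraries in the SDK's target sysroot
TOOLCHAIN_TARGET_TASK_append = " boost-dev"

# Only for PowerPC targets, add one more target package (hypothetical)
TOOLCHAIN_TARGET_TASK_append_powerpc = " libfoo-dev"

# Extra host-side tools bundled into the SDK for developers
TOOLCHAIN_HOST_TASK_append = " nativesdk-cmake"
```

TOOLCHAIN_TARGET_TASK controls what lands in the target sysroot, while TOOLCHAIN_HOST_TASK controls the cross tools that run on the developer's machine.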
You build your own image and you want an equivalent SDK that matches that image. Many times you have some of your own infrastructure, right? You have your own APIs that you want to share across teams: one group writing a library that needs to be shared with a different group that has nothing to do with them. The SDK is a common ground where they can publish their APIs, and the consumers can use the SDK to consume those APIs. So it's a very good thing. If you have put together a customized image for your target, you can just run the populate_sdk task, and that will generate an SDK installer containing all the development packages, libraries, and headers for the packages that make up your customized image. [Audience] The question was, should there be a space between -c and the option itself? The answer is you can use either; BitBake parses both equally well, so you could use "-c populate_sdk" with a space or "-cpopulate_sdk" as one word. Next, we talked about shared state, and shared state is one of the very good features you can use to your own benefit. For example, you build your own images and you don't want people rebuilding everything all the time; one of the complaints you always hear is, oh, it takes an hour or two to build my image just for one tweak. So we ended up setting up shared state. In your own distribution you define your shared state mirrors, hosted internally wherever you like. What this does is set up your end users who are building images to look for shared state there over the network, and that reduces the build time drastically. Right now most of the time is spent building the image itself.
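Those two pieces might look like this (the image name and server URL are placeholders):

```
# Build an SDK installer that exactly matches a customized image:
#   bitbake my-custom-image -c populate_sdk

# In the distro config: point all builders at an internal
# shared state mirror so they fetch prebuilt objects instead
# of rebuilding from scratch. "PATH" is a literal keyword that
# BitBake substitutes with the sstate subpath being looked up.
SSTATE_MIRRORS ?= "file://.* http://sstate.example.com/sstate/PATH"
```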
Pulling in all the packages through shared state is pretty fast, so the whole process becomes a lot faster and generating an image becomes a non-issue for developers. Even the application developers don't notice how the image is being built. Now, we talked about the layers; there are certain packages in certain layers which are not shared-state friendly. You can exclude them if you want to, but essentially they have to be converted to be shareable. [Audience] Yes, there is essentially a variable for that. I have set it up, but I didn't include it here because I really don't like it; I would rather fix the package itself to be shared-state friendly, since excluding is not a supported model per se. The name escapes me, but there is a variable available; you can set it and then BitBake will know that this recipe is not shareable. [Audience] Ah, so you don't want to know which particular package is bad, but rather: what should you not do when writing a recipe, so that the recipe will work with shared state? Yeah, so: don't hard-code paths. Don't depend on, say, the rootfs location or the toolchain location, and make paths relative if you can. There are certain packages out there which hard-code things like library search paths, and unless you have the same build tree everywhere, that is not going to work, because shared state is supposed to work from any top-level directory wherever you put it. Keeping your package relocatable will keep you clear of shared state issues. And there are some layers which have packages like that.
meta-java, for example, has certain packages with complex build systems of their own which depend on those things. The good thing is they should be fixed, because it's wrong anyway to depend on where you install the package; but yes, writing relative paths like that is good practice. So, download mirrors. [Audience] Right, the observation is that it's not just dependent on the binary: if you change something very minor, a description or something that is not really essential, will this trigger a different shared state object? I think it will, if you change the recipe; if you don't change the recipe, then it will be fine. [Richard] OK, so the rest of it: shared state works by hashes. It takes some data and builds up a hash that represents that particular shared state object, and it only puts into that hash the information that pertains to the particular task it represents. So it's very sensitive to changes. If you have a compile function and you change what's in that compile function, it doesn't know whether the output changes or not, so it assumes that it does and includes that. But if you have some comment in the file, it knows that the comment is not relevant, as long as it's not part of the actual compile function, and therefore it will ignore it. It only puts the things that make sense into the hash. The checksum consists of all the inputs that go in from the metadata; it doesn't know anything about the binary. It works on the inputs, not the outputs; otherwise you can't know in advance whether a particular shared state package is relevant or not. It needs to know, without building it, whether an object is good or bad, so it has to checksum the inputs, not the outputs. [Audience] So will it rebuild everything in order to rebuild that file? No, it will trigger.
It will, because it's based on the hash; it doesn't matter which. Even if it's set to AUTOREV, it knows which revision it built. By making a change to the upstream repository, you change it to a new git hash, and therefore you get a new shared state checksum. Yeah, AUTOREV will always trigger regardless. But if you have SRCREV set to a specific hash of your git tree, then if you don't change it, it knows the source hasn't changed; with AUTOREV, it doesn't know. But there are ways to exclude certain variables that you know for sure don't change the shared state output, and you can specify that in the recipe. So, the next thing I was going to cover was customizing the download mirrors. [Audience] Yeah, I think you're asking about where you host the shared state, say on NFS. Yes, so the host-distro-specific shared state is stored versioned. Say you built it on Ubuntu 10.04 and put it on a common server, and the other person is on Ubuntu 12.04: it will know that a given base object was built for 12.04 and not for, say, a different version. So the question was, can it handle shared state for different build distributions on the build host? And yes, it can. If you look into how the shared state is organized, there is a host-specific directory, and all the shared state for host-specific packages goes under that; so you can have it for CentOS hosts, Ubuntu hosts, and so on. [Richard] You can actually go a little bit cleverer than that, and you can map things as well. You can say that a 12.04 host can use something built on 10.04, because libc is generally backwards compatible, but your 10.04 machine would not use the 12.04 objects. There's a way you specify that in the shared state mirrors.
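The mapping Richard describes can be sketched as a regex rewrite in SSTATE_MIRRORS; the directory names below are guesses at how an internal mirror might be laid out, so treat this as an illustration rather than verified configuration:

```
# Let Ubuntu 12.04 build hosts reuse host-specific sstate objects
# originally built on Ubuntu 10.04 (libc is backwards compatible),
# while everything else falls through to the plain mirror lookup.
# Entries are separated by "\n"; \1 carries the matched subpath.
SSTATE_MIRRORS ?= "\
    file://Ubuntu-12.04/(.*) http://sstate.example.com/Ubuntu-10.04/\1 \n \
    file://.* http://sstate.example.com/sstate/PATH"
```

A 10.04 host simply never matches the first rule, so it cannot pick up 12.04-built objects.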
And we probably don't have a good example in the main repository, but you can set up mappings like that, and I know people who use it. Just look at the SSTATE_MIRRORS examples. You could even file a bug and ask, how do I do this? Because I know it is possible; it's been mentioned on the mailing list and probably in some commit messages, but I don't know whether it's been well documented. So yeah, just remind us that we need to document it. [Audience] Would it relieve everything, in theory? In theory, yes. In practice, the network access time tends to cause a problem: you'd spend an hour downloading the shared state files rather than doing the build, so you just swap one problem for another. And I know certainly that teams within some of the member organizations actually have NFS shares where this goes on as part of their normal workflow. So it's something the project uses too; as Beth said, it's part of the autobuilder cluster, so it's being actively used. And then there are licensing concerns as well: some places have processes where they cannot pull feeds from outside for some reason. And speed is another issue. That's what my next slide is about. Say you don't want BitBake to ever access the network outside whatever you have defined as your secure network. There is the BB_NO_NETWORK variable: you set it to 1, and BitBake will only look at your mirrors and never go out. Then this is how you define your source mirrors: you have to cache the whole source mirror internally, and then you can enable BB_NO_NETWORK and you're all set. One of the problems you might run into with this is if you have AUTOREV set on your internal recipes, it might not work, because it will say, I don't have any network access and I have AUTOREV to resolve, and the fetch might fail.
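That locked-down setup might be sketched like this (the mirror URL is a placeholder):

```
# Distro config or local.conf: never touch the outside network
BB_NO_NETWORK = "1"

# Point all fetches at an internal mirror holding cached copies
# of every upstream tarball and repository the build needs
SOURCE_MIRROR_URL = "http://mirror.example.com/sources/"
INHERIT += "own-mirrors"

# Note: recipes should pin SRCREV to a concrete git SHA.
# AUTOREV needs network access to resolve the latest revision,
# so the fetch can fail under BB_NO_NETWORK.
```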
So essentially you have to set the source revision to a specific git SHA so it can be fetched internally, which is the way it's supposed to work anyway, because if you have AUTOREV, then the recipe is essentially looking outside your network. Given that, if you have licensing concerns or something like that and want to maintain internal copies: you might have an import process at your company for bringing in packages from outside, and there is always legal escrow and things like that. You bring those things in, get them escrowed, and then you can reuse them all the time; this setup helps with that process pretty well. You could put it in your distro configuration, since it's your distribution policy, or you could also put it in your local.conf if you want; but putting it in your distribution policy makes it a lot clearer. The next thing is runtime package management. We talked about generating images and so on. This is based on 1.3; with 1.4 it will change because we have changed the front end for package management. There are image features, and one of them is the package-management feature. If you want to create feeds and such, you would go and add package-management to your image features, and that should bring all the necessary bits and pieces into your image, so that once you install the image on your device, it is able to run the package manager. In 1.3 we had zypper as the front end, and this is what it looks like, as you can see. I've added a local feed that is internal; this is the zypper config that I want to adapt. I define this as my repo file, whatever you call it, add all the bits zypper needs in there, and then hook it up into zypper. So the next thing is that we just write the bbappend for zypper.
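The image-feature side of that can be sketched as follows (assuming the 1.3-era RPM/zypper backend the talk describes):

```
# In local.conf or the image recipe: pull runtime package
# management (the package manager and its configuration)
# into the generated image
EXTRA_IMAGE_FEATURES += "package-management"

# Choose the packaging backend the feeds will be built for;
# in 1.3, RPM packages are what zypper consumes on the device
PACKAGE_CLASSES = "package_rpm"
```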
And zypper will then include your file, essentially your repo description, in the image. There are actually very nice documents on the Yocto wiki about how to set up your feeds using RPM and createrepo, for example. The other piece of it is that now you need to maintain your feeds on a feed server, and these are the changes you need on your device so that it will look at your feed server. This is for the case where you don't want to modify your image at all and the image should always have a pre-built feed path in there; but these are also adaptable, you can change them to other paths on the device itself if you like. Next I wanted to cover multilib a little. Multilib allows us to have, for example, a 32-bit root file system and a 64-bit root file system together. Say you have two different architectures which are both multilib, for example ppc64 and x86-64; I'm just covering that case. What I've done here is define multilib include files that are selected based on DEFAULTTUNE, a variable that's specific to an architecture; this will include the right one depending on the architecture. Remember, this is our distro serving BSPs for multiple architectures. And this include here is used instead of require, because require will issue an error if it doesn't find the file. Say you have a third architecture which is not multilib; you don't want to penalize it. If the file isn't found, include will not error, it will just pass on. That's actually the subtle difference between the include and require keywords of the BitBake recipe syntax, and I just took advantage of it. So these are the multilib configuration files I defined; one of them, for example, is for 64-bit PowerPC. Essentially it defines what the multilib libraries are and what the architecture of your 32-bit counterpart is.
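One of those per-architecture multilib include files might be sketched like this (the file name and layout are assumptions; the ppc64/e5500 pairing follows the talk's example, using the 1.3-era virtclass override syntax):

```
# conf/multilib-ppc64.inc -- pulled in via 'include' from the
# distro config, so an architecture without such a file simply
# skips it instead of erroring out (unlike 'require').
require conf/multilib.conf

# 64-bit base system with a 32-bit "lib32" multilib alongside it
MULTILIBS = "multilib:lib32"

# The tune the 32-bit counterpart is built with
DEFAULTTUNE_virtclass-multilib-lib32 = "ppce5500"
```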
So, for example, it's 32-bit e5500 for PowerPC and generic x86 for x86-64. You could also mark which tunes are compatible; you know best which 32-bit counterpart goes with your architecture. And that's pretty much, from a user point of view, how you customize multilib and use it. The packages you would then use start with lib32-: so you have bash, for example, and you want 32-bit bash for some reason; the package would be called lib32-bash, and likewise lib32-whatever. That's how you would include them in your images, if you have that use case. The next thing is eglibc. We have used the configurability features provided by eglibc so it can be tuned, and there are two contrasting cases. If you build Poky, it builds libc with all the features that are in there; and there is poky-tiny, which enables only a few features. You can see the size difference: just libc itself is around 700K if you build it for poky-tiny, while if you build it for Poky it's about 1.3 or 1.4 MB. So you can get into the areas where uClibc used to serve, not quite, but it goes down to around 700K for a libc that still supports a lot of software on top. It's a pretty decent feature. There are distro features for this; DISTRO_FEATURES_LIBC is the variable that defines them. [Audience question] I don't think it will; it's per architecture. Essentially those are two different distros if you look at it that way; you have profiles based on them, and I think you have to have separate packages for each state. [Richard] Well, the configuration that went into that particular build is captured as part of the shared state checksum. So if you've configured this with a particular set of libc features, it will preserve that in the checksum.
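A trimmed-down libc configuration might look like the fragment below; the exact option-group names should be checked against the extended sample configuration, so take these as illustrative:

```
# Distro config: enable only the eglibc option groups you need.
# Fewer groups means a much smaller libc, at the cost of
# compatibility with software expecting the full feature set.
DISTRO_FEATURES_LIBC = "libc-libm libc-ipv6 libc-utmp"
```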
And then if you change to a different set of libc features, it will fall out of the shared state, because the checksum no longer matches. [Audience] I imagine that, practically, the cache is designed to keep all of that stuff and handle it. Yeah, and that's one of the issues: you need to keep an eye on how that cache grows. But what BitBake does do, if it's a local filesystem-based setup, if it has access to the file system the cache is on, is update the done-stamps associated with these objects; it touches those stamps every time it uses them, so that gives you some idea of how actively each object is being used. And there are some scripts available that look into shared state and help you with that. So DISTRO_FEATURES_LIBC is what you can configure for your small system if you want to customize it, and it's described in the extended sample configuration file (local.conf.sample.extended); take a look at that, and you can find more on what features you need and what you don't. They are pretty invasive features. Some of them are things like disabling symbol version management: in libc you have, say, three versions of printf, and if you disable that, your size drastically goes down, but your compatibility goes down too. So you choose what you want, essentially. OK, so another thing is that you can create your own package groups, which can provide building blocks for your images. Package groups are already provided in the Yocto metadata, but they may not be sufficient for you, because the Yocto metadata covers the most common use cases, and there might be specific use cases that you have. You can bundle packages together into package groups; it's pretty handy. If you have multiple images that want certain features, certain sets of packages, say IPv6 and IPv4 kinds of things, you can have all those in package groups and then just build your images using those package groups.
So it's very powerful in that regard. You can also choose between conflicting features like the SSH server: OpenSSH or Dropbear are two features that provide the same functionality, and you could add one or the other to your image features, depending on what you want. And you can define your own package groups too; if you look under the metadata, there are a lot of available examples to start from. Most of the time, the better way I found was to use what's in the core and then add some of my own package groups on top, with packages coming from different layers or your own layers, and thereby build up the different images you have to build. And with that, I think I'm done.
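A custom package group along those lines might be sketched as a small recipe (all names here are hypothetical; the member packages are real core recipes used as examples):

```
# meta-foo/recipes-core/packagegroups/packagegroup-foo-networking.bb
SUMMARY = "Networking building block for foo images"
LICENSE = "MIT"

inherit packagegroup

# Packages pulled in whenever an image installs this group
RDEPENDS_${PN} = " \
    dropbear \
    iproute2 \
    iptables \
"
```

An image recipe can then simply add it with something like IMAGE_INSTALL += "packagegroup-foo-networking", and every image sharing that line gets the same set of packages.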