Okay, just in case this is all a big mistake: OpenEmbedded builds Linux distributions for embedded devices. The Yocto Project is a Linux Foundation project that supports OpenEmbedded. This BoF is about building Linux distributions, so if you are not here for that, you can leave now and we won't mind. All right. Okay, the last time I ran a BoF, I had a fellow interrupt me and explain that I needed to cover all this stuff early on. If you want to talk to us after this, we're on IRC on #oe and #yocto. We have mailing lists that you can find on the Yocto Project website. And we know a lot of people use the Yocto Project, but we don't know who you are. So if you go to yoctoproject.org/users, it will steer you to a wiki where you can sign up and add your company and products; pictures of products and links to products are great. I just saw Zephyr tweeted a slide with all the things Zephyr runs in, and it would be fun to have a slide of people who will let us say they run Yocto Project built images in their products. So that's my plug for that. And now we'll have the updates from Nico.

Okay, so hi everyone. Yeah, a couple of updates. I mean, this is the BoF, right? So we are just going to talk for a few minutes, and then there's this weird moment where we ask you for questions and try to find people to answer them. So we'll see. Let's get started with a few updates, the usual updates about the members: we've got some new members that signed up recently, not since the last time we've seen you, because that's been a while, but recently: BMW and Axis. One thing we've been doing lately, mostly because we haven't been able to meet in person, is the Yocto Project Summits. We started with the first event and were surprised that it was extremely successful, so we've done a second, a third, and a fourth one, and it has always been a very successful event.
We've realized that by going virtual, we managed to reach way more people than we had been able to reach in the past. And that is exactly, as Philip was saying, one of the problems we have: we know the core developers, we know the people who have been talking to us for the last 10, 15 years, but we don't know who is actually using the project, who is struggling with it, and so on. This summit gives us that reach to everyone out there. It's very inexpensive and very easy to join. The last one was in May, and it was a good event; we have nine hours of video, so if you don't know what to do this weekend, you know what to do now. The next one is already announced: it will be at the end of November, so make sure that you plan to attend. We are going to announce how to register, and we are going to put out a call for speakers as usual; that is happening right now. As usual, there will also be an OpenEmbedded developer meeting right after the Yocto Project Summit, a day which is slightly less formal than the summit, where all the people using, developing, or willing to develop OpenEmbedded technologies can meet and talk about anything. Soon we'll have a wiki page set up for the developer meeting so we can collect topics and have a little bit of structure around the meeting. So think about topics you would like to talk about; they don't need to be super complicated, in-depth topics. Sometimes just good questions, you know, new user experience. It's a good chance for people to talk to the developers. We don't have very good insight into how people learn the Yocto Project and how they get started, because most of us have been doing this for so long that it just seems natural to us.
And we know we need to work on that experience. We also hope that there might be a FOSDEM again as a physical event. The last time there was a physical one, we had an OpenEmbedded workshop at FOSDEM, which was, again, a bunch of people who care about OE coming together, with talks and everything, and that was actually great. We have about six hours of video from that, so that's for the next weekend. If that happens again, we do want, and we will try, to have an OE event again. We just learned about it yesterday, so we'll see. But if you want to help us with that, you know how to find us; it could be a good way to meet again.

Of course, we cannot talk about updates without talking about the next Yocto Project release. The one thing the Yocto Project is good about is that there is always a new release soon; we've been making a release every six months for the last 10, 11 years. So 4.1 is expected next month, within a month from now. There is the usual set of updates and improvements to the recipes. Among the significant updates: we talked about Rust for different reasons earlier today, and we have also seen some significant Rust improvements in the core. Something which has been a big request for many, many years is the layer setup tooling: we finally have something merged to provide a standard for how layers are supposed to be managed, beyond just poky. It's just the beginning; we want it to become the standard in the future, and we finally merged something. The release will be out hopefully within a month or so.

LTS: we started discussing the LTS, I think, three years ago. I think now we can say it's a big success; it's being used. We are even going to do a little game here in a moment, but yeah, Dunfell was the first LTS.
In case you don't know, the Yocto Project LTS releases are supposed to run for two years. When we reached two years for Dunfell, there was this idea that maybe two years was not enough. So we are doing an experiment to maintain Dunfell for four years. It costs resources and money, and it is actually a very big commitment for the community to maintain an LTS. So we are doing an experiment with a four-year LTS, which puts us in this very odd situation where Dunfell and Kirkstone are both going to be supported until 2024. We'll see what happens. Maybe the experiment will tell us that we should not do four years; maybe it will tell us that we should. That discussion will happen at some point, I expect next year. But yeah, this is a significant effort for the project: there are about a thousand changes every year now just for the LTS. We have a dedicated maintainer for it, and a lot of people are working on the LTS. It's a significant effort from the project; there was a lot of discussion to start it, and I think it's a success. Maybe it comes with issues as well, but so far, I think it's been a success.

Okay, so we tried this on Monday and I used a very bad format, so I'm going to try a new format. Who here is developing or maintaining products based on releases older than Dunfell? And don't be embarrassed. So who here is maintaining products based on Dunfell? Okay. Who here is maintaining anything on Kirkstone? Is there anyone in between Kirkstone and Dunfell? Okay, so there are people not on the LTS, in between. And how many people are building against master? All right. Okay, so a bit more variance. Sorry, what did you say? master-next? Who is using master-next? Okay. Again, this is going to be a big discussion next year: what do we do with the LTS moving forward? This was kind of a little game today, but that gives us some idea.
But yeah, we are going to have this discussion at some point. All right. One thing I actually like to do from time to time is to look at how we are doing: how many changes, how many developers, and so on. There is a little story I wanted to share today. If we look overall, over a long period, what we see is that the project has been fairly stable. Blue is all the changes, the commits that we make to do a release, and yellow is what we do after the release, in the stable branches, without many surprises. Dunfell is actually big because of the LTS. What we see is that, overall, we have been able to produce a stable amount of patches, which is basically the sign of a stable community and a stable project, right? We update the recipes, we release every six months, and it looks good.

Now if we zoom in a little and look at who is contributing to the core of the project, and specifically at who is doing half of the work in the core, what we see is a sign of maybe one problem: we have fewer and fewer people contributing to the core of the project. In blue, this is how many people it takes to make 50% of the changes every year. And what we see this year is that we've reached the point where only four people have made 50% of the changes in core. Is it a problem? Is it a big problem? Is it not a problem? I think it might be a problem. We know that some of the maintainers are actually way overloaded. This is basically something I wanted everyone to know: we are looking for help all the time. Richard very often sends emails asking for help. And what we see day to day can also be seen by just doing some forensics on the git history.
So there is a signal there. This is post-COVID, so there are many different things happening; we will keep an eye on it, but we definitely need more people to help with the core of the project. We know that Yocto is used everywhere. We know that there are tons of very happy users that never complain, because it just works. Maybe it's a bit slow to build, but that's the only complaint we hear. But in the end, everybody can make products because the core is there, and the core is maintained and the core improves. And this year, again, four people have made 50% of the changes. In red is how many people it takes to make 80% of the changes. So there is a trend over the last two, three years which is not looking good. Now it's up to us to act on that, to think about it and see how we can fix it. We are going to look at how we can attract new members to participate and join the project, maybe fund more developers; we are looking at the current members who could maybe spend more time on the core, and we are looking at community members who could also help with the core. There are many, many things that can be done, so help us. I think last time I said that documentation is one easy place where people can contribute and help us, but it's not just about the documentation, it's about the core. If you do these numbers for BitBake, the situation is even worse. BitBake is the core of the core, and there are even fewer people contributing to it. Again, it's a signal that we see, and we need to be aware of it as a community, to think about it and see how we can improve things. We are very lucky, again, that we have many users and that the project is extremely successful, but we also have to look after what makes it successful so that we can keep being successful. So, if you want to start helping: one thing that comes up every time, so before you ask the question, we have a slide today.
How do I start, and how do I start contributing? Again, we have had this question a lot, and Michael, who is with us today, gave a very good talk at the last Yocto Project Summit; we actually kind of asked him to do it. We wanted to have a place where we can send people: you want to start contributing, you want to find a mailing list, you want to know how to send a patch, you want to know how to join our community. There is a really good talk from Michael, and everybody who wants to join us should look into it. We know it's slightly difficult for new people to join; there are a few things which are complex with the Yocto Project, BitBake, and OpenEmbedded, but in the end it does complicated things, so it cannot be simple. But we are usually known to be nice people, welcoming people on the mailing lists, most of the time. So, yeah, we encourage everyone to join.

So, that's the story we wanted to tell you. Now, I actually don't even know how much time we have; I think we have 20, 25 minutes. The usual format is that we are here to hear the questions and to find someone in the room who can actually answer them, because most of the time we don't know the answers ourselves. So, if you have a question, say it out loud; we will repeat the question, I think we have to repeat the questions, and then we'll find someone to help with the answers. So now, the most important part. Oh, we usually have an issue with the first question. What is considered core? BitBake? BitBake or openembedded-core, the documentation, Poky, the distro, which? Sorry, I forgot to repeat the question. The question was: what is the core of the project? That's basically the poky git tree, in a sense. Yes. How do I model that in a recipe? Okay, so. There was a really good presentation today about SBoMs.
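As background for this SBoM discussion: generating SPDX output in a recent Yocto Project release is essentially a one-line switch. A minimal, hedged sketch of what the local.conf settings look like (check your release's documentation, as class and variable names have evolved):

```bitbake
# conf/local.conf -- enable SPDX SBoM generation for images
INHERIT += "create-spdx"

# Optionally embed source file information in the SPDX output; this is
# what makes a vendored copy of e.g. zlib visible inside a component.
SPDX_INCLUDE_SOURCES = "1"
```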
So what you have is an application, and you want to know how to design the recipes when that application bundles other code. There is no dependency on the zlib recipe, because the zlib source code is simply part of the application's code base. So it's already captured as part of the licenses, that's fine, but how does it end up in the SBoM? Well, two answers. One answer is that the SPDX output can include the source code, so it will contain a map of all the files, and this component really has the zlib source code in it, so you'll be able to see it in there. But how do you then backtrack from a vulnerability to that bundled copy? Yeah, the other comment I would have is: if you have something that includes the source code, tracing back that you're using zlib isn't going to be very helpful, because you're going to have the local copy of zlib, and that's the one you're going to care about vulnerabilities in. So if there's a problem in the local one, I would expect it to be reported through the project that includes it. Yeah. I mean, this is definitely a very good question for Joshua and some of the other SBoM people: what is the right thing to do in that case? Is that something you can bring up on the mailing list, do you think? Yeah, I can do that. Hang on, can you talk into the microphone? Yes, so there was a discussion about SBoMs happening on the mailing list just today, so subscribe to it right now and jump in; the experts are in that discussion. We have one more microphone here. I mean, it might just be simpler than that. I understand you're talking about a commercial product or something like that, right? But in Fedora, in Debian, this would be de-vendored, right?
You've vendored your libraries, right? And the general approach that I would take, as everybody has probably already hinted at, is to de-vendor that. So you want to use the zlib that's available in OpenEmbedded and figure out if there are patches on top of it, right? It might be a little bit more painful. Now it gets worse if it's a specific older version of zlib. So then, and you're still not done, what you do is you go and look for a version of that recipe in an older release of the Yocto Project, grab that recipe, and make it work. You still get away from the vendored zlib, and you're still building it with the Yocto Project and getting all the benefits of that, but there's a little bit more work there, because you might have to change the syntax to match the release you're on, things like that. Or we find out what the right answer is for vendored code with respect to SBoMs. Right, so there's the SBoM question, but also, in general, all of us are going to tell you to run away screaming from vendored code. It's a bad idea, and the reason isn't the SBoM stuff, it's the vulnerabilities and security stuff. You're not going to find them. You're not going to be able to decouple that, exactly; you're not going to decouple it, and you're going to end up with security problems in the future. But the question really was about the SBoMs. Yeah, I understand. There may be something we can change to actually support that, so that it works with SBoMs. It's a great question for the SBoM people. Okay, thanks. Thank you.

Oh, sorry. Who went first? What is this new workflow, on meta-? Yeah. I mean, we don't use GitHub for the core. But you mean on meta-openembedded? Yep, probably. That was Khem's thing, and Khem is not here, so I don't think we can give a meaningful answer.
It would be nice to have him on the phone at least. I'm going to digress a little bit, because I've been meaning to say this all week: I very much understand people's desire to use a GitHub workflow. Richard is very married to the email workflow because it's what he's used to, and it's hard for him to change. I would also tell a story from the good old days, when OpenEmbedded was running on BitKeeper, and so we were dependent on a proprietary workflow, and the GitHub workflow is proprietary. If Microsoft decides to chase everybody off to make money, we would have to change our workflow to something else. And I know that when BitKeeper closed itself to open source projects, it was a year or so of bad times for OpenEmbedded, because we had to find a distributed version control system very fast, and it was not fun. That said, we are very interested in the conversation about changing workflows: what workflows should we adopt, given what I just said? How can we be more inclusive for developers? Is our workflow turning developers off? That's a very good conversation to have with us offline, at the booth or things like that. And would changing the workflow address the core developer issues? That's a good question; we don't know. Yeah. So, Khem is the one doing that for meta-openembedded. Sorry, thank you; we need to get used to repeating questions. So the question was: at the last developer summit we talked about using a GitHub-based workflow, and the question is, do we have results? Do we have new people, new contributors, way more developers because of that? So this is not used for the core, for openembedded-core; it's being used for meta-openembedded, and no, we don't have numbers now, but it's a good thing to check. I can talk to Khem and see if we have data.
Khem is actually maintaining meta-openembedded and is taking both pull requests and email. So we can look; I suspect it's going to be a lot more email, but it would be interesting to see if the GitHub workflow is actually bringing in new people. That would be interesting data, so we can look into that. Yeah. I mean, the problem with using it for core is that there's a lot of automation built up around emailed patches, and that automation would have to be rebuilt to work with pull requests as well. In my personal opinion, if you want to introduce this GitHub-style thing, you shouldn't be starting with GitHub; you should be starting with SourceHut, because that was built from the ground up to combine the web, the pull requests, and the emails, and GitHub just isn't. I mean, the idea of having one place where there would be all the layers and everything has been discussed, and it looks compelling. If you look at what Debian is doing, where now pretty much all the packages are in the same place, that looks compelling. Are we going to get there? I don't know; this is definitely a discussion that needs to happen. But when we start saying you want to use GitHub or one of these web-based workflows, now you're expecting all of us to review and contribute and comment in a place that we aren't actively using now, which means we have to do it in two places, or three, or four, or ten, okay? The other problem, and it's not just the vendor problem, is: where is the history of all of this discussion? Mail archives are really, really easy to store and keep, but with these platforms we are at the mercy of the vendors for the history of all the discussion and everything that happened there. A lot of communities use Slack as their chat, right? But if you're not using a paid version of Slack, a month goes by and you lose all the history.
So there are problems like that; that's the reality of why it's not an easy thing to do. But SourceHut may be the solution there, because it's explicitly about email workflows with a web interface, and I think that's the viable way, and GitHub isn't. But when we hear people ask for GitHub, it's because it has become the digital place where a lot of software development is done; GitHub is becoming a generic name, like Xerox. Yeah, but again. Well, by GitHub what is meant is: how can I send a drive-by patch quickly without subscribing to a mailing list? That's very useful, of course, but it doesn't have to be GitHub.

Yeah, I believe he was next. Go ahead. [The next question, partly inaudible, was about how quickly upstream package changes from various vendors, not only Linux, flow into the core, for example a change to the time zone data.] Can you repeat it? Okay, so the question is: how rapidly can we make changes in core, in master, when we see a problem like the time zone changes in Chile? Or security fixes, but here it's someone changing their time zones and there being a bit of a rush to update things. What I'm going to say is: certainly raise the issue on a mailing list so we're aware that there's a problem, because we don't have a lot of developers and they may not be familiar with it. But in general, the time zone stuff is in core, and it's watched pretty closely. In terms of how often does the Auto Upgrade Helper run?
Right, so in master, twice a month, we run this Auto Upgrade Helper that goes over every recipe and checks if there is a new upstream release, and if there is, it tries to run devtool upgrade on it; more than half of the time that succeeds and produces a patch that can be merged directly into master. And then it's on somebody, usually me, to actually do this. So for things like time zones, the cadence is twice a month, and then that gets backported to Dunfell and the other LTS or supported releases as well. Yeah, and if you're on a stable release, you might have to email the maintainer and ask what the status of getting the fixes backported is. Okay. And you asked about the time zone data, but we also do that for CVEs and updates in general: we track the CVEs per branch and everything. So this is general cleanup work which is happening in the community, yes.

So just really quickly, as far as CVEs: the stable maintainer for the LTS releases, Steve Sakoman, sends out an email every Sunday with the CVE checker report for master, Dunfell, and Kirkstone. And just recently we added the CVE check to the Autobuilder, so that check is also running on the Autobuilder. So if there's a CVE, we're going to catch it basically as soon as it's reported, right? We do our best to figure out if it actually applies, and then we either fix the NVD database, or take the patches to fix the component, or do the upgrade. Now, that mostly applies to master, because master can take any kind of upgrade. When you talk about the stable releases, sometimes it's a little bit harder, because we're not going to do a major upgrade if it's not just bug fixes and security fixes, right? Because ABI breakage and things like that happen. But I'll give you an example. People were sending patches for Python: we were on Python 3.8 in Dunfell, and people were sending individual patches to fix CVEs.
But by definition, the Python 3.8.y series is only bug and security fixes. So I went in and upgraded us to the newer minor dot releases, because by definition it met our criteria, right? So just keep that in mind. And if you're looking to get involved, Steve Sakoman has a list of CVEs in Dunfell that need looking at. He's running a competition now through mid-October: if you resolve the most CVEs in Dunfell between now and mid-October, he'll send you a pound of Hawaiian coffee that he grew himself. That could be a good place to get going. Say that again? Okay, there are some details; I would read the original email and not trust what I say. One point about the previous question: the outcome of those automated updates, twice a month, is posted directly to the openembedded-core mailing list, and actually some were posted today; there were 70-something emails from the robot. If you want to see how these version upgrades in master work, subscribe to the list and look at those emails.

Okay, who was next? Was it you or you? Yeah, I think it was him. I think I can take this; we should summarize the question. The question is about how to build images without GPLv3. Right. So the modern way to do this is to restrict GPLv3 for a specific product image: the check that no GPLv3 is in the image runs at the moment when you actually build the image, but not any earlier. So you can build GPLv3 stuff to produce some native binaries, and you can build GPLv3 stuff because it's a dependency of some testing packages that aren't actually going into your image; the check is delayed all the way to the specific image that is shipped to the customer. So that already eliminates a lot of issues.
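The image-scoped check described above is driven by the INCOMPATIBLE_LICENSE variable; a minimal sketch, with a made-up image name, of restricting it to one shipped image rather than the whole distro:

```bitbake
# my-product-image.bb -- hypothetical image shipped to customers
require recipes-core/images/core-image-minimal.bb

# Fail the build of *this* image if anything (L)GPL-3.0-licensed would
# land in it; native tools and test-only dependencies are unaffected.
INCOMPATIBLE_LICENSE = "GPL-3.0* LGPL-3.0* AGPL-3.0*"
```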
And then, if you still end up with something like, for example, Bash in that product image with GPLv3, you should look at what scripts you have that are asking for Bash and try to take them out of the image, or rewrite them in POSIX shell, or find some other way out of it. If it's readline, this GNU library for rich command line editing, then almost everything has a PACKAGECONFIG that says disable readline, and then, okay, you don't have readline anymore, but you don't have the GPLv3 problem either. So it's targeted fixing until you arrive at a product image that has no GPLv3 stuff in it. Hello. So, should I repeat it or not? There was a comment that there used to be a meta-gplv2 layer, but it is highly recommended not to use that anymore; it's not supported, and we do not want people to use it now. Yeah, and there was exactly this conversation on the mailing list recently. So the old way was that INCOMPATIBLE_LICENSE was a big project-wide prohibition, and now it has been changed to be specific to a particular image. So use that. Okay. Thank you.

Next. Because in the database, sometimes you have very difficult conditions for when the CVE applies: you have to have this package and this package, or these packages plus maybe this configuration. Are those particular situations tracked? Okay. So the question is: in the CVE database you can have a complex situation, a combination of multiple packages that need to be present for a CVE to have impact; does the tooling support that? Yes. So basically, once a recipe has been flagged, a human being looks at that CVE: is that CVE affecting us? A human goes and looks at it and then makes the assessment.
And then we either add it to the ignore list, or whatever the modern term is, for that CVE, to say that it doesn't actually apply, or we figure out where it actually does apply. And we're probably becoming pretty popular with the database people, because we actually help them improve the quality of their data, letting them know when things really don't apply. So if it's super, super complicated like that, there's just no way we can automate it, so we're doing it a different way anyway. Yep. So the audience discussion is basically that at some point it becomes a manual process to read the CVE and figure out the exact issue; you just have to take a look. So, I think, one minute to go; I was told we had three more minutes. I'll just say we often look at Debian or another distribution, or Red Hat or whoever, to see if they've already fixed and patched against that CVE. That's sometimes the easy way out. And it's a good way for you to help the project: if you find one of those, just send some patches in.

Anyone last? Yes. Can you repeat that? How to reduce the build time? Okay. So, the short answer is shared state. Very often when we get this question, users aren't using the downloads cache and the sstate cache properly, but I think we talked earlier today, and you say that you are using them correctly and that you still end up rebuilding everything very often. And that is a situation where it takes time. The other answer is to use more powerful machines, more memory, all these things. But in the end, yes, we do spend quite a bit of time building, and everyone wants to improve that. Do you want to say more about your use case? You use an sstate cache, and that's it. And did you analyze why the builds take so much time?
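For reference, the download and shared-state caches mentioned above are plain local.conf settings; a hedged sketch with placeholder paths and a hypothetical mirror URL (sharing these across developers and CI jobs is where the big win is):

```bitbake
# conf/local.conf -- reuse build artifacts across builds and machines
DL_DIR ?= "/srv/yocto/downloads"         # shared cache of fetched sources
SSTATE_DIR ?= "/srv/yocto/sstate-cache"  # local shared-state cache

# Also fetch prebuilt sstate objects from a team-wide HTTP mirror:
SSTATE_MIRRORS ?= "file://.* http://sstate.example.com/cache/PATH;downloadfilename=PATH"
```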
See, if you don't have at least 12 cores, you know, like 24 threads, that's kind of the sweet spot, so you want to get to that. You want enough RAM for about four gigabytes per thread, at least. So if you're looking at building on a laptop, you're doing the wrong thing. Yeah. If you're still using spinning rust for storage, switch to SSDs; buy the fastest SSDs you can get, NVMe. Yes. Okay, I'll just give you an example: I built a Ryzen system with 12 cores, two NVMe drives, and 64 gigs of RAM, and it builds just as fast as a 10-year-old dual Xeon box, and it uses a lot less power. But if you're really talking at scale, you know, this is a very common question, and there isn't an easy answer for everything. The main thing is to reuse sstate. If you're doing it in Jenkins jobs, things like that, don't rebuild from scratch; make sure you're reusing prior builds. Well, I would say that there should be an sstate infrastructure, like an sstate server, available to every developer in your organization, so that when things have been built before, they can be taken from the cache and nobody has to build them again. And that's an IT problem, so you need to talk to your IT, I suppose, and convince them that this has to happen, because it saves a ton of money. Sorry. Do you happen to use AUTOREV in one of your packages, for example? We changed from AUTOREV to a fixed revision, and that brought build times down considerably. We also started to use the hash equivalence server; that brought them down too. But it can be that one package somewhere in the chain triggers a rebuild, right? So you have to inspect it; it's not exactly easy. You have to look and find out what triggers your rebuild, what's causing the issue. That can be an option. Sorry. Looking at the build history is going to tell you what's going on. Yes.
Take the build history, look at the graphical output, find the bottleneck. Yeah, buildstats. Buildstats is a tool that tells you, at the end of the build, what has been built, so it gives you hints about where to look. I think we are out of time. We are still here tomorrow, and maybe even this evening, at the Yocto Project booth, where many of us will be. So if you want to talk more to any one of us, feel free to come to the booth. Thanks everyone.