So we're going to get going. This is the Yocto Project Birds of a Feather question-and-answer gathering. The way these typically work is that we don't really have a presentation. What we do have is a whole row of Yocto Project core maintainers sitting up here in the front. Please keep your tomatoes to yourselves; no rotten fruit, nothing like that. What we do want is to hear your questions and find out what's important to you. And if nobody has any immediate questions, we can talk about the release that just happened and maybe some of the interesting things going on in the project. So to start out, does anybody have a burning question? It can be a dumb, easy question; it can be a complex one. Let me run the mic over to you, because I record these.

With GPLv2, now that we have a separate layer, will you continue to apply security and vulnerability patches to the GPLv2 recipes, or are you planning to rely on people doing that and pushing it upstream?

Okay, good question. Do you want to take that one? Yeah, so maintaining GPLv2 software is a problem, particularly because if somebody's been working on the more recent version of the code, there are certain licensing questions which come into play as to whether they can even apply patches to the older one. So I really think people are going to have to find a different way of dealing with things than trying to apply security fixes to software that's getting ever older. I separated that stuff out in the first place because we needed to highlight that there's a problem there, and there is a big problem to do with contamination and licensing and that kind of thing. So I would not rely on meta-gplv2 as a long-term solution. I mean, if people send patches and they've gone through the right process with the licenses to be able to apply them, great.
But I personally can't see it, and I personally don't want to invest time in that. Richard Purdie is the chief architect for the project, by the way; we'll introduce people as they talk. There was another question back here, Mr. Mark.

Since we have PREEMPT-RT support in OE, will we ever get Xenomai support as well? I mean, there were patches posted, but the answer was that they had to go through the steering committee or something, and there was no answer after that.

So Xenomai is the question. Anybody? I can give an answer. It's a good question, and the reason we have the PREEMPT-RT stuff in there is that somebody was willing to step up and maintain it, and it has been well maintained for quite a long period of time, particularly by Bruce Ashfield. With Xenomai, if somebody does want to step up and is willing to maintain it and put the effort into it, that's good. It does have implications for the testing matrix, and that's another one of my concerns. We don't test the RT stuff particularly heavily, but I'm okay with that because I know Bruce does. So it's partly a question of: what's the proposal, and who would actually be testing it? Things have changed, and layers are a lot more maintainable and can handle this stuff a lot better than they used to be able to. So I also tend to push things like that to layers where they make sense, rather than into the core. There's a high bar to entry. I'm not saying no, but there needs to be a good case for it, and other people need to say yes, I find this interesting as well. Does that answer your question?

So, I'm Tim Orling. I just spent the last year working on the Real-Time Linux project, so I did a lot of testing of the PREEMPT-RT stuff because of that, but Xenomai was not on our radar. I know that, for instance, Siemens is looking into it, so there are other people using it, but Siemens does not necessarily traditionally use the Yocto Project. So we need to find the group of people who are interested in it, who are going to step up, bring it in and maintain it, and then we'll see whether it's appropriate for core or not. We're really, really trying to keep core slim, in part because we have some legal and licensing things we're attempting to do to make it much, much easier for people to use. I can't say a whole lot more about that right now. But, as Richard indicated, that should probably go into a meta-xenomai layer or something like that, or meta-virtualization, or wherever it makes sense. Thank you.

Once you have a custom kernel repo that's stable, what are the next steps to have a BSP layer written on top of that kernel repo?

Sounds like a good question. Okay, so the E-ALE sessions are going on; I just created a distro layer and a BSP layer as an example and wrote up a lab manual for that. If you look in OE-Core, there's a meta-skeleton layer, and in that there is a linux-custom recipe. That's one place where you can take the traditional approach, where it's a git repo and a defconfig. The Yocto Project's own approach, with linux-yocto, is to use config fragments, and even though those have been supported in mainline Linux since 2011, that's not most people's traditional approach. So the next step is that you want to create, or might want to create, a machine, or append or override a machine, something like that, to include that kernel.
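As a rough sketch of that selection step: the layer, recipe and machine names below are hypothetical, invented purely for illustration, not something from the talk or from meta-skeleton.

```bitbake
# Hypothetical BSP layer layout:
#   meta-mybsp/recipes-kernel/linux/linux-mycustom_4.14.bb  (your git repo + defconfig)
#   meta-mybsp/conf/machine/myboard.conf
#
# In myboard.conf (or local.conf for a quick experiment), select that
# recipe as the provider of the kernel:
PREFERRED_PROVIDER_virtual/kernel = "linux-mycustom"
PREFERRED_VERSION_linux-mycustom = "4.14%"
```

With that in place, BitBake builds your recipe whenever anything depends on virtual/kernel for that machine.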
So if you look on GitHub at E-ALE, there are several approaches you can take. Basically you need to get that kernel recipe into a layer somehow so it's being picked up by BitBake. You can give it a unique name, and then in your local.conf, or in your distro.conf if you have a distro layer, you can set the virtual/kernel provider to point at that specific recipe name. There are a whole lot of other ways to do it, but the basic approach is that you put it in your own BSP layer, even if that's the only thing in it. Yes, basically that's why there are layers: in this situation you want to create a BSP layer. I would also highly suggest creating a distro layer, because you're probably doing a product, you probably want branding, and that's where that should go. The machine configuration is where the Linux kernel gets selected, and that should be in a BSP layer. There are times when it could be appropriate in a distro layer, that's a bit of a gray area, but in general the Linux kernel recipes should be in your BSP layer, along with your drivers and, if you're doing a device tree or U-Boot or whatever else, those kinds of things as well. The bootloader, whatever it is. That all belongs in the BSP-type layer. And again, if you look on GitHub at E-ALE, there's a lab I just did: I created all the metadata and wrote up the lab for it. There are other places too, but I just finished that last week. Thank you. Any other questions? Yes.

Hi, I'm new to Yocto. I have a question: does Yocto support Dockerfiles, for new users to set up quickly?

Another one for Tim. Thank you all for asking questions about the stuff I've been working on. Okay, so last year I was working on the CROPS team, which was addressing Docker usage, enabling Docker to help us in the Yocto Project and so on.
So in core, we actually added the ability to create container images. If you're talking about actually creating Docker containers in Yocto, if that's what you mean: yes, the meta-virtualization layer has recipes for Docker itself, and core has the ability to create container images, which isn't necessarily limited to Docker. It could be any of the other container types out there. Essentially it uses linux-dummy so the image doesn't include a kernel, because most containers do not have kernels in them. There are exceptions to that, like Clear Linux or Clear Containers. So yes, it's definitely in there, it's doable. I'm not personally really up to date on what's going on there, but I know there's been activity in various places, including inside Wind River, leveraging that. Does that answer your question? Good. Yes.

So what's the vision for Yocto, say, two to five years from now? How is it going to evolve? Flying cars, things like that. Do you want to tackle that one?

No, sorry. I don't know if you want me to; I'll talk about flying cars and stuff. No, I think that short term there are some definite challenges for the project, such as maintainership and how we keep all the recipes up to date, how we extend some of the automated testing, and how that applies to other layers in the ecosystem, because I think OpenEmbedded-Core has kind of been leading the way, but we need to take some of the things we've been doing there and roll them out more widely. So certainly those are the immediate things I've been worrying about. And going back five years, I think we did have a good idea of the particular feature sets we needed to do, such as recipe-specific sysroots and multilib support. So there are definitely still gaps in the project. I think some of the cloud pieces, such as the Docker containers Tim was just talking about, and CROPS, are definitely areas where we've still got work to do. But I'm at the point where I'm trying to take a step back and think: okay, we know what we need to do short term; the long-term vision is currently being rethought. So yeah, that's my take. And I think we're safe to say that we welcome community input on that. Are there specific issues that you would want the project to cover? Okay, let's hear about that.

Well, I'll ask Tim a meta-question: do you want me to bring up the question we discussed? So this is almost a short-term thing, but it evolves into a long-term thing. Can we get the project to align on the long-term support kernels, for example? We seem to be missing by a version or two. And I know in my project, Automotive Grade Linux, we're going to take Rocko and forward-port to 4.14 in order to take advantage of long-term support. So here's the question I posited last night: how do I take Greg's new release of 4.14, for example, and get that upstream to OE and Poky and then into my project in, throwing out a timeframe, two weeks? Do you want to answer that one, Chai? Just a little explosion from the peanut gallery back there.

So specifically on the LTS kernels: we do have an LTS kernel in each of the releases, but you'll find that for Rocko it wasn't 4.14, because 4.14 hadn't shipped at the time. Yeah, right, so I think it's 4.9, is what people are saying here; I can't remember off the top of my head. But the point is that if we have a particular LTS version in a release, we'll continue to roll with that release and roll with the updates for that version.
What we won't do is take a completely new kernel version and pull it back into one of the older releases. You are seeing things like GCC updates: I mean, the release that just came out is actually the 2.4.2 point release, and that has GCC 7 upgrades in it; we upgraded GCC 7 from 7.2 to, I think, 7.3. And when we say upgrade, it's along a stable branch of GCC, so those are just bug fixes and security fixes and those kinds of things, not feature changes. We do something similar with the kernel: a kernel version would take the point-release updates for the same reason. But we wouldn't backport 4.14 into Rocko. That's something somebody like AGL can do if you've got specific use cases, but it's not something we would do within the project's current stable framework.

I can add to that: there's the testing framework, which actually makes your life easier if you're doing that on your own. Hopefully you'll be able to take the test metadata and the test infrastructure, which is in very good shape now, and validate your kernel against it; that should help accelerate your validation.

Just as an example: the newer kernel versions now require OpenSSL as a build-time dependency, and if we tried backporting that, it would break a load of stuff in Rocko that we'd then have to go and fix. So these things do bring in changes that cause problems. That said, the way the metadata is structured, you can pull pieces out and backport them comparatively easily. So we have our policy for core, but it should be straightforward for AGL. I think the key is to keep the communication going between the projects.

Okay, so I understand that people want something stable. They're picking Rocko because it's had a couple of point releases, and that absolutely makes sense. But again, we're in a pickle, because as was just said, there are a lot of things that are going to break, especially with as big a jump as 4.14 happened to be, and when GCC changes and glibc changes have to come in. All of those things unfortunately come in so early in the dependency tree that they're basically the first things built, and everything else afterwards depends on them. And so this is why I try to nudge people to be on the latest release, which is going to be Sumo. We had a long discussion about that last night, but in general I'd suggest that if you want the latest LTS, and that is 4.14, we're on 4.14.24 right now on master, which is feature-frozen now and will be released some time from now. Officially it was supposed to be April 24th, but I don't know that any of us are 100% sure where we're at right at this moment.

If you were me, and you're planning a release in June or July, I can't take the risk of moving to Sumo. The other thing I wanted to ask about was QA: things seem to be slowing down in QA, like 2.3.3 has taken forever to get out of QA.

So one of the things going on there, the reality, is that QA just moved from Guadalajara to Penang. Those are realities that happen, so that's one issue. The other issue is that we've had some very, very difficult-to-fix problems come up in this release cycle; that's actually happened in the last couple of release cycles. And we are a fairly small number of people. You'd probably be shocked: the number of people actually doing this work is certainly less than the number of people in this room, and the amount of work we do is staggering. So there are just some realities there.

My pat answer, and I know this is not the answer for you specifically, but in general: this is why there are OSVs. This is why we have MontaVista and Wind River and Mentor Graphics and people like that. When you specifically need that kernel support, you can get it from those OSVs. The other thing is that we have some very, very capable consultants, like Konsulko and Bootlin and others, who can also do this for you. So yeah, Chris.

Just following up on that kernel question. I don't have a horse in this, right, so I'm asking purely out of curiosity, fed by the original question. If the project doesn't want to support a newer kernel in a shipped release branch, but you have members of the community downstream who are doing that work, and there are multiple parties who want to use, say, 4.14 and should be sharing resources and collaborating, what guidance would you give them to be able to exchange and work together?

I think people need to get together and collaborate on that stuff. So part of it is figuring out: okay, who has this problem, can we share that code, and what have you? And then find a place, poky-contrib or wherever makes sense, to actually share that work and show it to others. I mean, the Yocto Project has been doing its stable releases, and we have roughly a two-year timeframe for those, after which we don't really go beyond that. But we have been asked: okay, what would people do if they have patches for older releases? The answer is that we're willing to create branches, but we need to differentiate them from the core process so people know there's a step down in quality assurance, that they're not running the same tests and so on. So it's something we're still figuring out, but definitely get people to collaborate and share branches.
So, my goal for this year is actually to do a whole lot more with runtime testing. Instead of preaching about it, I'm going to just make it happen. That's one of the things that would be necessary here, because in order to shorten the quality validation cycles, we need more automation, and we should reserve the manual effort for what genuinely needs it. That's an issue that's been waiting a long time, and it's a goal of mine to address it this year.

On kernel updates: I maintain meta-openembedded, and when core updated its kernel, it affected meta-openembedded and we had to update packages. So that's probably what you're going to start seeing happen. If you bring in 4.14, 4.15, 4.16, you'll start breaking other layers that you then have to go fix, because that's what I see with meta-openembedded. Same thing with the toolchain: when we update that, it just ripples through all the layers.

Do you have any plans to add a U-Boot menuconfig? We see a lot of Kconfigs in U-Boot, and I see a lot of customers asking for a U-Boot menuconfig. And some of the customers I see ask for adding packages via a rootfs menuconfig as well; they end up searching for a lot of dependency packages. They're not Yocto users yet, but they need some packages, and they want something similar to the old Freescale LTIB-style menuconfig to select them. That would be a good roadmap item.

Okay, so effectively what's being asked for is a GUI around package selection for images. We have tried this several times. There was originally Hob, which was a graphical user interface that allowed you to do things like that, and then Toaster was another set of developments in that direction.

I don't think Toaster got to the point where you could do that, but the idea was that it would be somewhere that functionality could go. Menuconfig is great up to a point, but with something as complex as what we've got, it doesn't really scale, certainly not to show you the information you would need to make decisions. Toaster would be able to cover it, but we're struggling to get people to work on Toaster. So yes, I love the idea and I'd love to have it, but somebody's going to have to step up and make it happen, and so far people aren't stepping up to do it. That speaks to the nature of the project as a community project. Any questions? Yes.

I was wondering if you could talk about the testing you're doing. What kinds of tests are you running, and how could I replicate the same tests on my system? If the answer is "go read this document", that's great too.

It's worth giving a quick summary now, because I think it's probably interesting to a number of people in this room, and I think it's something we don't promote enough; a lot of people don't realize it's there. So we've got multiple different levels of tests. One of them is image testing: when the system builds an image, it's good to figure out, okay, does this image boot, does the functionality in that image work? That's what the testimage code does. There's actually a testimage class, and when you inherit it, you get a testimage task that can be run automatically after a rootfs is generated, where it boots the image up in QEMU and runs the tests against it. There are some hooks in there which allow you to interface that to a real piece of hardware as well, so it's not just limited to virtualized targets, but we don't tend to use that so much.
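A hedged sketch of enabling that in a build, assuming a QEMU machine such as qemux86 in local.conf; the variables are the standard testimage ones, but treat the exact suite list as illustrative:

```bitbake
# local.conf fragment: inheriting testimage gives image recipes a
# testimage task that boots the generated rootfs under QEMU and runs
# runtime tests against it.
INHERIT += "testimage"
TEST_SUITES = "ping ssh df"   # choose which runtime test suites run
```

With that in place, something like `bitbake core-image-minimal -c testimage` builds the image and then boot-tests it.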
We use that on all our infrastructure to boot up and then runtime-test all of the images we generate that can run under QEMU, and that's pretty much everything. So we have an autobuilder, autobuilder.yoctoproject.org, that's running all of these builds. We're usually now pre-testing things before they go into master. That's building across all of the QEMU machines: four architectures, across 32- and 64-bit, across two different distros, poky and poky-lsb, along with a whole load of other configurations, and it will build the images and then actually run them under testimage. That has some partners in crime, so to speak, such as testsdk and the test for the extensible SDK, so there are other tasks that correspond to testing those artifacts. And then, if you've got test cases that are more higher-level, workflow-type tests, we have oe-selftest, which is a huge collection of tests for things like devtool, recipetool and workflow-type situations where you want to build something, change something, and then check that something rebuilt. And there's also ptest. Ptests are where specific pieces of software ship with their own test suites; you might run make check and it would run through a set of tests. We capture those up into ptest packages, which you can install into the images and then run the tests there to get those results. So there's an awful lot of testing you can do. We run oe-selftest, testimage, testsdk and so on; ptest we currently run more as part of the manual QA process, but we're looking at automating that and automating the collection of the results. And there's also bitbake-selftest, so there are unit tests on specific parts of the project as well. Ptest is what I mean when I say runtime testing.
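To make that concrete: a ptest package ships a shell script named run-ptest that prints one result line per test in the PASS:/FAIL: format the ptest runner collects. Below is a self-contained sketch; a real script would invoke the package's actual test suite (for example make check) instead of the stand-in commands.

```shell
#!/bin/sh
# Minimal run-ptest sketch. ptest convention: emit one
# "PASS: <name>" or "FAIL: <name>" line per test on stdout.
run_one() {
    name="$1"; shift
    if "$@" > /dev/null 2>&1; then   # stand-in for the real test command
        result="PASS: $name"
    else
        result="FAIL: $name"
    fi
    echo "$result"
}

run_one example-test true    # 'true' stands in for e.g. make check
```

The runner on the target simply executes this script from the package's ptest directory and parses the output.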
So the "p" in ptest is for package, so package test. It's for the individual packages that you've installed into your image. I'm the co-maintainer of meta-python and meta-perl and also some of the other meta-openembedded layers; I've been around OpenEmbedded for ten years now. It just so happened that in the last year, which I spent with the real-time team, I ended up doing a lot of hardware testing, and I actually did the testing with LAVA as the hardware framework; I'm giving a talk about that tomorrow. So I just introduced a new class into core, ptest-perl, which makes it very, very easy to have Perl modules run their tests, and that was easy to do because Perl has very consistent testing conventions. My next target is going to be Python, and then we'll keep going from there. But one of the problems we have is that as people introduce new recipes, especially into meta-openembedded but also into core, we don't actually have complete ptest coverage. So even where whatever was being introduced had unit tests that could be run, nobody bothered, or took the extra time, or even knew ptest was there, to actually wire them up. Just very briefly, what ptest does is run a script that you have to provide, called run-ptest, and that's basically just a shell script, which might be as simple as calling make check, in which case you obviously have to depend on make and things like that, or it calls something else; it could call tox for Python, for example. So it's really not that difficult to do, but you do kind of need to wrap your head around it. I think we've realized that this is a particular point that needs to be documented. So my plan for the next couple of weeks is to capture some of the things I've done very recently, share that experience with other people, and try to get more community help with it. Great. Any other burning questions?
One thing, going back to what's going on with the project: we've been doing a whole lot of documentation work, and we happen to have our documenter here. So if you have any specific burning issues with the documentation, or requests, and would like to talk about them, this conference is a great time to do that. Also, there is a new Yocto Project website that just launched last week, and we're anxious to hear feedback on it, so if you'd like to come talk to us at the booth, we would be very grateful if you took a look at it.

I don't know how we're doing on time, but I think we're getting down toward the end. Dev Day, that's a good thing to bring up, isn't it? On Thursday we are running, I think, the ninth or tenth Dev Day. It's a developer day. It'll be offsite; we'll provide documentation for it. It's a paid event: a day-long training that you spend with the folks sitting here in this room, working on the Yocto Project. There is a beginner track that will take you from zero to sixty very quickly, and an advanced track where we cover a lot of the advanced topics we've been talking about here and more. So if you have any interest in that, come talk to me; I'm sure we can figure out a way to get you there.

So, any other questions? Oh, yes, one more question here in the back. Our package management has been ported to DNF; this is from Fujitsu. Where would they plug in for a demo up here in front? Fujitsu is a long-time contributor to the project. Sorry, is there a VGA connection up here? This looks like VGA; something's happening. Can you see anything? Does anybody have any further questions? Yes, David, say that one more time. Oh, the raffle. Yes, at the booth we have a couple of copies of Rudi Streif's excellent book on the Yocto Project. We're raffling one off today at the booth and one tomorrow.
So stop by and get a raffle ticket.

I have a question sort of related to the Docker question asked earlier. Is it possible to have a base Docker image that runs Yocto? I'm not quite sure what that would mean, considering you don't really have a package manager on a Yocto image, but it could still potentially be useful. Also, if you have recommendations on which training courses would be best for Yocto, that would also be really useful.

I might be able to answer the basics on that one, but did you want to take it? Okay, so the question was: is it possible to have a base Docker image that is running Yocto? And by that I think you mean a running Linux that was created with the Yocto Project. That's certainly possible; I mean, people run Linux in containers very easily. On the package manager point, that might really be a more interesting question about package upgrades and updates.

My thought there was that typically with Docker, you have your Dockerfile, and you're using apt or whatever other package manager within that Dockerfile to include all the stuff you need. With Yocto, obviously a system isn't built that way, so I'm not really sure how you would make a base image usable; how would you get everything you needed into it? I guess you could copy pre-compiled binaries from the system you're building the Docker image on, so I just wonder if anyone has done that, or what the thoughts on that were.

I'm trying to remember if we've done exactly what you're saying, but it's certainly possible to take one of the published artifacts, say a core-image-minimal or core-image-sato, and actually create a Docker container out of it. I don't remember for a fact whether we've done that or not.
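As a very rough sketch of what creating such an image involves, assuming the oe-core "container" image type and linux-dummy are available in your release; the exact variable set here is illustrative, not a tested configuration:

```bitbake
# local.conf sketch: produce a kernel-less rootfs tarball intended for
# container use rather than bare-metal boot.
IMAGE_FSTYPES = "container"
PREFERRED_PROVIDER_virtual/kernel = "linux-dummy"
```

After `bitbake core-image-minimal`, the resulting tarball under tmp/deploy/images/ could then be pulled into Docker with something like `docker import <tarball> mybase:latest`.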
So there are, on github.com/crops, containers to run the toolchain; there's a Poky container that will run BitBake and all those things for you, but that's not what you're talking about. As I said earlier, we do have the container image class capable of generating an image from a Yocto-based or OpenEmbedded-based build system.

The second part of your question was about package management. We're actually package-management agnostic, because we support Debian packaging, RPM packaging and opkg, so there is indeed package management there. The part that most people who are used to traditional distributions don't quite get is that you're building your own distribution. The distribution you are building, as we were alluding to earlier, is based on a particular kernel, a particular GCC version, a particular glibc version (or musl, or whatever you're using), and all of those things are really, really important to whether the binaries you've created in your package feed are going to work or not. There's also an issue with anybody actually producing binary packages for public consumption, because you are then signing on to a different kind of legal implication than when you're just providing the source code to build them. And that's an issue the Yocto Project itself would have: we don't really have an easy path to providing a public package feed that you could just point to. But I think there's a section in the documentation tips and tricks that talks about package feeds and how to generate your own, and it's actually quite simple to do. And then you could use that the way Docker does. So inside your company or your entity, you could create your own package feed, where everybody's on the same version of the kernel and GCC and everything, so none of those things are going to break. And then they can use that package feed to update their own images, or a VM, or a Docker container, or whatever. So that's definitely possible. I just don't think we've gone all the way to exactly the model you're talking about, although of course we intended to.

I would also say that, most likely, Wind River or some of these OSVs are indeed going to have package feeds and other things like that available. Once you go to the commercial side, where you're paying for the licensing, they have gone through that extra legal hurdle, so there's just a difference there.

Just speaking from some limited experience: you can certainly use Yocto to build the root filesystem, and then in a Dockerfile you can build a Docker image from just that root filesystem, with no additional steps needed to populate extra packages or configuration. So there is no need to also have a package manager in the Docker image. Resin.io is one company that does some work in this space; they generate Docker images in their build system, which is based on Yocto. Although I'm personally also curious about the work being done in core for generating these images. There are some other challenges, for example configuring services inside a container, and a variety of other interesting topics that come from generating container images in the Yocto build system.

So that sounds like some great ideas for the wish list for future releases. It was a couple of releases ago, about a year ago, that we got the basic container image support in there, but we definitely did not go a lot further with the container image class, so that's certainly a spot that could be interesting now.
And also, I think the world has changed in that direction a lot, even compared to a year ago, so it's an interesting concept to think about adding more support for. But we do have a lot of challenges with what we can support and what we can add, so we just have to figure out what the priorities are.

Are you ready? Training. Yes, so the Linux Foundation has training; we have a boatload of training courses there. If anybody else is putting on training, send us a note and we'll get your offering listed on the website as well. A big part of that: we have the Linux Foundation classes listed there, a map of where around the world these things are, and a map of consultants who also offer training. So the new website has a lot of new features; keep digging.

All right, with that I'd like to introduce... oh, one more question, I'm sorry. Any plans to support the Yocto build on Windows 10 with the Ubuntu subsystem? I think it's an interesting idea. Well, I mean, I don't think we've got plans; we'll see how it goes. Okay.

All right, this is a demonstration from Fujitsu. We developed package management for the Yocto Project, and now we've ported it to DNF. These are Yocto Project binary RPMs. First, run the environment setup script; we added a DNF script to the Yocto scripts. For package management, first run init, then launch the text UI and select packages to install, and install dev packages; they're selectable. They get installed into the rootfs. This option makes source archives, and this option makes SPDX archives. This is the installed result. That's all. This package manager is open on GitHub; please get it, or join and develop it with us. Thank you.