Okay, hi everyone, hopefully this is working. I'm Richard Purdie, I'm the Yocto Project architect and a fellow of the Linux Foundation, and I'm here to answer some questions. I don't quite see any yet. Okay, I've got a message that things seem to be live and working, so so far so good. So yes, if anybody has any questions, please ask away.

So I'm being asked how I first got involved in open source. I actually bought a small device, a Sharp Zaurus, and it came with Linux on it, but it was a 2.4 kernel and it had certain limitations. So I started messing around with that. That led me to build systems, to what was Buildroot at the time, and then to this project called OpenEmbedded, and everything grew from there. That's how I got started.

I'm being asked what the plan is for Yocto Project LTS releases, and how often a release is going to be marked as an LTS. Our current plan is probably every two years. 3.1 was released six months ago, so that will be our LTS release for the next couple of years. That's the plan right now, and it's great to see some of the traction we're getting around it; the LTS release seems very popular with people, so that's good.

Now a question about reproducible builds: what's their current status in the Yocto Project? We've made a lot of good progress, so we now have automated reproducibility testing on the autobuilder. I think that's currently working around core-image-sato, so everything up to core-image-sato is now basically 100% reproducible. There are a lot of different definitions of what reproducible means. For us it means that it doesn't matter which distro you build on, or the path you build in: it will always give exactly the same end result. We're testing that across multiple different distros at this point, so it's quite an extensive test. So everything up to core-image-sato is the answer.
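To make the definition above concrete, here is a minimal sketch (not the project's actual test harness, which uses the autobuilder and tools like diffoscope) of what a reproducibility comparison boils down to: hash every file in two independently produced output trees and report any path whose contents differ.

```python
import hashlib
from pathlib import Path


def tree_digest(root: str) -> dict:
    """Map each file's path (relative to root) to a SHA-256 digest of its bytes."""
    rootpath = Path(root)
    return {
        str(p.relative_to(rootpath)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(rootpath.rglob("*"))
        if p.is_file()
    }


def compare_builds(build_a: str, build_b: str) -> list:
    """Return relative paths whose contents differ (or exist on one side only)."""
    a, b = tree_digest(build_a), tree_digest(build_b)
    return sorted(k for k in a.keys() | b.keys() if a.get(k) != b.get(k))
```

A build is reproducible in this sense when `compare_builds` returns an empty list for two builds done on different host distros and in different build paths.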
I'm being asked about the 10th anniversary of the Yocto Project. There's actually a presentation about that: Jefro and Nico, our past and present community managers, are giving a presentation, and I think I even feature in that video, so I'd say go and watch it. For me it's amazing that the project's been going ten years. It's great to see some of the places it's gone, but I didn't think ten years ago we'd be sitting here talking about the anniversary, so it's cool.

Biggest features coming to Yocto soon? I don't know; why don't you tell us what features people want to see? We've seen some really significant changes recently. The latest ones were hash equivalence and the reproducibility work. In the past we've had recipe-specific sysroots, and even the layer model was a new and big change at one point, so there have been a lot of those over the last ten years. It's hard to predict the future; it depends what people work on, but I don't think we're done yet. I think there are a lot of good ideas out there.

I'm just reading the next question: are there any plans to rework the application order of variables? There are lots of ideas to do with that. I know I did put a proposal out recently about default values, and a number of flaws were pointed out in that proposal. The trouble is that as soon as we start changing anything it breaks compatibility, and that does cause a lot of problems; one of the big advantages of what we have today is the flexibility and the power those variables give you. So it's a really good question. I'm certainly open to creative ideas on how we could improve things, but there is now quite a bit of legacy: ten years of legacy for the Yocto Project alone. So I've got mixed feelings on that.
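For readers unfamiliar with the variable-ordering question being discussed: the flexibility (and the compatibility constraint) comes from BitBake supporting several assignment operators whose effects depend on the order files are parsed. A rough illustration, with a made-up variable name (behaviour summarised from the BitBake manual; older releases spelled the override syntax `_append`/`_prepend` rather than `:append`/`:prepend`):

```conf
# Illustrative conf fragment showing BitBake assignment styles
EXAMPLE = "hard"          # plain assignment; later plain assignments win
EXAMPLE ?= "default"      # default: only takes effect if EXAMPLE is unset here
EXAMPLE ??= "weakest"     # weak default: overridden by either form above
EXAMPLE:append = " tail"  # appended when the value is finally resolved
EXAMPLE:prepend = "head " # prepended likewise
```

Any change to how these interact risks silently changing the final value layers compute today, which is the compatibility problem described above.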
Even yesterday I was messing around, seeing if I could make the parsing a little bit faster, because I had some ideas about where we might be losing some speed; but it turns out it's only worth about 10% to 15%, not the huge gains I was hoping for. So it's certainly something we keep an open mind about, but it's also very hard to do given the legacy situation. If you've got ideas, please talk to us.

My opinion on GUIs such as Toaster, and whether it's really useful? I'm actually a little disappointed that we haven't had more GUIs over the last ten years; that's something I would like to see. Toaster, from a development standpoint, has stalled a bit more recently: it doesn't have the critical mass of developers behind it, and I think it needs to move forward and gain new features. It's still working, it's still there, it's still being looked after, but it's not really developing. So I would love to see more GUIs built around the interfaces BitBake provides, or even around new interfaces. I think there's a lot of potential there for development. It's very difficult, because the project is so wide-encompassing, to build a user interface around some of that functionality, but when you get it right I think it's very rewarding. So yes, I like Toaster and I'd like to see it grow more. I think we do need to improve some of the APIs in BitBake itself, but we have tried to create those: things like the tinfoil API, so you can access these things. So I'd like to see more GUIs such as Toaster, but I'd also like to see more command-line-type utilities communicating directly with BitBake over the likes of tinfoil.

It's difficult to keep up with the questions as they come in. So: arguments to convince colleagues to migrate to the Yocto Project away from a Debian-based OS, given concerns about reliability and test overhead.
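To show the tinfoil API mentioned above in practice, here is a short sketch of the documented usage pattern. Note the assumptions: it must be run from an initialised Yocto/OpenEmbedded build environment (so the `bb` module is importable and a build directory is configured); it will not run standalone.

```python
# Sketch: querying BitBake metadata over the tinfoil API.
# Assumes an initialised build environment; not runnable standalone.
import bb.tinfoil

with bb.tinfoil.Tinfoil() as tinfoil:
    # Parse only the configuration (fast), not every recipe.
    tinfoil.prepare(config_only=True)
    d = tinfoil.config_data
    print("MACHINE =", d.getVar("MACHINE"))
    print("DISTRO  =", d.getVar("DISTRO"))
```

This is the kind of programmatic access a GUI or command-line utility could build on, rather than scraping `bitbake` output.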
It's a good question. I mean, Debian is good for some things; the Yocto Project is good for other things. It often depends what it is you're trying to do. If you need to do any kind of licence auditing or disclosure, the Yocto Project is one of the best systems out there for that; you can't really do that from the Debian standpoint. I think our LTS story is a lot stronger now with some of the developments we've had in the project. From a reliability standpoint, once you have the Yocto Project working, the technology we've built into the system means it should stay working. It's designed to be reproducible: it's designed to give you exactly the same build result if you try to rebuild something in five or ten years' time. So I think from a reliability standpoint it's robust. It does need a little bit of upfront overhead for setup, but once you've got there it's phenomenal for being able to handle things like security fixes: if a security problem emerges and you need to patch the source code and then re-release it for some old binaries that you've shipped in a product. You're taking control of your own destiny with something like the Yocto Project, whereas with Debian you're reliant on other people.

How small a rootfs can poky-tiny make? That's a good question. There was a patch on the mailing list I noticed came in over the weekend that improves the size of tiny by, I think it said, a 13% reduction. It's always a feature trade-off: if you want to get rid of everything from your rootfs, you can have a really, really small rootfs. So you've really got to define what feature set and functionality you want; but if you can configure Linux down to that size, the Yocto Project can build it, because we're a build tool.
So it really comes down to which components you're using and what functionality you want within that. You can configure it down to something that's NOR-flash-sized, or sized for eMMC.

There's a question about the LTS release model: is it focused on securing Poky, or does it focus on the build system? It's very much about making sure that the OS itself is secure as well. At the moment the LTS is focusing on the components in OpenEmbedded-Core, but if there was a significant security issue in the layers, that would be something we'd probably look at as well. It's not just about the build system; it's about making sure the components we're building in the core stay secure and up to date. We're monitoring CVEs, we're monitoring issues that get reported to us. But while the LTS has a maintainer who's putting it through test cycles, making the releases, and pulling everything together, we do require some help from the community in submissions to that. Going forward, I think we're in a good position for OpenEmbedded-Core; beyond that, there are things we can do, but it will need community help in making sure we get the patches and get the issues reported to us. So it really is going to be partly about the community collaborating together, but we've put all of the pieces in place to allow us to do that. It's much like the Linux kernel: the stable releases are going to be what people make of them. So help us with that; but it is definitely not just focused on the build system, it's focused beyond that, on the components as well.

When we added recipe-specific sysroots, there was talk about task-specific sysroots. Task-specific sysroots came from the realisation, when we added recipe-specific sysroots, that things do change based on the task.
The trouble is that some sysroots need to be preserved between tasks: you can't change the sysroot from under the system between configure and compile, for example; that just will not work. Whether there should be a packaging sysroot and a compilation sysroot might be something we could look at in the future; there's nothing particularly hardcoded in the system that says they have to be recipe-specific, but that's certainly what made most sense to implement at the time. In the future we might do something like making host tools recipe-specific: currently the host tools are build-wide, and we might potentially make those recipe-specific. We haven't found too many problems with task-specific issues right now, but it's something we could certainly think about in future; there's nothing hardcoded in there related to that.

External toolchains: good or bad practice? They probably have their uses and their places. I don't particularly like them; I don't see that we really need them in most cases, though I do understand why they exist, so I tend to ignore them. Some people have done a good job of integrating them; some people like them, some people don't. In general we've found people tend to move away from them. In some ways I'd like to see external toolchains built as a sort of locked sstate, for example, rather than some of the current external toolchain models, but nobody's actually tried that yet. They're kind of a fact of life, the same as binaries on systems, really.

Can I explain the recent improvements made to pseudo? I can; I'm just trying to think how quickly I can do it. We'll come back to that one, perhaps, if we've got time.

So there's a question about why there's no standard way to bootstrap BSP layers using, for example, repo or submodules.
I think that was a contentious and slightly difficult decision we made early on in the project: we weren't going to mandate a way of doing that, and it has turned into a little bit of the wild west, with so many different ways of doing things. In some ways that's a strength of the project; it means people aren't locked into a particular way of doing things. But I think we've always wondered about creating some sort of setup tool that would help with that in some way. There's a list of future directions that was published recently, which the TSC, the technical steering committee for the project, had worked on, and that is one of the topics in there. So if you want to find out more, go and have a look at that document and ask us on the mailing list. It's something I think we probably do need to address at some point, probably one of those things for the next ten-year cycle. But it's difficult, because everybody has a very specific idea about what they want to do and how they want to do it, so it's by no means a simple problem to address.

So we're left with the pseudo question, and I'll try to give a quick overview of that. The problem was one of the design assumptions in pseudo: it assumes that it is responsible for the permissions of everything in the system. So if you go and modify some files behind the scenes, pseudo can lose track of things and get quite upset that those files have changed without its knowledge. You couldn't really tell pseudo to focus on just a specific directory.
So the changes we added allow pseudo to focus on one specific location and to ignore certain other paths. That leaves the database a lot cleaner, because it's not trying to track files that we don't actually care about; but it meant that certain recipes needed to be changed, and so on. It was a fairly late-breaking problem. The issue we had was that pseudo was tracking files which were then changing: a file could be created with a given inode, deleted behind the scenes, then a new file created that reused that inode, and pseudo was confusing permissions across those files. The changes we've made try to address that.

Okay, I think I've only got a few minutes left, but I'll try my best to get through the rest of the questions.

Can I explain what I meant by locked sstate? This is where, instead of BitBake calculating a particular hash for a task and then checking whether that hash exists in the sstate cache, you just tell BitBake that this particular task has this sstate hash value. That's one of the ways the eSDK can work: there's a locked signatures file, a standard BitBake file, and you just write some entries in there; it locks down the sstate to that particular thing, so even if the underlying system changes from under it, it will use the artifact with that particular hash from the cache. It's used extensively, no pun intended, in the extensible SDK case, but it has uses outside of that as well that we haven't really capitalised on yet.

Okay, is there a way to overlay a layer.conf file with a bbappend, like a recipe? Not really, no. We've tried not to complicate things by adding conf append files or something crazy like that, so I'm afraid there isn't a really easy way of doing that.
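The inode-reuse confusion described in the pseudo answer above can be illustrated with a toy model. To be clear, this is a deliberately simplified sketch, not pseudo's actual code: it just shows why a database keyed by inode number goes stale when a file is deleted and recreated behind the tracker's back.

```python
# Toy model of the pseudo inode-tracking problem (not pseudo's real code).
# pseudo keeps a database mapping inode numbers to faked ownership/permission
# data. If a file is deleted and a new file created *outside* pseudo's view,
# the kernel may reuse the inode number, and the stale database entry then
# applies the old file's faked permissions to an unrelated new file.

class FakePermissionDB:
    def __init__(self):
        self.by_inode = {}  # inode number -> (path, faked_mode)

    def record(self, inode, path, mode):
        self.by_inode[inode] = (path, mode)

    def lookup(self, inode):
        return self.by_inode.get(inode)


db = FakePermissionDB()

# pseudo intercepts a chmod and records faked permissions for inode 42.
db.record(42, "rootfs/etc/shadow", 0o600)

# Behind pseudo's back: that file is deleted, and an unrelated file is
# later created that happens to reuse inode 42. The database is not updated.

# A permissions query for inode 42 now returns the *old* file's entry,
# so the wrong permissions get applied to the new file.
stale_path, stale_mode = db.lookup(42)
```

The fix described above narrows what pseudo tracks (one location, with ignored paths), so far fewer unrelated files end up in the database where they could collide like this.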
You can do pretty much anything in anonymous Python, for example, but there's no mechanism like a bbappend for that.

Can I give some pros and cons of using Poky directly versus creating a custom distro? To be honest, most people want to create their own distro. It sounds a little bit crazy, but in the context of your project, creating your own distro conf file is fine. Poky is a good default: you can inherit from it and then just build on top of it; that's what poky-tiny and poky-altcfg do, so there are examples of those there. You can create your own distro with nothing in it, just inheriting from Poky, and then if you ever need to change anything, you have the place and the mechanism to do that. So my advice is: Poky is great as a reference distro, but inherit from it or create your own; that's what the system's designed to be able to handle.

Is there a reason the BitBake user manual is not part of the mega-manual? We're busy converting all the documentation right now from DocBook to Sphinx, and I think the BitBake manual will be included in the new equivalent of the mega-manual in the new setup. If it's not, remind us and we'll see what we can do about that. The move to Sphinx means the documentation is a lot more accessible for people to make changes to and develop, so hopefully that should be one of the advantages of it.

I think I'm probably running out of time; I'm not entirely sure how I can see that in here. To wrap up, what I will say is that if anybody does have any more questions I've not quite got to, please do find me in the conference system and I'm quite happy to try to answer those. There is also a Yocto Project Slack channel that's accessible through the booth; come find us there and you'll find people who can answer a lot of questions about the project. Thanks, everyone.