My name is Chuck. I'm a Boeing employee, and I'm going to talk to you about designing a certified Linux using DO-178C. Yocto/OpenEmbedded is sort of the substrate of all of this, in case I forget to mention that.

Just a little about myself. I have a BS in mathematics, and I've also done an insane amount of coursework in a bunch of other areas, because one of the cool benefits of working for Boeing is that as long as you can justify the business case, you can take whatever schooling you want. So I am a perpetual student that way. I am noted in the credits of several early editions of the Red Hat Linux Bible; early editions, because once I got paid to do this I didn't have time for anything else. I'm also the co-author of an old book, Linux Toys. Way out of date, don't waste your time. It was fun to write, and it's a great book, but it's way out of date now. I was the president, and am now the benevolent dictator for life, of the Tacoma Linux Users Group. I've got 20 years in as a Boeing software engineer, and I was inducted into the Technical Fellowship last year, which I guess is like technical principal at other companies. And if you flew here on a Boeing aircraft, the airplane you were flying on probably had an embedded Linux OS that I was responsible for.

We'll go into some more details here. This is some of my early work: the stuff on the right, not the left. This is Simone Biles and MyKayla Skinner sitting in the flight deck of a 787. I was stoked when I saw this; I sent it to my parents, my whole family, because just to the left of Simone Biles' shoulder you can see the display in there. See the rectangles and stuff? That is called the electronic flight bag. I did a lot of work on that system. I knew it like the back of my hand.
I jumped out of my seat when I saw that. I know the OS, everything, back and forth. That was the first entry into a concept called e-enabling. If you go to Boeing's public site you can Google for it; there's a whole paper on the electronic flight bag written by former chief engineer Dave Allen, who has since passed away from pancreatic cancer, a tragic loss. He was actually part of the standards effort on DO-178B and all of that, so he had been everywhere. I am the luckiest person in the world to have worked under him. He was responsible for that system, and it was the first of its kind. There's an insane amount of data riding around on aircraft buses, as you can probably imagine. It's insane, right? Starting in the mid-to-late 90s, airlines were saying: it's expensive to run an aircraft. How do we take advantage of this data and make it cheaper to run an aircraft? And I need my pilots to not carry 70 pounds of paper; that weight could go to revenue-paying customers, you know? So that was the advent of the electronic flight bag and e-enabling: basically extracting that data from the bus and making it actually useful. There's an SDK for that electronic flight bag; if you work for an airline and the airline has purchased the SDK, you can write apps for that environment, and that has continued to this day. So that's the old stuff. I've been working on way cooler stuff since, and I'm going to talk to you about our development environment now. Obviously I can't talk about the proprietary stuff, but definitely Google around; it does appear in the news here and there, and I'm like, that's really cool, I'm excited about that.

Let's see. So, first question, a quiz: can software be certified, true or false? What are you saying, true or false? Raise your hand if you think true. Raise your hand if you think false. You're both right; it depends. In aerospace you can only certify systems, and by "certify" I mean you're talking about adhering to a standard, okay?
You can only certify systems. That means if anybody says to you, "I have a certified Linux kernel for aerospace," they're lying to you; they're trying to scam you out of your money. You can only certify systems. So you can take an entire Linux OS with a set of requirements, the whole nine yards, and that system can be certified, but then it has to be loaded onto an airframe with its own certification plans. It's a lot like building codes: this building was built at this time, so it's responsible for these particular codes. For the 777 airframe, my job has a certain amount of certification difficulty; on a 787 airframe it's going to be different, because that was a later airframe than the 777, so the standards are different. A lot of the work I do is what we call multi-model, so we try to, what's the word I'm thinking of, Venn diagram the most difficult piece, and it usually covers all of them.

Software systems are like airplanes, and airplanes are just systems of systems of systems of systems. In fact, one of the really important career paths at Boeing is systems engineer, and I'd been at Boeing so long I didn't realize it's not well understood what a systems engineer is. You're the engineer responsible for all the boundary layers, sorry, the connectivity between all these systems. I am not a systems engineer; I'm a software engineer. I build a system, I'm responsible for a system that fills a role here, but I work with systems engineers, and these people are just incredible. They've got encyclopedias inside their heads, and every time one retires it's a depressing and tragic event.

Updates: you know, version one, version two, that kind of thing.
It's not just "push it out." There's a whole process for getting these things out; software is managed like airplane parts. We call them loadable software airplane parts. Just like a piece of metal, each one has a part number and a whole chain of custody attached to it. In fact, a lot of the time I'll finish a release and then six or eight months later I'll ask, "has that gone out yet?" And yes, it's finally getting out to the fleet; there's a whole other apparatus that sends all of our releases out to the fleet. When we make a change or an update, we have to go through what's called a software change impact analysis. We certify it once, and then, since obviously no one's ever happy, you want more and more, and there are bug fixes and all that, we have to go through a software change impact analysis to release each update going forward. And these are embedded Linux OSes, I should mention.

Remember, e-enabling is making use of, democratizing, information on the airplane, so real time doesn't really play a huge role. There are some real-time aspects to this, but the vast majority of the Linux OS work we do is actually not real time, although it is safety-critical.

Let's see. Okay, so let me give you a rough idea of what the regulatory environment looks like in aerospace. In the United States, at Boeing in particular, we have what's called an ODA. And no, the diagram is not to scale, it's really not to scale. ODA stands for Organization Designation Authorization, and it means there are Boeing employees who have literally more power than the CEO: they can stop everything. They are called engineering unit members, EUMs. They're like lawyers in the amount of training they have to get; they have to go through a whole process to get it.
We call it getting the ticket. The FAA puts them through a panel interview and everything, and then you get the ticket. You have an EUM attached to a project, and they shepherd you through the entire certification process. They have audits called SOI audits, stage of involvement audits. At each stage you have to meet a certain set of requirements, and there are findings between them; they shepherd us through all of that. That's a long-winded way of saying I am not a lawyer. I know enough to do my job, I make sure I have a really good EUM to work with whom I can ask tons and tons of questions, and I treat them very nicely.

Regulation always goes back to government: the Constitution gave us Congress, which gave us the Air Commerce Act, which gave us the FAA. EASA has a similar history, but it's huge and I just didn't have enough space on the slide. The regulatory agencies are not responsible for telling anybody what to do; they just create the objectives. The Federal Aviation Regulations, the FARs, make some statements in very broad terms. Those are the objectives, and EASA has similar objectives. The industry gets together through RTCA (the Radio Technical Commission for Aeronautics, sorry, that's a mouthful) and EUROCAE. Those are the standards bodies that bring together objectives and consensus from the industry, and they create these funny documents that are the approved means of compliance. So if you want to get a release of software into the field, the happy, easy path is just to use DO-178. If you have object-oriented code, you want to go through DO-332; if you want to qualify software tools, you go to DO-330; et cetera, et cetera. DO-356A is a new, emerging one.
That's software security, cybersecurity. This is the first time, by the way, that we actually have what we call negative requirements. Philosophically, you can't prove a negative, right? Well, guess what: we have to start proving negatives now. DO-356A requires it. These are all the standards that a cert plan for a particular airframe says we're going to apply. So, as we were discussing earlier about which standard applies: that has to do with when the airframe shipped and which standards the FAA and EASA want us to apply.

If we're doing something incredibly novel that even DO-178 doesn't cover, and a couple of times we have, meaning the means of compliance doesn't cover what you're trying to do, you actually write an issue paper and submit it to the FAA and EASA, asking: does this meet the aviation regulations as you see them? Depending on what it is, that would ideally end up either as an advisory circular or in a standard at some point in the future. And surprisingly, there's one I've been involved in, and there's another that's probably going to happen as well, where DO-178 doesn't cover it and it's software.

Okay. Most of the work I do, though, is dictated by DO-178C; when I started at Boeing it was DO-178B, and my chief engineer was part of the DO-178 effort. This is industry-accepted guidance for satisfying airworthiness requirements. The FAA does not tell you how to do it; they just say what it needs to be, and industry decides how to comply. There are really two important sentences that dictate my life. The first: does the design adequately reflect the hazard?
So we have a functional hazard assessment. When the business decides they want a new feature, someone has to sit down and do a comprehensive functional hazard assessment, and that's when they decide what level of hazard it corresponds to. And then, sorry, I jumped ahead a little there, at the end: does the implementation match the design? If I were to boil down all the certification work I do, it comes to those two sentences: does the design reflect the hazard, and does the implementation match the design?

It's called design assurance level: we assure the design meets this particular level. So for DAL E, no safety effect, we call it operational approval. Usually that's when a regulator or EUM just goes on a test flight and says "looks good, ship it." Obviously at DAL E you don't have to do as much; the work ramps up the higher your DAL, up to DAL A. I understand in some other industries this is inverted and D would be the highest, so sorry about that. It's like wire gauges, right? One-aught is bigger than a 13.

Anyway, I usually work in the minor range, and the bars on the slide are relative levels of difficulty. The big jump is from D to C. At D we have what's called high-level requirements. These are functional requirements: the system shall do this, the system shall do that. You're not even specifying kernel-level stuff, generally.
You're not really specifying kernel-level stuff; it's sort of implied that you're going to have a boundary layer between software and hardware, and that's usually a Linux kernel. Sometimes we'll say there will be some sort of replication occurring along a bus, and so you'll have to say: I need this driver that does this. At level D there are only 26 objectives, and only two have to be satisfied with independence, which means hands off: somebody else has to prove it was satisfied. All the way up to C, which requires low-level requirements. That means, not literally that every line of code has one, but that you could practically write the code just from the low-level requirements. It's a phenomenal amount of work. Especially if you look at the Linux kernel: you peel out the drivers, and how many millions of lines of code is it still? You use Kconfig to pare it down even more, but you still have to have low-level requirements covering every line of code, so it's a daunting task. And then major, hazardous, catastrophic: technically speaking you'd need to follow every code path with every possible input value. That's not possible; it's physically impossible. So we have what's called modified condition/decision coverage, where you can make some justified guesses and assumptions; it requires experience and expertise to do that. C just requires code coverage: you need to make sure all your tests cover all the lines of code. So like I said, I work mostly in D and E.
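To give a flavor of what modified condition/decision coverage means in practice (this worked example is mine, not from the talk): every condition in a decision must be shown to independently flip the outcome, which you can typically demonstrate with N+1 test cases for N conditions instead of all 2^N input combinations.

```text
Decision: if (A && (B || C))

Test   A  B  C   Outcome
 1     T  T  F     T     pairs with test 3: only B differs, outcome flips
 2     T  F  T     T     pairs with test 3: only C differs, outcome flips
 3     T  F  F     F     baseline false case
 4     F  T  F     F     pairs with test 1: only A differs, outcome flips
```

Four tests cover three conditions; exhaustive path testing of the same decision would need eight, and for real code the exhaustive number is astronomically larger, which is why MC/DC exists.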
So this should give you some flavor of what I'm operating in; I have not had to solve very many problems in the A, B, and C space.

Cool, okay. Nothing I am telling you is written in DO-178C as "thou shalt do this." Nothing in what I'm going to talk about says you have to do it this way. I've been doing this long enough, and tried a ton of stuff, that I know what has worked and what has not worked. I know what is painful when you're sitting down with a regulator, and what is not painful. The big point here is that the burden of proof is on the engineer. So make it easy on yourself, really easy on yourself: simpler is a lot easier to prove. Everybody there is really smart, it's a high bar to get into Boeing and do this stuff, but they're busy, so you need to make things simple enough to digest, understand, and move on. Regulators and software engineers are human too. You don't know what someone else is coming to the table with, so you have to be able to work with everybody. And there's more than one way to do it, right? One size does not fit all. So again, what I'm telling you is not the way you need to do it; I'm saying this is what has worked, and there is a lot of software flying around on aircraft that has used this approach. That's all I'm saying.

Oh, wrong direction, okay. So: human error is a symptom of bad design, and here I'm talking about how you manage your development team. If your workflow is such that well-meaning people are causing problems, don't blame the developer.
That's probably because you designed your workflow wrong, and it's a good opportunity for improvement. Here are the requirements for what I'm doing. It has to support hundreds of internal developers and tens of embedded distributions, and it has to support distributed application-layer development, meaning there is application-level functionality within these embedded OSes that is not built by the embedded development team, but the embedded OS is the integration point. It has to be simple, turnkey: I need to be able to tell a brand-new developer, in a couple of steps, how to deploy and build. It has to be repeatable: two developers have to get the same result. This isn't just about producing the same binary; two developers working from the same set of documentation should get the same result, because I use a Mac, some Boeing engineers use Windows machines, and I even have one engineer who got rid of the Boeing image and runs Linux on their desktop. It has to be consistent, which means that if I'm pulling in something where someone decided to use Gradle or Bazel for their application-level build, I need a consistent way of pulling it into the embedded build. BitBake is great for that: it's a thin wrapper around whatever you want to use to build your stuff. It has to dovetail into the existing system; I can't have a third-party application developer do something that ripples out and forces me to change the build a lot. It has to be extensible: think of it as a pipeline we have to be able to add things into. Testing, for example: if we want to do automated testing, it has to be qualified.
So we want to be able to put that into the pipeline. We have to do configuration management, meaning project leaders have to be able to turn dials and knobs on the development environment, and when a developer does a pull, that has to apply the changes to their environment without anyone having to send out that email. You know that email, right? "Hey everybody, do this today." Three people didn't see it, two didn't understand it, and then the rest of your day is shot because they're confused and broke something. You can't have that. And of course it has to be traceable: change requests have to go all the way back to your change control system, whatever you're using, and you have to be able to show a regulator that you've done your code reviews, all that kind of stuff.

Okay, let's step back to how we build distros; this works for embedded, for Red Hat, the whole nine yards. When I started at Boeing we didn't have Yocto/OpenEmbedded, so I used Red Hat and we did an RPM-based embedded build.
We just use the concept Step zero, of course is you have to build your native tools Because you do not you know, even if you're building if you're on x86 building for an x86 target You can't use the compiler resident on your system because that's going to build application stuff That is generally tailored for that environment, you know, so you want to build native tools that allow you to abstract that Then you want to you got a bill of materials of all the individual application level stuff You want in your environment your Linux kernels one of those you build your intermediates then you create a root fs You'd think of that as like a sparse loop back device something like that You mount that install the packages into there sort of as a charoot or a fake root You generate the image a lot of times what you'll do if you're if you're building an installer We're installing to bespoke hardware, so we don't need anaconda. That's not necessary Installers are actually really simple because we know everything we know the the hard drive We know where everything's located so it's just a rote step-by-step script So a lot of times what we'll build is the final image will be a Little Linux OS that the init process just runs and install script and embedded inside inside that ISO is the blob That is the embedded OS that then gets installed to the system That's usually like I don't want to call it a firmware OS But a base OS that then in the field you can data load on top of that to a fully functioning operational environment Okay and then Round 2017 this was mature enough that we said great. We can and it did way way way better I could go on for days about this About all the things it does better than the system that that I built and a lot of people maintained with him Boeing So I it was did it did a couple of year-long test run and it was like great We're gonna go to the octa open embedded room. I pulled this from the octa open embedded documentation. 
I don't really have time to go through all of it, but I definitely recommend it; they have unbelievably good docs, and they take fixes, so if you find something you want to change in their docs, send a patch. It's basically the same thing we had been doing: package-based builds with the ability to turn dials and knobs. We did it in spec files, with scripts that would template the spec files; this is just much cleaner and much nicer. Plus there's testing built in, CVE checking built in, you can look at your build graph (you can generate an SVG of it, which is still difficult to read), and it builds your SDK. Also, you don't have to build any of your native tools: you just say which machine you're building for, and bam, that's all done for you. Who wouldn't want this? It was a total no-brainer to switch. It's great.

For those of you who haven't used Yocto/OpenEmbedded, this is what a BitBake recipe looks like. To anybody who's never heard of it, I say: a BitBake recipe is just a thin wrapper around whatever you're already using to build your code, make, Gradle, Ant, Maven, all those kinds of things. Just a thin wrapper. As far as I can tell, since it builds an abstract syntax tree and lets you inline Python and shell, it looks Turing-complete to me; you can do whatever you want in there. This is a very simple one. You'll notice that the SRC_URI points to the source code; we'll get to that. There's a checksum, so your project repository will know if someone upstream meddles with the bundle you're trying to pull down. And of course license auditing.
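The slide itself isn't reproduced in this transcript, so here is a minimal sketch of the kind of recipe being described; the project name, URL, and checksum values are invented for illustration, but the variables are standard BitBake:

```bitbake
# hello_1.0.bb -- minimal BitBake recipe (hypothetical example)
SUMMARY = "Example application wrapped for the embedded build"
LICENSE = "MIT"
# License auditing: the build fails if the license text changes upstream
LIC_FILES_CHKSUM = "file://LICENSE;md5=0835ade698e0bcf8506ecda2f7b4f302"

# Fetch the source bundle; the checksum catches upstream tampering
SRC_URI = "https://example.com/releases/hello-${PV}.tar.gz"
SRC_URI[sha256sum] = "aaaabbbbccccddddeeeeffff0000111122223333444455556666777788889999"

S = "${WORKDIR}/hello-${PV}"

# The "thin wrapper" part: these tasks just call the project's own build
do_compile() {
    oe_runmake
}
do_install() {
    install -d ${D}${bindir}
    install -m 0755 ${B}/hello ${D}${bindir}/hello
}
```

The checksum lines are the point being made in the talk: if either the license file or the source bundle changes upstream, the fetch or license audit fails loudly instead of silently pulling in different code.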
That's really important: you need to know when something changes, and especially as an OEM you need to know you're not going to get your company into trouble. So that's a really important feature. And of course, if you want to go home tonight and build your own Linux OS and play around with it, you can do so in five commands and a few hours. It's pretty easy; it's not that hard. Once you've done that, it's easy to go back, look at how it's configured, and build your own distro. I typically do that when I want to spin something up: I'll start with something like a Rocky Linux VM, build my own, and go off with my own Linux. I have a personal project that I've kept hidden on GitHub, which I probably should open up, for my own Linux that I curate and play with using this.

Okay, so: what did not work? Remember, I don't mean to slay anybody's favorite tool, I'm not attacking anybody, especially with git submodules. You want to see people get upset, you talk about git submodules, right? There are only two sides to that; vim versus emacs is the same thing. Out-of-tree source code, we'll talk about that, and hybrid builds; I'll explain what those mean in a second. Remember, I have to serve a lot of developers and make sure they're productive. There's nothing wrong with submodules as such, but every objection to submodules is overcome with more complexity, and it's usually people who don't fully understand them telling you to do something a lot more complex. I love git, it's magic, but I don't need every single developer to be a git expert in order to be useful. Submodules also violate the do-one-thing, single-responsibility principle, and they obscure the boundaries between local and upstream code, which really makes me nervous for safety-critical stuff.
I do not feel comfortable with that at all. And at the end of the day, it's too little benefit. What I'm showing you here, and hopefully this will become clear, is a curated environment. Nothing about git submodules makes my life easier; it only makes it harder.

Okay, out-of-tree source code. If you're used to meta layers, you know it's all out-of-tree source code: a BitBake recipe with a URI that points to a bundle. And that's awesome; don't ever change that. No one is going to herd all those cats into moving in one direction and putting their source code in one repository. That's absurd, except inside a company like Boeing, where you have embedded builds decomposed into hundreds of BitBake recipes, almost all interrelated, maintained by, not hundreds of teams, maybe two or three teams tops. When we do releases, with tagging and branching and tracing and code reviews and standards and all that, you can't imagine how difficult it is if you make me curate that across hundreds of git repositories. It ain't happening. I've seen people on the Yocto mailing list ask, "do you people really do this?" Yes: we include all the source code in-tree. At the base is the BitBake recipe, then a subdirectory, and then all your source code in there. I promise you it makes life easier. We tried it the other way, and it is very, very painful to maintain hundreds of git repositories; it's the Wild West, and it makes your job a lot more difficult. So: outside, don't ever stop doing out-of-tree; inside, there are good reasons to not do that and to have in-tree source code. And many, many thanks to Richard Purdie. I sent a desperate email saying we had found that when you made a change to certain source code in the tree, it didn't trigger invalidation of the cache. He said, "oh, you know, I was testing something around that, try this patch."
That's exactly what I needed, and it solved it for us. I'll go more into that in a bit.

Another one is hybrid builds. When we were testing which way to go, Yocto/OpenEmbedded or staying with the RPM package-based build environment, this is where we did half of it, the base layer if you will, with the Yocto BitBake build, and the rest with mostly CMake but also Gradle, and then a script at the end would smash it all together into a loadable software airplane part, an LSAP. It's awful. Don't do it. It was necessary at first, because this is aerospace and you have to take baby steps, but once we knew Yocto/OpenEmbedded would work, and it's great, it was time to cast that aside. Don't do hybrid builds: go all in or all out.

Okay, so let's talk about what did work. Virtualization is a key tool, the most important enabler, and I mean virtualization for the developers and for test; it's the thing that enables everything else. Then mirroring, and there are two types of mirroring I'll talk about, and pinning.
Yes: pinning, shimming, wrapping it all together in a wrapper, building everything from source, and recipe generation.

Let me take a second here. As software engineers, we all understand the difference between mechanism and policy, right? As someone building something, you want to build mechanism and let the user decide policy. This is a hotly contested thing in the Yocto/OpenEmbedded world, and I think Richard, to his credit, has done a really good job preventing policy from invading the OpenEmbedded and Yocto ecosystem, because policy is not one-size-fits-all. Mechanism generally should be one-size-fits-all; policy should not. My motivation for a lot of this talk is that when I want to say, "hey, I think we should add this to the code base," I need to make a case for why it's good mechanism and not just my pet policy. That's one of the reasons I'm giving it: I hope I can at least make that case, or at least start a dialogue. And I have unsolved problems at the end, so it ain't perfect.

So let's talk about virtualization. Let's start off here.
You have a baseline set of features; that would be an image build, a BitBake build of a base image. You're not actually building that image directly; you inherit it from other images. You add operational features, the things the airline customers expect us to have in the embedded operating system, so the operational-features image inherits the baseline-features image. That is where you build your fly-away image, and it isn't that hard to build an operational appliance from it either. Yes, we have built bbclass functionality internally: there's an image type, LSAP, loadable software airplane part, that builds ARINC 665-3 LSAPs directly, straight from a BitBake build. That's a mouthful, but it generates LSAPs directly from the build. We also added bbclass functionality, which we would like to open-source, to generate OVA appliances. I once told a VP, who is, mind you, very intelligent, but not a software engineer, "download our build and double-click it," and as long as they had VirtualBox installed, bam, they had our operational appliance running. It was really cool.

And then you can take your baseline features, add developer features underneath, and create a build-dev appliance. By the way, baseline features would be things like the kernel, glibc, anything you could imagine being common between the two in a non-RTOS environment. So you add developer features; use your imagination, right?
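The LSAP image type mentioned above is proprietary, but the general shape of a custom BitBake image type is public mechanism; here's a hypothetical sketch (the `arinc665-pack` tool and the variable defaults are invented, `IMAGE_CMD` is the standard hook):

```bitbake
# image_types_lsap.bbclass -- hypothetical sketch of a custom image type.
# The real Boeing class is proprietary; the packer tool below is invented.

# Register the new type and require the rootfs tarball to exist first
IMAGE_TYPES += "lsap"
IMAGE_TYPEDEP:lsap = "tar.gz"

LSAP_PART_NUMBER ?= "XXX-0000-0001"

# IMAGE_CMD:<type> is the hook BitBake calls to produce the artifact
IMAGE_CMD:lsap () {
    # Wrap the rootfs in an ARINC 665-3 load carrying its part number
    arinc665-pack --part-number ${LSAP_PART_NUMBER} \
        ${IMGDEPLOYDIR}/${IMAGE_NAME}.rootfs.tar.gz \
        ${IMGDEPLOYDIR}/${IMAGE_NAME}.lsap
}
```

An image recipe would then just add `IMAGE_FSTYPES += "lsap"` to get a loadable part out of every build, which is the "straight from BitBake" property described in the talk.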
Whatever you find you want in your development environment; we get requests all the time for things people want added. That lets you use Vagrant and QEMU at the developer desktop to get a development environment that looks a lot like the fly-away environment. Vagrant gives you a provisioning stage that internally clones your project repository and sets up your environment, so the build-dev appliance is identical for everyone. We build Buildbot into it, just a service that gets started, so our Buildbot infrastructure is completely serverless. When we update the build-dev appliance, my DevOps guy and I go, "is it time to kill the bot? Yep, we're gonna kill the bot," and we go in and just delete it. The only thing we care about is the enterprise-hosted PostgreSQL database, goodness that I don't need to worry about; someone else gets to wake up in the middle of the night for that. Start it up, Vault provides the credentials, bam, off you go. We don't need to worry about our building burning down, which it almost did; you can check the Seattle Times for the Boeing headquarters building. I came out and it was on fire, and that led to a massive change in the way we manage all of our software. It's all fully serverless now for exactly that reason. It was a famous incident; just look at the Seattle Times and you can see it. The developer could technically do `systemctl start buildbot` if they wanted to. They don't, they're not told about it, and it's no big deal, but yes, they could start Buildbot, and we have Vault managing the credentials.

All right, so that is the baseline substrate we have the developers working in. Okay, let's see.
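The image layering described above is plain BitBake inheritance; a sketch of the pattern, with recipe and package-group names invented for illustration, might look like:

```bitbake
# baseline-image.bb -- features common to every variant (kernel, glibc, ...)
inherit core-image
IMAGE_INSTALL = "packagegroup-core-boot"

# operational-image.bb -- the fly-away image: baseline plus airline features
require baseline-image.bb
IMAGE_INSTALL += "packagegroup-operational-features"
IMAGE_FSTYPES += "lsap"    # the custom LSAP image type mentioned in the talk

# build-dev-image.bb -- developer appliance: baseline plus developer features
require baseline-image.bb
IMAGE_INSTALL += "packagegroup-developer-tools buildbot"
```

The point of the structure is that the fly-away image and the developer appliance share the exact same baseline recipe, so what developers run under Vagrant/QEMU stays close to what flies.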
Oh, and how we bootstrapped: we started out with Ubuntu 16.04 as sort of our abstract environment to build this, and we quickly built our first dev appliance using Pyro, in 2017. Then we used that dev appliance to build a Thud appliance — these, by the way, are the releases we've used for flyaways — and then used the Thud appliance to build a Hardknott appliance. You get the picture. We threw away the Ubuntu, and we don't use anything but our own stuff going forward. The next one, I think, will be Kirkstone. We'll see how things line up — things move very quickly and also very slowly in aerospace; it's hard to tell.

Okay, so the first thing is not controversial; we're just using an efficient tool within BitBake. We use the PREMIRRORS functionality so that our appliance is purposely tuned to disallow access through the Boeing enterprise proxy. Unlike a lot of companies, Boeing has an internal proxy — I can't SSH outside, it gets bounced; I call it the great firewall of Boeing. You can go through the web via a giant, massive proxy, but we've turned even that off in the build/dev appliance, so when a developer does a build it can't go out and grab source bundles. We've set the PREMIRRORS variables so it hits our own internal source registry.

It's for good reason. Even though we check in checksums for everything in the project repository, and we'd know if someone were meddling with it — I'm comfortable, I sleep well at night — if developers pulled from upstream it wouldn't be that big a deal. The problem is that in aerospace I am responsible for reproducing that build for as long as that airframe is in the air. How long does a 737 fly? Think about that — with MRO it could be as much as 50 years, right?
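As a conf fragment, that mirror-only setup looks roughly like this on Hardknott (the internal hostname is hypothetical):

```
# local.conf / distro conf fragment (Hardknott-era underscore syntax)
PREMIRRORS_prepend = "\
    git://.*/.*     https://gitlab.internal.example/source-registry/downloads/ \n \
    https?://.*/.*  https://gitlab.internal.example/source-registry/downloads/ \n"

# Fail the fetch rather than fall through to the upstream SRC_URI
BB_FETCH_PREMIRRORONLY = "1"

# For a total lockdown you can additionally set BB_NO_NETWORK = "1"
# once everything is in the download cache.
```

With `BB_FETCH_PREMIRRORONLY` set, a recipe whose source bundle is missing from the internal registry fails loudly instead of silently reaching for the internet — which is exactly the behavior you want when you have to reproduce the build decades later.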
So that has to be repeatable, and upstream source tends to disappear. It drives me insane the way people do npm packages — "just update to whatever the latest is and hope you don't have a Bitcoin miner in your application." That's happened, hasn't it? No: we pull down the sources and keep them locked up in our own registry. For a lot of reasons, but the big one is that we have to reproduce that build basically forever — I'll be long retired, but someone else will have to.

The other thing is that we mirror the upstream repositories too. We prefix them with "mirror-to-" so there's no confusion, but it should be pretty obvious what's happening. We have a script — only a project admin runs this — that pivots: it creates a bare clone and pushes the changes up to our own internal mirrors, so they are bitwise identical. It's not a daily process that runs. The whole purpose — the next slide talks about pinning — is that this lets us very carefully track and account for all the changes that occurred. These mirrors are cloned locally to the developer's build/dev appliance, they're pinned on a per-internal-distribution basis — we'll talk about pinning in a second — and they're updated by script, by the project admins, on a periodic basis. We review the changes between updates. So let me get into what I mean by that.

Pinning is a really important aspect. We used to have our own homebrew pinning system. Big thanks to Alexander Kanavin — I hope I'm pronouncing his name right — who came up with the setup-layers tooling. It's a really cool tool, and I've got a patch approved to make it idempotent, so you can run it as many times as you want and it doesn't make any changes unless it needs to. So what does it do?
Your setup-layers JSON, which you commit to your project repository, lists all of the layers you include in your build and all the mirrors you're going to use. setup-layers will automatically clone those from wherever you're hosting them internally — GitLab, for Boeing. It ensures the remotes point at the internal mirror — so if a developer went in there, repointed a remote, and did something, it will actually fix the remote back. And it checks out each mirror to the approved commit hash. That's what I mean by pinning: we pin to a hash so that our paperwork says this build was done with these hashes. It's one of those things that's really easy to explain to a regulator.

It's all built into the wrapper, so developers don't even know they're doing it. They do a build, and the meat of every clone gets checked out to its pin — no thinking required, no errors; it just works. Then, on a periodic basis, a project admin will re-sync from the upstream mothership to the mirror and — it's all done by script — update the setup-layers JSON file with the new commit hash at the head. Okay, so now you do a git diff on the setup-layers JSON. What do you have?
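For reference, a pinned entry in that JSON file looks roughly like this — the schema here is approximate, from memory of the upstream `bitbake-layers create-layers-setup` output, and the internal URL is hypothetical:

```json
{
  "version": "1.0",
  "sources": {
    "poky": {
      "path": "poky",
      "git-remote": {
        "branch": "hardknott",
        "rev": "<approved commit hash>",
        "remotes": {
          "origin": { "uri": "https://gitlab.internal.example/mirror-to-poky" }
        }
      }
    }
  }
}
```

An admin re-sync just bumps the `rev` fields, so the diff of this one file is the complete record of what moved.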
You have a before and after, right? And now, for the change control board, you can just show all the commits in each repository and review them one by one. If you do this often enough — depending on how quickly upstream is developing, it's usually every week or every two weeks; now that Hardknott is end-of-life, I don't think there's anything coming in — then at that point you just go through the change control board, pass them around to the senior developers, and ask whether anything is going to be a problem. Sometimes it is, and we'll do test builds and such before we commit the update to the setup-layers JSON. And if there's a problem, we can override it with a bbappend pin, right? Maybe a well-intentioned bug-fix update from 1.0.1 to 1.0.2 still causes problems. We can assess that, pin it back down with a bbappend, apply patches, do whatever we want.

But remember, with OpenEmbedded and Yocto, the distro always wins: you can override anything below you, but you cannot override bbclass changes. If there's a change in the build system itself, you can't fix that from the distro. So we have a way of managing that, which we call shimming, and I'll talk about it here.

Okay — shimming. Basically, in your project repository you maintain a set of patches for your mirror repositories. Nobody applies these manually; they're applied at build time — for anything you've decided doesn't work quite right, or that you're still testing, or that you're not ready to push upstream. Or, more importantly, for example, there's some stuff we had to pull from Kirkstone back into Hardknott — like cve-check, right?
We needed to pull some fixes to cve-check into Hardknott, and we pull those in as shim patches — Hardknott is, of course, EOL. I've considered becoming the maintainer, but I'm really busy; maybe I'll talk to Richard about that at some point. So this lets you maintain a set of patches. There's also some systemd functionality we had to recreate or add, and we're going to open source a bunch of these. Another side trip: Boeing has a really good OSPO now, and they've given us approval to push changes back upstream. So when I find the time, I've got a bow wave of changes we're going to push up to the project — things that helped us and that I think will help other people.

Of course, all these patches are managed by script; we'd never expect a developer to apply them by hand — that's just fraught. There are two schools of thought on how to apply them. Approach one: your script does a hard reset and a force clean, which guarantees you start from a clean slate, and then applies the patches in numerical order. Obviously you want to enumerate your patches — there's a directory layout for this — and anybody who's written bash knows you could apply them in five lines of bash. It's not that hard, and that simplicity helps you a lot.

The other approach — well, I'm the type of person who never felt comfortable getting a roof-mounted bike rack. And you know why, right?
I just don't trust myself. We've all seen those pictures — the tin can on your roof — and I love my bike. So I'm an approach-two kind of guy, because I'm a project leader, and a lot of the time I'm the one mucking about fixing something in bbclass functionality. I'll get tired late at night, do a BitBake build, and go "oh no, I've killed all of my work." So I don't do that; I do approach two, and in the wrapper all the developers use approach two as well.

There is an error condition, though. One out of ten thousand times something will go wrong — for reasons, who knows — where a patch fails to apply. We detect it and tell the developer: run this approach-one command, then just rerun your build. It's literally copy and paste, and it gets you out of trouble. It also means I know how to get myself out of trouble when I'm working on bbclass or core functionality changes.

So: patches are a stack — it's just a stack. The way approach two works is that you reverse-apply the patches in reverse order — three, two, then one — and you don't look at the return codes; whether each one reverse-applies or not doesn't matter. Then you forward-apply the patches in normal order, and there you do look at the return codes. If any patch fails to apply, you have a problem: exit with a non-zero return code and say, "hey, run those two commands from approach one, clean yourself up, and you're good to go." I don't want to do that automatically, because it's probably me who messed up — it's usually me — and maybe I'm just too tired at the moment.
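That two-pass dance is small enough to sketch. This is a reconstruction, not our actual wrapper code — the function name and directory names are hypothetical, and it assumes numbered shim patches (`0001-*.patch`, `0002-*.patch`, …) that apply with plain `patch(1)`:

```shell
#!/usr/bin/env bash
# apply_shims SHIM_DIR SRC_DIR -- "approach two": unwind, then re-apply.
apply_shims() {
    local shim_dir=$1 src_dir=$2 p

    # Pass 1: reverse-apply in REVERSE order, ignoring return codes.
    # This unwinds whatever shims were applied last time without a
    # destructive reset, so uncommitted developer work survives.
    for p in $(ls "$shim_dir"/*.patch | sort -r); do
        patch -R -p1 -s -f -r /dev/null -d "$src_dir" < "$p" || true
    done

    # Pass 2: forward-apply in numeric order, CHECKING return codes.
    for p in $(ls "$shim_dir"/*.patch | sort); do
        if ! patch -p1 -s -f -r /dev/null -d "$src_dir" < "$p"; then
            echo "shim $p failed -- fall back to approach one:" >&2
            echo "  git -C $src_dir reset --hard && git -C $src_dir clean -fdx" >&2
            return 1
        fi
    done
}
```

Run it twice in a row and you end up in the same place — the first pass quietly strips the previous application — which is exactly the idempotence the wrapper needs.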
So that's the approach for shimming. This is where it all comes together — where mechanism meets policy. Here's an example of using your build/dev appliance. The wrapper is just the `bb` command; it's in your `$PATH` — your build/dev appliance provisioned it there — and `bb` passes through to the BitBake command. Maybe we want the build graph for some image — operational-image-minimal, whatever; it's just a particular image, right?

In pseudocode, what this script does is first run a pre-flight check. There are always things you learn about running a virtual machine on desktop environments. For example, with VirtualBox, if you have more free space inside your virtual machine than you do on your desktop, you'll actually blow up your storage medium — your VMDK. So you want to periodically use zerofree to clean your image. Every time we learn something like that, we add a pre-flight check for the common error case. "Properly provisioned" is one of those — there's some provisioning I'll talk about in a second — and "sufficient resources" is another.

Then it runs the setup-layers command to make sure your mirrors are cloned and pinned — this is the command line it runs; project-conf would be your project repository, which was cloned in the provisioning stage of your build/dev appliance. It manages your shim patches for you. Then it sources your tooling into the child shell environment. Most people are familiar with this one — oe-init-build-env, right?
That's the one you dot in — the one you source, right? Sometimes, if you're doing something exotic, or you just want to use the build tools that the Yocto Project creates, you can have it source those in first. This is all project-admin-level stuff; the developers don't know about it. In this case, the last thing you do is source oe-init-build-env, give it your build directory, and — probably less commonly known — you can also tell it where your BitBake repository is. If you use the poky repository you don't need that; it's all done for you.

This also manages your pass-through variables. Remember, our main flyaway deliverable is called a loadable software airplane part, and these are uniquely numbered. Even a developer's desktop build has to be uniquely numbered — you cannot ever have two the same. So we use developer initials; mine would be "caw." When developers first log in to their build/dev appliance, it asks for their name and initials and saves them as dotfiles in their home directory — no big deal. Those are passed through. Another thing saved in your dot directory is what we call a tail: a monotonically increasing four-digit alphanumeric number, except you can't use I, O, Q, or Z. So I actually wrote a bash function that will increment zero through nine and A through Z, four digits, but skip I, O, Q, and Z.
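Here's a minimal reconstruction of that incrementer — my own sketch, not the actual Boeing script. With I, O, Q, and Z removed, the symbol set is exactly 32 characters, so this is really just a base-32 counter with an unusual alphabet:

```shell
#!/usr/bin/env bash
# 0-9 plus A-Z, skipping I, O, Q and Z: 32 symbols total.
TAIL_SYMS='0123456789ABCDEFGHJKLMNPRSTUVWXY'

# next_tail CURRENT -> prints the next four-character tail, or returns
# non-zero once the counter overflows past YYYY.
next_tail() {
    local tail=$1 i c rest idx
    for (( i=${#tail}-1; i>=0; i-- )); do
        c=${tail:i:1}
        rest=${TAIL_SYMS%%"$c"*}      # prefix before c ...
        idx=${#rest}                  # ... gives c's index in the alphabet
        if (( idx == ${#TAIL_SYMS} - 1 )); then
            # Last symbol in this column: wrap to '0' and carry leftwards.
            tail=${tail:0:i}0${tail:i+1}
        else
            tail=${tail:0:i}${TAIL_SYMS:idx+1:1}${tail:i+1}
            printf '%s\n' "$tail"
            return 0
        fi
    done
    return 1   # carried off the left edge: every tail has been used
}
```

For example, `next_tail 000H` prints `000J` (skipping I), and `next_tail 000Y` carries to `0010`.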
I did it in like 50 lines — I impressed myself. So all of those things are passed through as pass-through variables, all managed by the `bb` script, and any time we have to add a new pass-through, that's handled there too. Remember, we're not burdening a developer with any of this. On day one a developer just runs `bb bitbake <image>` — in fact, you can just run `bb bitbake` and it builds all the images. Literally the first thing we tell developers on day one — the documentation is like five lines — is: clone, deploy, put this in the coldest room in your house (you know why I'm saying that), and run it overnight. In the morning, as long as you didn't hit RAM starvation, you'll have a pre-cached build, and after that you just keep your shared state cache. You're good to go.

We have not experimented with a read-only shared state cache server yet, because it can hide build errors. It's getting a lot better — I assume it will eventually be perfect — but at Hardknott it's not perfect, so our nightly builds, our official builds through Buildbot, always start from a cleared cache.

I think that's it — oh yeah, and the last part is where the wrapper just passes through commands and arguments. Anything you would run on your desktop, it just passes along, so you can run any of the Yocto and OpenEmbedded commands you would normally expect to run.

Let's see — the developer workflow is pretty straightforward. On their desktop, git clone the project repo. Prior to that there are some pre-steps: on a Windows machine you install Git Bash and so on; on a Mac, of course, you get it for free if you just accept the developer license, right?
You do need to install Vagrant. Then it's just `vagrant up`. Sometimes I'll have to email developers to say we changed the provisioning a little, so just do a `vagrant provision` — all of our provisioning through Vagrant is idempotent. We use the persistent-storage plugin, and we actually added a Vagrant plugin that automatically manages the plugins, so developers never have to manage their plugins either. It's literally: clone the repo, and as long as you have Vagrant, just `vagrant up` — done, it's all done for you. Oh, and you've got to put your keys in — I can't do that for you. Then, via Vagrant, you can SSH in and use Vim — I do that a lot — or use Visual Studio Code through the SSH plugin; it's the same thing. And then you just run a command for your builds: `bb bitbake`, or `bb bitbake` plus the image you want to build. Straightforward, pretty easy. Okay.

The other thing: build everything from source. Don't accept binary blobs — ever. That's a panacea; it's not always possible, because there's licensing stuff, or a previous developer compiled a version of Python 2.4 for 32-bit and never left instructions on how to recreate it. That could have happened, who knows. So sometimes there do have to be blobs. But you don't want blobs, because you want to leverage the full power of BitBake. cve-check is a huge one, and manifests — you want to be able to tell what goes into your build. You want to be able to see what your build graph looks like — particularly, "why is this being pulled into my build?" It will tell you, all the way down to the do_-task level, which is mind-boggling. And then, of course, reusability.
I want to reuse that source somewhere else, in another build — you can't do that with a blob, especially for other architectures. Developers initialize their shared state cache overnight so everything is pre-cached, so it's not a big deal that everything is built from source. I've had people try really strenuously to tell me, "but you should have binary artifacts so you can just compose a development environment." They run off and — you are not solving the problem you think you're solving; you're actually making it worse. Everything you're telling me is solved by seeding your shared state cache overnight in one twelve-hour build. Then maybe delete it once a month or once a quarter if you're worried about hiding build errors. But otherwise, none of this composition stuff. That's why I'm not a fan of Docker for this kind of work: composition hides problems. If you build from source in your environment, I have an easier time telling a regulator that I know what went into my build. That is a hugely important piece. So: not a fan of composition tools, and of course, nightly builds always start from zero cache.

Okay, second to last: recipe generation is a really important thing, because everybody's got their own favorite build tool. Maven — Java is probably the worst offender; there are 75,000 different ways of building Java stuff and everybody's got their own way. We've had to build our own template scripts that figure out the transitive dependencies and create .m2 directories inside your build, in your sysroot, so that all the dependencies work. It's a real pain. npm support is built into devtool and recipetool, but in my perfect world there would be a lot of work put into automatic recipe generation for literally every build tool. Bazel should be one of them — Bazel's really popular, right?
You should be able to take any existing open-source project, run your template tool against it, and get at least a reasonably close BitBake recipe that works. It's very labor-intensive initially — a lot of the work we're doing now is building a library of tools to automatically generate recipes. I want to open source a lot of that as well; I think it would really benefit the community. So I'm looking for help, if anyone wants to work on that. Yeah, let's see.

And then, unsolved problems. Okay, so: distro switching. We have tens of distros, right? I talked about that. That means if I use kernel such-and-such in this distro and kernel such-and-such in that distro, it's not going to be easy to switch from one environment to the other. These are problems we haven't solved yet. Another variant: maybe I do have the same baseline features, but slightly different provisioned development environments. We want a seamless way for a developer to run a command and switch their provisioning to another distro to focus on it. It hasn't needed solving much so far because, as you can probably imagine, developers are attached to a distro; they don't need to move nimbly between distros. But the day is coming when they will — and I'm a technical fellow now, so I'm expected to be cross-sectional; I do need to be able to say, "I need to reproduce what you're seeing over there." So distro switching in the build/dev appliance environment is an unsolved problem we need to tackle.

Another one is internal meta-layer reuse. I talked about in-tree code: it solves a lot of problems, and it also creates a lot of problems when you point your IDE at a code base.
That's in-tree. With giant code bases in-tree, you're going to get a lot of what we call the red-squiggly problems, because if you've decomposed into separate libraries, the IDE can't reach across into another BitBake recipe. One of the cool things about meta layers is that you can rearrange where the recipes are located without breaking your build — but that will break your IDE connectivity. So we've had to do some really hacky stuff, like putting .gradle files in certain recipes-foo directories to tie things together, at least to some degree. I think there's a better way of doing it, and I'd like to explore that.

And then — yes, internal meta-layer reuse, which I was just getting to. That would be a situation where a large in-tree code base lives in a meta layer used by one distro, and another internal distro also pulls it in as a layer. When there's a disagreement, they have to patch things on their side, with bbappend patches. I would also like to be able to point an IDE at their repository and get a coherent picture of what the source code looks like. Right now we say: do a BitBake build all the way up to the do_patch task and then point the IDE at the work area, and you'll get a full monolithic source tree — it composes the source tree for you there. I think there are better ways to do that.

Also, for cert reasons, in order to stay abreast of, say, the Linux kernel once we get up to higher DALs, we need to be able to carry massive numbers of patches but have an IDE present them as a coherent code base. A lens, I call it: you look at what appears to be a coherent code base, but what's really underneath is a pile of patches.
There's a real need for that kind of tooling for this kind of work.

Okay. So, that is my email address. We are hiring software engineers. We are hiring Linux software engineers. We are hiring embedded Linux software engineers. We are hiring architectural embedded Linux software engineers. There are some really cool opportunities — I just put the general URL up there, but if you're genuinely interested, send me an email and we'll talk. I'd like to hear from you if you're interested in doing this kind of work. I love it. My father worked for Boeing, my father-in-law retired from Boeing, and the son of the guy who shepherded me through to the fellowship works for Boeing. No, it is not a nepotism party — there is a very high bar to get into the company. It's actually a great place to work, so it really attracts people, and people don't want to leave once they get there. So, with that — sorry if I took all the time — I'll entertain questions.

Q: Just wondering if you've been playing around with the automatic SBOM generation out of Yocto so far.

A: We're using Hardknott now, so we are backporting the SBOM bbclass functionality — actually, there's probably a developer working on it literally right now — and that will be a shim patch that we shim in. Yes. We have DO-356A, which is the cyber cert, and we're getting a lot of pressure now to show that. I'd be interested in any feedback you have — absolutely, because I think Joshua is interested, as is Richard. Anything you're seeing, and if you see things that are weird, we want to know. Thank you, I will.

Q: You talked a lot about shimming, and I was wondering if you considered — and probably discarded — the option, given that you're already mirroring the repository, of just committing on top of the latest of the repository. So have your own committed patches on top of the branch.

A: Yep. Absolutely.
Q: And then you just rebase underneath it the whole time, and push it up.

A: Yes. There's nothing wrong with doing that — I have absolutely no problem with it; I think it's a perfectly good way of doing it. Again, I'm trying to keep it so simple and easy to explain to the regulator: this mirror is identical, bit for bit, and I can prove it — except the name is slightly different. It's along those lines.

Q: We've got time for one more question. Do you mirror your Vagrant plugins and those kinds of pieces of your dev environment, or do they come straight from the internet?

A: No, no — nothing ever comes straight from the internet. A project admin will get it in there, and we'll do all the due diligence required to get it into our registry, but literally nothing is pulled from the internet when a developer does their deployment. It used to be that they had to apply the plugins themselves, but they don't even do that now — we have a Vagrant plugin that applies the plugins, so that takes care of all of it for you. All right, thank you very much. Oh, do we have one more question?

Q: Is your distro binary-reproducible?

A: Yes, but — even when you build a root filesystem, with a multi-threaded build it's impossible to have the SHA-256 sum reproduce each time. So they're reproducible to the extent that a thread may land sooner in one build than in another; otherwise, yes, we have all the reproducibility options turned on. It's not a priority for us, because at DAL D and E we don't qualify any of our tools — we only have to show evidence of functional tests passing. Thank you, that's a very good question.

Okay, thanks. And if you want to talk to me afterwards, I'll be happy to.