Hi, everyone. I'm John Mason. I work for Arm, and this talk is about using Yocto as a method to upstream, maintain, and track patches. Hopefully that's interesting for everyone; if not, feel free to run away. I think this is the last slot before the coffee break at the end, so you've almost made it. Sorry.

So what's the problem? The problem is SoCs have multiple open-source components. You've got the kernel, you've got U-Boot, you've got ATF, you've got graphics. You may even have some other open-source pieces, and maybe even some proprietary pieces, that all need to fit together. And the parts that are open source, or that you want open sourced, you want to be able to upstream and maintain, hopefully as simply as possible. Now, would you believe me if I told you that using Yocto as a way to do this would be simple and easy? Does anyone believe that? Anyone think I'm a liar? Okay, cool. Well, hopefully I can prove you wrong, or I'll make a humongous fool of myself. One of the two will happen.

So this is the board that I used as a test of this. It's a FriendlyARM, and I'm sure that's some kind of copyright infringement, but it's from some no-name company in China and it was super, super cheap. I think I got it off AliExpress for like 15 bucks. But it has a Cortex-A53, which I thought was pretty powerful and something to play around with. And it has a vendor-supplied "Ubuntu", and I'm throwing quotes around it because it's not really Ubuntu: they took the Ubuntu images and then threw their own kernel and stuff under it and still called it Ubuntu. It's running a 4.4 kernel and U-Boot 2016.01, which is pretty ancient, but not terribly ancient. And I used it as a guinea pig.

So the groundwork: how do we get this actually working? Step one is to conform it to Yocto. We need to get it working under Yocto, migrate to generic sources, split into patches, and push the patches. Seems pretty simple, right? So, getting it to work under Yocto.
So, three steps: commit it to git trees, write the recipes, and verify that it actually works. First, commit the code to git trees. Some vendors are super cool and will actually comply with the spirit of the GPL and give you a nice git tree, maybe even hosted somewhere like GitHub where you can get it very easily. Other vendors just give you a tarball, if you ask nicely and do the secret handshake. If you have a tarball, you can just throw it straight into a git tree; you don't need to do anything special, because the fun part comes in a minute. You just look inside there, and pretty much all major open-source projects will have the version in the Makefile. If you look at that version, you can find the upstream base: it will be the tag that matches the version. Almost all of them do this, and it's really, really easy.

So, writing recipes. This is the hardest part, and if you've used Yocto, you could argue it's not that hard. If you've never used Yocto, or think Yocto is very difficult, then it is difficult. I think I just talked myself in a circle. For every binary that you have, you need to essentially wrap them all together: you have a recipe for every binary that is created, and then you have a WIC file, in Yocto parlance, which will put it all into a single bootable image. Oh, by the way, any questions at all, please blurt out. Otherwise I'm going to run through this super fast and I might run over the question that you have. And there's a microphone somewhere. So, does anyone not know what a meta layer is? Do I need to cover what a meta layer is? Raise your hand if you would like me to cover what a meta layer is. I see no hands. Okay, cool. Yeah, so creating a unique meta layer is probably the easiest way to do this. That way you can hook into what's already there in Yocto and keep all your pieces in the same bucket.
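The tarball-to-git-tree step can be sketched as below. This is a minimal, self-contained demonstration, so the "vendor tarball" is faked on the spot; the file names and version values are invented stand-ins, not the talk's actual vendor drop.

```shell
#!/bin/sh
# Sketch: import a vendor source tarball into a fresh git tree, then
# recover the upstream release from the Makefile version fields.
set -e
work=$(mktemp -d)
cd "$work"

# Fake a vendor tarball for demonstration: U-Boot's top-level Makefile
# carries VERSION/PATCHLEVEL fields that reveal the upstream release.
mkdir vendor-uboot
printf 'VERSION = 2016\nPATCHLEVEL = 01\n' > vendor-uboot/Makefile
tar czf vendor-uboot.tar.gz vendor-uboot

# Throw the tarball contents straight in as the first commit of a new
# git tree -- nothing special needed.
mkdir import && cd import
tar xzf ../vendor-uboot.tar.gz --strip-components=1
git init -q .
git add -A
git -c user.name=me -c user.email=me@example.com \
    commit -qm "Import vendor U-Boot drop"

# Read the version back out: the matching upstream tag is
# v<VERSION>.<PATCHLEVEL>, here v2016.01.
ver=$(sed -n 's/^VERSION *= *//p' Makefile)
patch=$(sed -n 's/^PATCHLEVEL *= *//p' Makefile)
echo "v${ver}.${patch}"
```

With a real vendor drop you would of course skip the faking step and just extract, commit, and read the Makefile.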
What I think is a huge benefit of this is that you can have everything self-contained in that layer, and you can reuse it if you're doing it for a whole series of boards. The board I just showed you has the same SoC on, I think, 14 different boards. So all it takes is a different machine, which is using the exact same git trees for the kernel, for U-Boot, for ATF, and everything else. The first setup took a few hours; each additional one took minutes, because you're using the exact same thing and just changing a few names. It's pretty trivial. And if there are other boards based on a similar SoC from different board vendors, it's almost trivial to add those too. So the board that I referenced previously, the Samsung Artik, is using the same SoC, so it should be almost trivial to throw in support for that, because it's going to be the same kernel, same configs, same U-Boot, same ATF, all that stuff.

So here are, essentially, the recipes that I did. The kernel recipe is pretty straightforward. You can see that the SRC_URI, that git tree, is essentially a direct copy of their git tree, which I made simply in case they decided to move it somewhere else. A couple of configs, a SRCREV, all that stuff: this is a pretty standard kernel recipe. The U-Boot recipe is very similar, and here I'm actually still referencing their git tree. And then this is a fun piece. A vendor may be legally obligated to supply the source, but that doesn't mean they always supply it, even when you ask. So I emailed them and said, hey, I see this L-loader is GPL, can you give me the source? And I got nothing from them. And then I emailed them again, and I still got nothing from them. I never even got a response telling me where to go.
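The slides themselves aren't reproduced here, but a vendor-kernel recipe of the kind described looks roughly like this. Every value below (the URL, branch, SRCREV, machine name) is an invented placeholder, not the actual recipe from the talk; the checksum is the COPYING md5 commonly used for 4.x kernels and should be checked against your own tree.

```bitbake
# Sketch of a minimal vendor-kernel recipe for a unique meta layer.
SUMMARY = "Vendor kernel for the example board"
LICENSE = "GPL-2.0-only"
LIC_FILES_CHKSUM = "file://COPYING;md5=d7810fab7487fb0aad327b76f1be7cd7"

inherit kernel

# Direct copy of the vendor's git tree, in case they move it later.
SRC_URI = "git://github.com/example/vendor-linux.git;protocol=https;branch=vendor-4.4 \
           file://defconfig"
SRCREV = "abcdef0123456789abcdef0123456789abcdef01"
LINUX_VERSION = "4.4"

COMPATIBLE_MACHINE = "example-board"
```

A U-Boot recipe follows the same shape, with `inherit` of the U-Boot class and its own SRC_URI/SRCREV pair.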
So one cool thing you can do with Yocto: if you can find the binary somewhere, you can just write a very simple recipe to pull it out of an existing tarball or git tree or whatever and put it right into your image. So, does that sound easy? Anyone buying that it might actually be easy yet? No? Okay. And then verify that a bootable image is created. If you have automated testing, that's a good thing and you should be using it; hopefully everyone believes that, at least. Okay.

The next big piece is migrating to generic sources. git rebase is fantastic and awesome and everyone should know about it, because it will save you hours and hours and hours. Essentially, to do this, using the U-Boot recipe from before: you just clone the upstream U-Boot, go in there, do a git remote add, and this will hook the upstream sources and the ones the vendor is supplying into the same git tree. Then of course you need to fetch it. In this example I was simply taking that and pushing it to my own GitHub account. It will make more sense in a second, because then all you need to do is change the recipe to point to your tree, the one in my GitHub, not using theirs anymore.

And now is the fun part: the rebase. As I was saying before, you do the git rebase, and essentially what it will do is replay all the commits it has, which may or may not be interleaved with the current tag, on top. So at this point, on the second step, you will have, let's say, 100 commits that they have on top of that v2016.01 tag. Right now you have essentially the entire history of commits, and then you can use git rebase -i to squash them into a single commit, which is a good thing, and then push it up. So as of right now, you should have the exact same code that you had before, organized slightly differently.
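That remote-add-fetch-rebase-squash flow can be sketched end to end. To keep it self-contained, both the "upstream" and "vendor" repositories are tiny toy trees built on the spot; all names and paths are invented stand-ins for the real upstream U-Boot and vendor trees.

```shell
#!/bin/sh
# Sketch: stitch a vendor tree onto upstream history, then squash the
# vendor delta into a single commit on top of the upstream tag.
set -e
G="git -c user.name=me -c user.email=me@example.com -c init.defaultBranch=master"
work=$(mktemp -d); cd "$work"

# Stand-in "upstream": one commit, tagged like an upstream release.
mkdir upstream && cd upstream && $G init -q
echo base > file && $G add . && $G commit -qm "upstream base"
$G tag v2016.01
cd ..

# Stand-in "vendor" tree: the upstream release plus three vendor commits.
git clone -q upstream vendor
cd vendor
for i in 1 2 3; do
    echo "hack $i" >> file; $G add .; $G commit -qm "vendor hack $i"
done
cd ..

# The workflow from the talk: clone upstream, hook the vendor tree in
# as a remote, fetch it, and check out the vendor branch.
git clone -q upstream combined
cd combined
git remote add vendor ../vendor
git fetch -q vendor
git checkout -q -b vendor-work vendor/master

# Squash everything past the upstream tag into one commit: a
# non-interactive stand-in for "git rebase -i v2016.01" (fixup keeps
# the first commit's message and folds the rest in).
GIT_SEQUENCE_EDITOR="sed -i -e '2,\$s/^pick/fixup/'" \
    $G rebase -i v2016.01

git rev-list --count v2016.01..HEAD   # the whole vendor delta, one commit
```

From here the combined tree would be pushed to your own hosting and the recipe repointed at it.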
And you can build it, with the modification from a second ago, and no recipe needed to be changed. Now, you could use a more modern tag, like 2019.01, but that requires actually replaying everything on top of it, and if anything has changed out from under it, and in three years many things have changed out from under it, you'll have a fun little rebase ahead of you. For every patch that does not apply cleanly, it will stop and say, hey, fix this up. Hey, fix this up. And if you have 300 commits, that gets old really quickly.

So, the kernel recipe that I showed a second ago: to change that, the kernel recipe actually needs some extra logic in it. SRCREV, if you don't know, essentially says: give me the SHA of the commit that you want to check out. And AUTOREV says: give me the topmost one. So this way, instead of saying, I want this specific one, you say, just give me the top. That way, every time you rebuild, you'll be building the latest and greatest. So if you update your git tree, you'll have the latest and greatest every single time. And we'll get to that in a second. Unfortunately, that starts throwing an error saying, hey, your kernel versions don't match. You can avoid that sanity check with the second highlighted line, which says: don't do that sanity check.

Okay. And then, upstreaming: you now want to split it into individual patches. The reason you want to do this is that, as of right now, you have a humongous squashed patch of every driver change, every core change, and anything else, all in a single atomic patch. If you send that to any mailing list, to any maintainer, they will immediately tell you where to go, because it's not acceptable to them. And also, if you were trying to change a network driver and a storage driver, you're going to have different maintainers looking at that.
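The SRCREV/AUTOREV change described above amounts to a small recipe fragment like the following. The version strings are illustrative; `KERNEL_VERSION_SANITY_SKIP` is the kernel-class knob that silences the version-mismatch check being described, though which line was highlighted on the slide is my assumption.

```bitbake
# Track the tip of your own kernel branch instead of a pinned SHA.
SRCREV = "${AUTOREV}"

# With AUTOREV, embed the revision in the package version so every
# rebuild picks up new commits on the branch.
PV = "4.4+git${SRCPV}"

# Skip the "kernel version doesn't match" sanity check that fires when
# the rebased tree reports a different version than the recipe expects.
KERNEL_VERSION_SANITY_SKIP = "1"
```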
And that means even if one is cool with it, another one is not going to be, and you're going to have to re-spin it every single time. The network maintainer is going to be tired of looking at it after four times with no changes. So you need to split it up into individual chunks to upstream.

So now we're actually getting to why, hopefully, all of you showed up: using Yocto as a way to actually upstream and maintain it. How do you upstream with Yocto? There are two easy ways, and I think one is actually easier than the other. The first is using a git tree that's referenced in the Yocto recipe. As I showed before, you can just point the previous recipe, for the kernel for example, at your own git tree. You can change that on GitHub and never modify your recipe, and each time you build, it'll keep updating, keeping the latest and greatest every single time. Alternatively, Yocto has the patches directory inside the meta layer that we were talking about earlier, and you could have that full of, say, 400 patches that you would have to individually keep track of in your recipe. It's not that hard, because it's an individual line each, but it's not particularly pretty either. So I think number one is the better way to do it.

Using the kernel again as an example: every single time there's a tag released, and I think rc5 just came out for 5.3 or whatever, you can simply do a git rebase on top of that. It'll replay everything, and you don't even need the -i. It will replay it all on top, and any patch that has been accepted upstream will now just silently fall under that tag. So let's say you had 20 patches that had yet to be accepted, and then you git rebase and five of them have been accepted. Without doing anything on your own, those will naturally fall under the tag. It will still build, and you'll still have the remaining list on top of it. And then you can just use git send-email.
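The "patches silently falling under the tag" behavior can be demonstrated concretely: once upstream has taken one of your patches, a rebase onto the new tag drops it from your stack automatically. The repositories, tags, and patch contents below are toy stand-ins built just for the demonstration.

```shell
#!/bin/sh
# Sketch: a pending patch that upstream accepts disappears from your
# stack on the next rebase, because git skips already-applied patches.
set -e
G="git -c user.name=me -c user.email=me@example.com -c init.defaultBranch=master"
work=$(mktemp -d); cd "$work"

# Stand-in upstream: one commit, tagged v1.
mkdir upstream && cd upstream && $G init -q
echo base > a && $G add . && $G commit -qm base && $G tag v1
cd ..

# Our tree: tag v1 plus two pending patches.
git clone -q upstream ours && cd ours
echo one > b && $G add . && $G commit -qm "patch 1"
echo two > c && $G add . && $G commit -qm "patch 2"

# Upstream accepts patch 1 (the identical change lands there) and
# tags a new release, v2.
cd ../upstream
echo one > b && $G add . && $G commit -qm "patch 1 (merged upstream)"
$G tag v2

# Rebase our stack onto the new tag: git notices patch 1 is already
# applied upstream and silently skips it, leaving only patch 2.
cd ../ours
git fetch -q origin --tags
$G rebase -q v2
git rev-list --count v2..HEAD   # only the still-pending patch remains
```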
Everyone's familiar with git send-email? Anyone not familiar with git send-email? Okay, so git send-email is a very cool wrapper. Essentially you give it a range, say, using U-Boot again, v2019.01..HEAD, and you will have every single patch that's there, and git send-email will email every one of those to a specified mailing list or person or whatever. Back in the olden days you would use git format-patch, and it would spit out a series of patches, and then you could use your email client to send those out individually. This is just a nice wrapper for that.

So yeah, just like I was saying a minute ago, as each patch is pulled in upstream, git is automatically going to skip over it when you rebase, so each replay is going to have a shrinking and shrinking list. Of course, as you get feedback from the mailing list, you're going to be modifying these, but it's not that ugly. If you want to use SRCREV, you're going to have to update that constantly in your recipe. AUTOREV is a great way to get around that, but it also has its problems, because you're going to be tracking the head, and you may not want to do that for various reasons. And if you're keeping all those patches in the patches directory, you're going to have to regenerate them every single time, and that's going to be problematic.

So, using Yocto to maintain. You've already gotten all your patches upstream, because it was easy, right? And now you have the maintenance phase, where you're getting patches from your internal people, or from various people upstream, that you pull into your local git repository and then send out. So your git tree is still all you really need to care about, and using the same tools as before, git rebase and git send-email, you're going to churn that patch stack up and down. And then another thing, wow, I went fast. Hopefully everyone knows about the OE layer index.
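The format-patch/send-email step looks like this. The demo repository, tag, commit subject, and mailing-list address are all examples; only the format-patch part actually runs here, since send-email needs a configured SMTP server.

```shell
#!/bin/sh
# Sketch: turn the commits on top of an upstream tag into mailable
# patch files, one numbered file per commit.
set -e
G="git -c user.name=me -c user.email=me@example.com -c init.defaultBranch=master"
work=$(mktemp -d); cd "$work"
$G init -q
echo base > a && $G add . && $G commit -qm base && $G tag v2019.01
echo fix > b && $G add . && $G commit -qm "board: add example fix"

# One numbered .patch file per commit since the tag:
git format-patch v2019.01..HEAD

# git send-email wraps the same range and mails each patch directly,
# e.g. (not run here):
#   git send-email --to=u-boot@lists.denx.de v2019.01..HEAD
ls *.patch
```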
If you actually create a meta layer, the OE layer index is essentially a website that contains every meta layer ever, and adding something to it is trivial. So when you do your own personal meta layer, you push it up there and everyone can use it, and it's pretty cool. And that's where you can find the code that I've done. Sorry, I ran through that. So, questions. Hopefully there are lots and lots of questions, because if not, you get to go to coffee early. Any questions at all?

[Audience comment, partially inaudible] Is that only for Yocto, or can that be used for everything?

Yeah, so for the video feed, the statement was that there is a wrapper inside Yocto to generate the pull request for the Yocto changes, which essentially would be the core changes and maybe meta layer changes. So that's really cool. Other comments, questions, points to ponder? Cool, then that's it. Thanks, everybody.