Good morning. Welcome to the Embedded Linux Conference. My name is Dexter Travis. I'm the kernel maintainer and Linux Yocto BSP developer for my company, Precision Planting. What I've got prepared today is what I call investing in mainline Linux. It's the story of our growth and our change, from the infancy of our relationship with Linux all the way through where we are today, developing and integrating a more mainline, more tip-of-the-spear Linux into our products.

Who is this message for? This is my vision of the continuum of Linux developers. On one end are the kernel developers, the folks who are in the development of the kernel on a daily basis. They're developing new drivers for new pieces of silicon, new features, new file systems. They're not so much focused on a whole system. On the other side of the spectrum are the consumers. These are the people who are using Linux in things like thermostats but don't even know what Linux is; they just know that they want a smart thermostat. And in the middle are what I call integrators. Those are people like myself: the people who take Linux from the developers, from the driver developers, from kernel.org, integrate it into a useful product that adds value for the consumer, and sell it. That's where we get products like the ones you see here before us. Those integrators are predominantly the people I want to talk to today as I share the story of how I grew and how our corporation grew in our integration of Linux.

Here's how we're going to break this down. Part one is system on modules versus chip-up designs; that's hardware focused, and we'll spend a little bit of time there. Part two is what I call going mainline: our journey from vendor Linux and kernels from other sources to taking it straight from kernel.org, and maturing in the way that we handle Linux. And finally we'll have a couple of short takeaways and an opportunity for you all to ask questions.

A system on module (SOM), as you see here in the picture, is a prepackaged circuit board that you buy from a vendor. It's got your processor, your DDR, your storage, and usually some accessory chips: power management, maybe a touch controller. It's a ready-to-go Linux system on a board, and you integrate that into your final product. A chip-up design, on the other hand, is where you start from the beginning in schematic capture, choose each of those individual components yourself, glue them together, and build the entire system from the ground up.

I want to share a story of what happened to us with a system on module as we evolved and grew in this process. What you see on the screen now is actually the previous generation of the product I have here in front of me; we called that the Gen 2, and this is the Gen 3. The Gen 2 product had a resistive touchscreen. Our season is dominated by the North American spring planting season, and while we were manufacturing the Gen 2 during that season, a new lot of SOMs came in and our manufacturer built them up into finished goods. When we got to the end-of-line test, those devices failed the touchscreen test. The manufacturer contacted us and we had to investigate.
Meanwhile, our growers, our farmers, our customers are trying to plant corn. As we investigated, we found out that our SOM vendor had changed the touch controller IC without notifying us. It was a pin-compatible device, but it wasn't fully software compatible, in that we needed to change our kernel configuration to enable the new chip. The fix was pretty simple, but the ramifications were significant for us: a line-down situation at a critical time of year, because the SOM vendor didn't communicate this change to their part of the system ahead of time. That led us to desire more of a chip-up solution, where we had control of each individual component and better granularity and visibility into those changes before they hit our manufacturing process.

So, pros and cons; it's a big list. I've touched on flexibility and granularity, change control, and cost. I also want to talk about manufacturing. With a system on module, you have a set of manufacturing constraints where you typically require a little more human interaction: to install the module, screw it in, put on standoffs, do something to secure it to the system so that it doesn't shake out. That matters especially in our situation; a commercial farm is a high-shock, high-vibration world. On the other end of the spectrum is manufacturing with a chip-up design, where with modern processors, modern DDR memories, and modern eMMCs, you're almost certainly going to be working with BGAs and some very delicate surface-mount techniques, and you're almost certainly going to need a manufacturing partner who can handle X-ray inspection. We partnered with our manufacturers and actually worked with them to upgrade their surface-mount lines so that they could do this level of manufacturing with us and for us. So those are the trade-offs.

As a result of that change to a chip-up design, this system, which is actually a two-part system with a command and control unit and the visualization unit you see here, has a total cost of goods very similar to the original Gen 2 system I showed you earlier, primarily because we were able to reduce the cost of the human labor needed to assemble the system during manufacturing.

Here's another way I like to look at this: product families. Again, our first-generation product was built around a system on module, and we were locked into what that configuration could do for us in terms of processor, RAM, and storage. When we went to a chip-up design, particularly with the displays you see here, this is our 10-inch display. You also see on the screen the 16-inch display we've developed and an HDMI box we've developed. All three of those, plus one more that isn't a photograph but just a rendering, one of which is still in research and development, are based around the same product family. They're all based around the same chip-up design, where we started with the processor, the DDR, and the eMMC, but we've flexed that into different configurations where we can now create four different products.
Some of them are based around a dual-core processor with graphics. The 16-inch is based around a quad-core processor with enhanced graphics. The HDMI box is a very low-volume one; it's just for our dealers and for our training and sales people to connect to projectors so that they can train our customers. It's a low-volume unit, but it shares so much of the same componentry and the same design that we can continue to do it as a chip-up system. And the last unit has no display at all; it's focused on connectivity and memory, on storage space. All four of those leverage the same chip-up design we created in the beginning, even though they fill different niches in our product line.

So, to wrap this part up, where would I suggest you use a system on module? Basically: internal designs, ultra-low-volume systems, and development platforms and betas. We have a project where we're doing research and development on some new technologies, and we don't know if those technologies have any application in the field for our customers, for our growers. But we would like to examine them, develop them, and discover for ourselves whether or not those applications exist. So we created a board around a system on module to do that research. It involves FPGAs, it involves cameras, it involves things we've never dealt with internally before. By using a system on module, we can learn, grow, and figure that out, and not do a full chip-up design until we decide whether it has actual application in the field for our farmers. But again, chip-up is right pretty much everywhere else. The takeaways here: invest in the designers, invest in your schematic people, and invest in your manufacturing partnerships.

I want to transition now to what I call going mainline: the difference between where we started, with vendor kernels and kernels from other sources, and the mainline kernel, where we ended up. Vendor kernels promise you this guy, right? They promise you Super Tux, the armor and the big muscles; they promise you that this guy is going to solve all of your problems with Linux. What I find they tend to deliver is Old Man Tux, and Old Man Tux is not the one you want. In our experience, all of those vendor kernels were at least one, often two, maybe even three LTS cycles behind the mainline kernel. What you want is Tux, plain kernel.org Tux. You don't want an adjusted Tux or a streamlined Tux. You just want the real deal.

As we progressed from that binary or vendor kernel all the way up to where we are now, using mainline kernels, we developed a system I like to call the kernel maturity model, and it goes like this. At KMM level zero, you're pulling down a binary kernel from a vendor. That might be a binary BSP delivered as a tarball download, or it might be that your system on module is something like a Raspberry Pi or a BeagleBone and you just use the whole distro that comes out of the box with those systems. Level one is vendor kernels, where you're getting git from the vendor and you're a little more advanced. Level two is a release kernel, where you go from one kernel.org release to the next release to the next, but you're not yet ready to follow master.
Finally, levels three and four are where you start to follow master and then contribute back to the community.

Level zero, as I said, is a fully canned Linux. It might be a pure binary image or a tarball download, no git required. You tend to get lots of support here from the vendors, and that's because you don't change the Linux, you don't customize it; it's very much their system, their Linux. Like I said, it might be something like a Raspberry Pi or a BeagleBone where you run the whole operating system as they give it to you, and therefore the sandbox you're in is very well intact and the support is reasonable. It is, however, tied to that system on module or that chip vendor, and software support can be very restricted at this point.

KMM level one, kernel maturity model level one, is what I like to call the support donut hole. What happened here is that we wanted new features from the kernel, and one of the ones that drove us was the F2FS file system. We had experienced performance problems with ext4 on eMMCs and on SD cards, and to fix that, we went to F2FS. As part of that process, we found that the released BSPs we were using at level zero didn't support the latest features of F2FS. So we went to the git trees of the actual processor vendors, places like git.ti.com or the Freescale and Xilinx equivalents, and pulled down the latest kernel from them that hadn't been released into one of their BSPs yet. And what we found was that we were in a no man's land. The upstream kernel community was not able to support us because we were running a vendor kernel. The vendors didn't want to support us because we weren't running the kernel they had outlined for us; we were no longer in their sandbox as they've defined it.

So this is what a support conversation looked like for me. The excerpt at the bottom, which you may not be able to read, is an actual excerpt from one of my conversations on the TI E2E forums where I had issues with USB. I was trying to figure out how to get the system to work using a particular USB port on our custom board. I was using a 4.4 kernel, and their official released kernel was 4.1. So I asked my question and explained my situation, and the response I got was very much like this: hey, try our defconfig, try our official branch, which at the time was 4.1. I was trying to use their unreleased 4.4, and mainline was already on 4.9, but they wanted me to go back to the 4.1 official. And then they wanted me to try it on their EVM. So now we're eliminating all of my hardware, all of my software, all of my configuration; effectively, they're trying to put me back in their sandbox. That's fine, and I understand why they do it, but in terms of getting my product out the door, this was not the most helpful response.

So, level two. This is where I went next. I decided that getting the kernel from a kernel vendor was not necessarily the most effective option in terms of getting support. One of the ways this went down is that as we evolved this product right here, we had an additional need for GPU and processing capabilities.
And we learned that the mainline kernel actually had open source drivers for this GPU that were ahead of, and in some cases performing better than, the binary blob drivers available in the vendor BSP. So we went to mainline. We had an excellent community support interaction there: I had some issues where OpenGL wasn't working with Qt, I went to the Etnaviv community, and I got great support. They helped me migrate from the vendor kernel to the mainline kernel on a particular release branch; I believe it was 4.14 at that time. What happens here is that you forgo the vendor Linux software support. At this point you're in mainline, getting community support from Linux people, and you have to decide to lean on your silicon vendors only for silicon support. I'll talk more about that later.

This gave me a false sense of stability and longevity, because at this point I was very stuck on Linux releases. I felt like master was for the kernel dev people who were in the kernel every day; I didn't think the kernel master branch was for integrators like me who had to ship a real product. So I had this process where I went from release branch to release branch, and I want to show you what that merge process looked like as I tried to upgrade from one version of the kernel to the next.

What I'm going to do is a git status, just to show you where I'm at: I'm on my kernel 5.4 branch. Then I want to show you my remotes; I've got a kernel.org remote and my own internal PP dev remote. I'm going to fetch the kernel, and then fetch the tags, and you'll see that we have a new 5.7 tag here. So I'm going to go from my 5.4 branch and create a new PP Linux 5.7 branch, and then try to merge from what used to be my 5.4 branch to the new 5.7 branch. This is going to take a few seconds because, frankly, this is a big merge. And what we get here is basically a big merge mess. Thanks to my friends at Stack Overflow, who gave me this command here, I can count the number of unique files that failed to merge, and we end up with 702 unmerged files for which I now have to manually decide whether to take the 5.4 version or the 5.7 version.

As you can see, that's not an effective way to migrate between kernel releases. In fact, we did this for probably 18 months to two years, through several cycles, and it took me weeks at a time to migrate, because I felt like I needed to support one LTS for an entire season and then jump directly to the next LTS. I felt like master was unstable, that master was not secure; frankly, I was afraid of master. That's how I ended up in this world where I was jumping all the way from a 4.1 to a 4.9, or even a 5.4 to a 5.7, and spending ridiculous amounts of time trying to complete those merges.

That led me to where I'm at today, which is kernel maturity model level three. This is where I'm following Linux master and shipping long-term support releases. We most recently shipped 5.4, and we're going to support that all the way through our next release in 2021. However, I'm now doing it a little bit at a time; I'm doing merges even on a weekly basis sometimes.
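For those following along at home, the painful release-to-release merge I just demoed looks roughly like this in shell form. The remote and branch names follow what I described (a kernel.org remote and our internal PP dev remote); the exact invocations are my reconstruction, not a literal capture of the demo:

```
# Reconstruction of the big release-to-release merge (names assumed).
git status                        # confirm we're on the old product branch, pp-linux-5.4
git remote -v                     # kernel.org remote + internal pp-dev remote
git fetch kernel.org              # fetch upstream history
git fetch kernel.org --tags       # fetch the new release tags, including v5.7

git checkout -b pp-linux-5.7      # new product branch, starting from pp-linux-5.4
git merge v5.7                    # merge several releases' worth of upstream history

# The Stack Overflow trick: count the unique files still in conflict.
git diff --name-only --diff-filter=U | sort -u | wc -l
# => 702 files to resolve by hand
```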
The unfortunate thing here is that when I started this position about five years ago, the company sent me to Linux Foundation training for Yocto and embedded Linux, and during that time the trainers pounded it into us that we should follow master. We should follow kernel.org master; we should pull and merge master as frequently as possible; master, master, master was the place to be. I had to go and spend the last several years learning that myself the hard way. So this is where I've ended up, which is where I should have been in the first place, and I hope that my experience will help someone out there shortcut that path and get themselves more directly to a place where they're pulling in master frequently and running with master.

This spring, we had a situation where our customers had acquired USB sticks commercially, which we use to update our system and export files and data. USB sticks are now coming onto the market formatted with the exFAT file system, which we did not support in our kernel configuration. So our customers were calling into product support saying, hey, I've got this USB stick and it doesn't work. Product support filtered that down to us, and we did some investigation and some learning, and we discovered that, yes, those sticks with exFAT file systems won't work with our kernel. But we were able to have a conversation where I could say: it doesn't work today, but it's coming soon. It's in staging, it's experimental, and it's going to be promoted out of staging very soon into the kernel proper. And by the way, if you'd like me to turn it on with our existing kernel, I can do that. We made an executive decision that in the middle of spring, in the middle of our busiest season, we wanted to make as few changes as possible, but we'd keep an eye on this for the summer, when things cooled down and we could make that change.

A few weeks ago, the kernel 5.7 release came out, and the exFAT file system went from staging to a full mainline component of the kernel. That happened on a Monday; I noticed it when I pulled the kernel as I came to work. On Tuesday, I decided to turn on the configuration for exFAT and make a build of our entire system. At that point we were running exFAT USB drives on our system within basically 24 hours of the kernel master branch at 5.7.0 enabling support for them in the mainline driver tree.

So I want to demo again, very much like the previous demo, what a better merge looks like when you're merging from master maybe once a week or once every other week. Here again, I'll do a git status just to show everything's up to date. I've got the same remotes, kernel.org and my origin, git.ppdev. My log shows that I'm on 5.7-rc6 at this particular moment. Now I'm going to git fetch the kernel.org remote, and that's going to take a little while because I had not prefetched this time around, so we'll give it a few seconds. Okay, we've got some new tags here; in particular, I see a 5.7-rc7. That's one week later than the tag I had been on, and I'm just going to merge that tag into my PP master branch. And there we go: it's an automatic merge. The entire git system was basically built to do exactly this, and we're leveraging it to do what it was built to do. And it works.
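In shell terms, that weekly merge is roughly the following (again, remote and branch names are from my setup as described; the commands are a sketch rather than a literal capture):

```
# The same flow when you're only a week or two behind master.
git status                  # on pp-master, currently at v5.7-rc6
git remote -v               # kernel.org remote + origin at git.ppdev
git fetch kernel.org --tags # picks up the new v5.7-rc7 tag
git merge v5.7-rc7          # one week of upstream history
# => clean automatic merge, no manual conflict resolution needed
```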
262 commits during that one-week cycle, and we're done. It takes just a few minutes every week or every other week. I wouldn't go beyond about every two to three weeks between doing this. The things I do run into are mostly config changes, where the default setting of a config changes or a new config option becomes available; sometimes that will break an ethernet situation, or USB, or something else. But by and large, when I do these pulls, I can immediately put that master kernel directly onto our system. It's actually occasionally been fun: we have a little friendly race between myself and the developer who sits next to me, who runs Arch Linux, which is notoriously fast to get all the Linux updates. I like to make sure that, as I pull in kernel updates and run kernel master on our hardware in the field, I stay just one little step ahead of Arch Linux. And because I'm keeping up to date with master like this, it's actually reasonable to do that.

Graphically, this is similar to the previous graph I showed. This is what that small merge, that better merge, looks like: I'm just taking little tiny bites of the elephant every week or every other week. As kernel release branches come out, I can branch with them, follow them, update and merge, and even commit new things to those branches if I know that's something that's going to live on that branch. You can also see the green dot there at the bottom; that represents maybe a new change or a new feature that I've done on the master branch, for a new board or for something else, and what I'll typically do is cherry-pick that over to the release branch if I need to. This system has drastically reduced the amount of time I spend maintaining the Linux kernel and keeping it up to date.

At this point, I want to talk about community support, because once you're on the master branch, your primary means of support for the details of the kernel itself is going to come from the community, and interacting with that community is a process that I'm still learning and still educating myself on. We had an experience this spring with our system's sound chip. As part of our overall system, we have a battery backup so that we can do a graceful shutdown, and as part of that graceful shutdown, some of the voltage rails are backed up and some of them aren't. We discovered this spring that in certain situations, during a voltage glitch where power went away briefly and came back and the battery backup kicked in very briefly, one of the voltage rails to that sound chip was not battery backed up, and it should have been. As a result, after the power came back, the user experienced no sound. That is not an acceptable situation for us, because in a noisy tractor environment, those sounds cue the user to let them know that something's wrong with the system or with the planter behind them.

As we dug into this, this is where I started to create that division of labor on support. My first line of support was to go talk to Texas Instruments, who created the sound chip, and I was able to talk to them about the voltage rails and about the fact that one rail stayed present while the other came and went. And they were able to tell us: yeah, that's not recommended.
We really wish you wouldn't do that, and in future designs you need to make sure that all of those rails are treated together, either all backed up or not. But they were also able to tell me two very important pieces of information. First, we weren't going to damage the chip: even though we were doing something not recommended, the chip itself was going to survive and be a reliable part of our product family. Second, they told me that if I could do a full software reset of the chip, I could potentially recover sound.

That led me back to the Linux kernel, and it led me to wonder: when a power glitch happens, how do I get the system to reset the sound card? I eventually landed on the idea that I could use a GPIO which monitors that mainline power, and by monitoring that GPIO, I could unregister and re-register the sound card with the kernel sound subsystem. So I went to the kernel mailing lists, and I told them what I wanted to do, how I wanted to do it, and my big-picture solution. And I asked them: is this the right solution? Are there other solutions out there that I should have examined? What do you think of this solution that I'm proposing for my problem? And what I got was silence. Nothing. Nobody responded. I've realized since then that the way the Linux kernel community responds when you're not asking the right questions is by not answering the wrong questions. Silence is the feedback that says: hey, you're not asking something we can help you with. In my case, I had made a rookie mistake that I should have known better than to make: I made my question about me, about my problem, about my product, about my solution. My question to the community, to the mailing list, was all about my stuff.

As I dug in and started to implement my solution, I got a little deeper into it. I was able to use that GPIO, and I created a work function that would monitor the GPIO and register and unregister the sound card as the power came and went. That was working pretty well. But then we discovered that in certain cases, when a sound file was being played during a power glitch, the sound would still never come back. I dug through the kernel, about five or six layers deep into the sound subsystem, and I found that there's a list in the sound subsystem, consulted when I unregister the sound card, of all the open files that the sound subsystem is waiting to have closed so that it can successfully unregister the card.

Now I go back to the kernel mailing list and I say: hey, I'm in this function, and if there's a file in this list of files that need to be closed, I don't unregister the sound card successfully; I get hung up in this function. But if that list is empty and all those files can be closed, then I successfully unregister my sound card and, likewise, successfully re-register it when power comes back. Immediately, within a few hours, I got a response from the community, and it said: yes, that function, with that list of files waiting to be closed, is exactly the way that function is supposed to work. I said: well, that's great, but how do I get the files to close? Why are they hung open? Why aren't they closing? And I got another response very quickly.
It said, basically, that the kernel is waiting on user space to close the files because it's done playing them. Okay. And I said: well, user space doesn't know something went wrong. User space doesn't know it's supposed to close the file. What do I do? Another response came back from the mailing list, and it said: here's an underrun/overrun function, an XRUN function, that you can call when you detect the GPIO power glitch. You can artificially make user space and the rest of the kernel stack think that an error occurred, which triggers an error all the way up the call stack back to user space. Now my user space application knows that something went wrong and that it needs to close the file, and it does so. As soon as it closes the file, we successfully unregister the sound card, and later, when power comes back, I can re-register that sound card, re-initialize it, and get sound back.

So, bottom line: when I made my kernel support question about my problem, my solution, my product, when I made it about me, I got silence for an answer. But when I dug in deeper and made it about the kernel, I got answers, and then I got solutions, and then I got help. The vendors, the silicon people, TI in this case, supported their product, but they didn't support Linux. The Linux people didn't support my product or my solution, but they did support Linux. Finding the right people to do the right facets of support as you need them: that's what I found was critical here.

Finally, KMM, kernel maturity model, level four. This is very similar to level three, with the added element that at level four you give back to the community, whether through source code, through community support on forums and mailing lists, or even through giving talks at conferences like this one. It's our job, as we mature in our relationship with Linux, to give back to the open source community. That's what level four is all about. It might even mean contracting a mainline development house to do some open source development for you, to make a better driver. That's very much what was happening in that Etnaviv conversation I talked about earlier: the person who helped me was an integrator like myself, but he was hiring a contract house to help him develop open source drivers. He was giving back to me, and he was giving back to the community, through his work.

I want to take a moment now, as we close this down and wrap up, to share some of the things I've learned and some of the communications I've had with my boss about how to convince your boss that going mainline is actually worth the investment. First: total cost of ownership. What do I mean by that? My job role is primarily our kernel and Yocto BSP maintainer. As we've transitioned to the mainline kernel, keeping our software up to date in small pieces a little at a time instead of in great big chunks, what has changed in my job is that I now spend just a little bit of time every week doing the kernel maintainer role, and that frees me up throughout the year to fill other roles as needed throughout the corporation.
Whereas previously, there was a very large chunk of the year where I did nothing but weeks and weeks of that Herculean effort of maintaining and updating from one release to the next. So for us, going mainline has actually freed me up to do more and different things and diversified my job situation.

The other thing I'd like to say is that software inertia is real. By that I mean we have found, with the kernel and with other software and application projects, that when you no longer maintain a really large project and you let it get a little bit stale, the effort required later to bring it up to date becomes just too much to bear. It becomes more than a small or even a medium corporation can handle, because the expertise is gone; as a software developer, you're only as good as the code you wrote last week. I remember when I first learned how to write software, they told me that when you write code, you know it and you understand it, and then six months later you go back to debug that same code and it may as well have been written by somebody else entirely. That's what I mean by software inertia: if you don't keep things up to date and rolling forward, then when you do finally decide to roll them forward, they'll have too much mass, too much stale, stagnant inertia, and you won't ever get them going again.

And then, when the unforeseen does happen, when the touch controller changes, when you get into a situation like we had with exFAT, or when a customer comes to you wanting some new feature, you'll be in a position where you can make the choice. You can have that conversation in the middle of the season, like we did with exFAT. We chose to delay it, but the point is we had the choice. Had we been running an older kernel, as we had done in previous years, there would never have been any choice in that moment. We would have been forced to wait until the next kernel cycle, when we update during the off season. We couldn't even have had that conversation about whether to enable exFAT during the high season.

I'd also like to say: choose your own adventure; control your destiny. The more you get into mainline, and the more you get into chip-up design, the more you as a corporation and you as an individual will be able to make all of those decisions. And when it comes to support, the vendors, the system-on-module people and the silicon people, do a really good job of selling and supporting their product. Their product is the system on module; their product is the silicon. So lean on them, work with them to support that silicon, very much the way I did with TI and the sound chip. They did a good job of helping me understand their part, their chip, how it worked and how it interacted with the voltages on our board. On the other hand, the Linux community and the Linux Foundation sell, if you will, Linux. It's their job, their desire, their purpose to sell and support the Linux kernel and all the other things that go with it. So let the community support the software, the kernel, and use those two entities as two sides of the same coin, each in its own best capacity, so that you get the best of both of them.
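To put a number on how small that exFAT choice actually was at the kernel level: once the driver landed in 5.7, enabling it is essentially one config symbol plus a rebuild. Here's a minimal sketch, using the scripts/config helper that ships in the kernel source tree; the paths, device node, and build flow are illustrative assumptions, not our actual build system:

```
# Hypothetical sketch: enabling exFAT once it landed in mainline 5.7.
cd linux                                         # kernel source checkout
scripts/config --file .config --module EXFAT_FS  # CONFIG_EXFAT_FS=m (fs/exfat, new in 5.7)
make olddefconfig                                # settle any dependent options
make -j"$(nproc)"                                # rebuild

# Sanity check on the target with a customer-style USB stick:
modprobe exfat
mount -t exfat /dev/sda1 /mnt                    # device node is an assumption
```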
Now, to go back to our SOM and chip-up conversation from a few minutes ago: basically, use system on modules selectively. Target them at specific places where the volume is very low, or where it's something you don't plan to support for a very long time, or where it's an internal research and development project. But by and large, I encourage you, even with limited resources: take control of your designs, do the chip-up, do the work with your manufacturing partners and your schematic partners, so that you can migrate towards a more integrated system.

With going mainline, I've talked a lot about kernel.org, the master branch, and following the mainline kernel. But I'd also like to touch on U-Boot and the Yocto/OpenEmbedded projects. With U-Boot, I had an experience this past spring where I had allowed my U-Boot to stagnate; I was running approximately a 2018 version. When I tried to update to the 2020 versions, I found that in the meantime U-Boot had gone through a significant transition to a more kernel-like driver model, and when they did that, my board files and all of my earlier work had become stale. I had to spend a lot of effort to overcome that inertia and get the ball rolling again, to get U-Boot back up to date with master. I've now done a better job of bringing all of my U-Boots, for all of our platforms and portfolio products, back in line with master, and I will do a better job in the future of keeping them up to date.

Finally, Yocto and OpenEmbedded. This is the one exception. For those projects, because I at least am very much a consumer of them, I don't modify their source code; I basically just consume them and use them as a tool to build my distribution, to bring my U-Boot and my kernel together. Because I'm not actively interacting with their internals, I find it best to release-bounce them, going from Thud to Zeus to Dunfell. By doing that, I keep all the different pieces and packages in sync and in parallel with each other. A few times I've tried to go to master on Yocto, and what I found was that there were just so many moving parts that it didn't always work out the way I thought it should, and it didn't always create a unified build the way I thought it should.

Finally, I'd like to say thank you. I appreciate your time, I appreciate you coming, and I appreciate your patience in working with us as we created this virtual Linux conference in these unusual times. Now we've got a few minutes; if you'd like to type in questions, I'll do my best to answer them. However, I'd like to say, as with many of these talks, it's very possible that those of you out there in the audience have more experience and better answers to these questions than I do. I welcome that, and I want to learn from you as much as you learn from me. Again, thank you for your time and your attention.

All right, this is Dexter. Thank you, everyone, for everything. We've got a few more minutes here; let's take about 10 minutes. If you'd like to type in some more questions, I can try to answer those live now. One of the questions I've gotten already: how do you successfully merge local changes when pulling master?
I find that, by and large, we don't have a lot of merge conflicts with our local changes. That's part of why we keep the internal repo, to manage that. For the most part, now that I've kept track with master and I'm doing it in smaller and smaller pieces, those merges don't generally conflict. When they do, it's typically an API change in the kernel or something like that, which I have to manually tweak to keep our code up to date with the latest of what the kernel is doing.

Another question was: do we merge the vendor kernels back into mainline? No, we have not. That's been left primarily to the vendors, as far as what they want to do with it. We don't actually pull from vendor kernels anymore for the products we ship. We're primarily running NXP/Freescale and TI processors on the devices we ship, those are very well supported by mainline, and we really do just pull straight from mainline and no longer pull from the vendor kernels for those processors.

The next question: do we have an SoC running multiple architectures? At this point, we don't ship anything running multiple architectures. We are doing some research and development on an architecture like that, with something like an Arm Cortex-R5, a Cortex-R core, running a FreeRTOS system side by side with Linux. We're still very early into the research and development phase on that, so that's where we are with it; I'm anxious to see how it goes. I've done as much as a blinky-light demo with FreeRTOS that gets loaded by the firmware subsystem of Linux onto those separate cores, but that's about as far as I've gotten.

Another good question, about synchronizing the device trees between U-Boot and the kernel. I would really like to get that done, and I feel like that's where things are going, but no, I have not been able to do it. As I said, my U-Boot got stale back in 2018 and I had to fast-forward it to the 2020 version; that's when I learned about all this device tree support in U-Boot. I hadn't been using device trees in U-Boot at all until just recently, so I'm probably behind the curve on device tree in U-Boot.

There's a question about our hardware team and whether they specifically go for open-source-friendly hardware. It's not necessarily something we're tied to, but it is definitely very helpful when it happens. We have at least one or two products out there that are based on the Beagle family of devices, and that was helpful, especially in that first round of getting things up and running; being able to leverage some of those open source hardware platforms was useful for getting those first few chip-up designs running.

I've got another question about the latency of the chip vendors, for example NXP, in mainlining support for new chips. Interesting story about that. I actually had a phone call within the last few weeks with NXP. We currently use the i.MX, sorry, we use the i.MX 6, and they wanted to pitch to us and give us an update on the i.MX 8 family.
One of my questions was: what does the upstream Linux kernel support look like for the i.MX 8, and specifically the graphics? One of the main transitions we made on the i.MX 6 was from the proprietary graphics to the Etnaviv graphics. And as we had this phone call with NXP about the i.MX 8 and how well it was supported in the mainline kernel, their answer was: well, we don't know, we don't really think it's supported, we think you might have to fall back to our vendor kernel, basically. I said, okay, that's fine, and then I went and did my own research. I looked at the mainline kernel, I looked at the commits, and, Phoronix does a really good job of keeping up to date and giving a synopsis of all the different changes that happen in the kernel with each new release. What I found was that the i.MX 8 has been very well supported in the mainline kernel for quite some time. So even though NXP themselves wasn't really willing to commit to whether it was mainline-available, it was, and it had been.

Next question: do I spend time going through release notes and such? Yeah, I tend to. Phoronix, the one I just mentioned, is one I keep track of; they do a pretty good job of giving me a list of what's new in the latest kernel. That helps me know, oh, the graphics drivers, Etnaviv, those things have been updated, or there's a new file system, and it helped me keep track of the exFAT stuff. I don't do as much studying of the changes in merge conflicts; I rely on some of those third-party sources to sort through what does or does not change on a release-by-release basis.

This is a fair question: basically, what do you do when some of the peripherals for a device are supported in mainline, but others are missing? That's a good question, and honestly, I don't have a great answer for it. We're in that position with one of the internal projects we're working on; it's doing some research on a Xilinx part, and they're a vendor where some of their FPGA blocks have kernel drivers, but they don't upstream those drivers. So, honestly, I'm in the same boat, and I don't know what we're going to do as a final answer. If anybody out there has suggestions or thoughts on that, I'm open. I lean towards mainline, and I lean towards pressuring, and even part of the whole conversation here, the reason I wanted to give this talk, is to encourage the silicon vendors to get things upstream and mainline as much as possible, because then they themselves can leverage the Linux community for support, and we can all support each other, versus having to go to the vendor for support on a piece of the Linux kernel which is unique to that vendor. Potentially, you could pull from both and merge both, using something like a combination of the vendor master plus the kernel.org master, but I don't know what that would look like from a long-term support perspective.

And with that, I think we're running out of time. I appreciate everybody coming in; I appreciate your time and your questions. There is a Slack channel, I think we sent out that message here: the embedded Linux number two track, embedded Linux, on the Slack workspace.
I'll be there for the next few minutes if you guys want to continue this conversation there. That would be great.