So my name is Tom Rini. I am the head custodian of the U-Boot project, and I'm here to give a state-of-the-project talk. The first thing I want to go over is where we've come from, because in order to talk about where we are now, we should talk about our history in brief. The project started out in approximately 1998 with Magnus Damm creating PPCBoot, which supported the Motorola 8xx line. A little while later this was forked into ARMBoot, by someone whose name unfortunately escapes me, to support the ARM processors of the time, and in November 2002 it was all merged into U-Boot. At that time we added support for our third architecture, which, surprisingly to me when I was doing the research on this again, was x86. I would have thought it would have been MIPS or something like that, but no: U-Boot's third architecture was x86, specifically an AMD SC520 part. It was an SoC, so this was real, working-on-the-hardware support. Since then we've added support for over ten additional architectures. From shortly after Magnus created the project up until 2012, Wolfgang Denk was the head custodian and lead for the project, and oversaw a lot of good work in that time. In September 2012 Wolfgang stepped down and I took over as head custodian, and while I was preparing this slide one of the interesting things that hit me was: wow, this year will be five years that I've been the head of the project. So for this set of slides, my state of the project is really "here are some of the things that have changed over the last five years." Some of these are projects we've started and completed, some are things we've only just started on, and some have been in progress for a while.
But I think it's important to go over this history, because lots of people's first, second, or even third experience with U-Boot is not with whatever it is today. They might, say, have picked up a MIPS router that unfortunately happened to ship with an extremely old version of U-Boot, or a reference platform from whatever silicon vendor they're working with that has a fairly old U-Boot: it might be two years old, it might be five years old, sometimes it might be even worse. Especially if, for example, you're picking up an existing internal commercial project that you've been tasked with updating a little bit, and when that project was started they picked up whatever the vendor gave them, you might be on something approaching ten years old as your first experience of what U-Boot looks like. A lot has changed since then, so I'm going to try to cover a lot of the positives we've evolved through over the years. There's more information on our history over on Wikipedia, and I would encourage people who are so inclined to give it a read and update it, because there is also some somewhat out-of-date information there.

Now I want to talk a little bit about the state of our community, and the first point I want to make is about the last year of our bimonthly release cycle: we do a release around the second Monday of the month, open a merge window until about three weeks later, then do an RC every other week, so we get three RCs, then we release, then we go again. For the last year, every release has had more than 20 different companies and more than 110 individual developers contributing every time. I think that's a pretty good metric for where we are today, and I hope to see both of those numbers grow, but to me these are pretty good numbers.
We've also had a number of talks about U-Boot recently at various industry conferences, highlighting different features and aspects of what we've done. And I don't just mean at places like this, which are really fun, and I'm glad to be presenting here; for example, Simon Glass gave a presentation on U-Boot at ARM TechCon, talking about U-Boot to an entirely different audience that normally wouldn't hear about us, and that's a very good thing.

Another aspect of community for us is that we leverage code from a number of other places. We take a number of things from the Linux kernel, and when possible, if we find a bug, we push the fix back up; if we introduce a new useful feature, we push that up as well. My personal favorite here is that U-Boot pushed up to the kernel the feature that, when you want to build in a separate object directory, you no longer have to create that directory first. You can just do make O= with wherever you want your objects to go, and the build will say "oh, that doesn't exist, let me make it for you" rather than "that doesn't exist, I'm going to stop and complain."

When we're talking about community, there are a couple of other things I want to mention. I'm not just referring to the community around U-Boot itself, but the community of people who work on bootloaders. I am very glad that the Barebox project exists and is doing good and awesome work. For example, you might have seen Linux.com promoting a talk from ELC Europe about doing secure verified boot with Barebox. Looking at some of the details: hey, cool, they leveraged our verified boot work to do it. I am really glad they did, because it shows that our idea was good, and that it's a portable idea other people can leverage. Competition makes us all do better work.
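To make the out-of-tree build behavior mentioned above concrete, here is roughly what it looks like from a checked-out U-Boot tree; the output path is just an example, and the directory no longer needs to exist beforehand:

```
# Configure and build into a separate object directory; the build
# system now creates /tmp/uboot-build on demand instead of erroring out.
make O=/tmp/uboot-build sandbox_defconfig
make O=/tmp/uboot-build -j8
```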
By the same token, Barebox first came up with the idea of building the bootloader as a host application so that you can test things and do other useful work that way. Hey, that's a good idea. We have that now; thank you, Barebox. Similar things can be said about coreboot and even TianoCore. It is good that there is a diverse offering of bootloaders available today because, again, as I was saying, I believe that the competition, or maybe competition is the wrong word, the fact that multiple projects exist and can show off their ideas, not just talk about their ideas, leads to better implementations and better outcomes for everyone.

Now I want to talk real quickly about the architecture and system-on-chip support we have today. Taking a quick look first at 32-bit ARM, in no particular order and as a non-exhaustive list: we have support for things ranging from Atmel's AT91 parts to their ARMv7-based parts. We have support for Rockchip, for a large number of Texas Instruments platforms, for NXP's i.MX line, and for lots of other things I've listed there, plus even more platforms that I didn't want to list here simply because it would have taken up the entire slide. I do want to mention, just at the end there, that we have support for STM32, because we now have good and reasonable ARMv7-M support for the cases where it does make sense to say "I want to put U-Boot on here so that I can do some other smarts elsewhere in my system and go from there." So even in the case where you're running on an ARMv7-M, we are an option now, depending on your system design, and that's important.

Let's talk about 64-bit ARM real quick. Again a non-exhaustive list: we support the NXP Layerscape family. We have support for Allwinner and Xilinx. We have support for UniPhier, Tegra, and Marvell.
We have support for a large number of the various 96Boards families, and again, I am very happy to see this. I believe it once again shows that we have a good and reasonable architecture and design for how we share code between our platforms and our architectures and so forth.

We have support for MIPS. If you've been doing MIPS work for a long time, you've probably heard of the Malta platform; that's the historic reference platform, and we've had good support for it for a long time. But we now also have good support for the Boston platform, which is the modern reference design.

Going back to x86: while it was our third architecture, part of the reason it surprised me was that when I picked up the project, x86 support had been languishing for a while. These days that has changed again, and x86 is a very well-supported platform. We can run in 32- or 64-bit mode, and we support a large number of the recent releases from Intel. On my laptop over here, which is unfortunately not what I'm running the presentation off of, the only reason I'm not running U-Boot on it right now is that I made the mistake of making it my primary laptop before I took the time to fully customize it and risk having to reinstall it a few times. But we can run on modern Chromebooks, even. That's kind of cool, I think. Once again, this is a very non-exhaustive list of the architectures we support. There are things like ARC and NDS32 and Blackfin, and of course PowerPC; we still have good support for PowerPC these days.

I now want to talk briefly about some of the various important features that we've added, again over the last several years. SPL, the Secondary Program Loader, is a way to build U-Boot with a much reduced footprint, where the primary goal is to load a full U-Boot.
SPL gets used when you're booting from something like eMMC or NAND or what have you, where the ROM is not able to read an arbitrary amount of code and execute it directly, but instead has to read a much smaller amount of code, typically put it into some form of already-initialized RAM, such as SRAM or IRAM or whatever your system design has, and execute it from there. One of the things we've added in SPL is the ability to say: in addition to going on to a full U-Boot, why don't we just go directly to Linux? If we have the ability to read U-Boot, we probably have the ability to read a device tree blob and a kernel, so why not get out of the way that much quicker and save a couple of seconds on boot? This is further enhanced by some other things that I'll talk about shortly; like right now. I really wish I had presenter mode, sorry.

So, cryptographic image support. As I was saying, one of the features we've had for a while is the ability to cryptographically sign the payloads. This includes support for proprietary methods, such as when you're booting, say, a TI high-security device, where there are certain functions within the ROM that you can use to have your payload authenticated; we can leverage that to verify our next stages and go from there, so that you have a continuously valid chain of trust. Or we support you using your own keys, so that you know you can trust the system because it's signed with your keys.

Generic distribution boot support. There have been a number of talks about this at various times, and I am very happy that the work that went into it was done. It was indeed not an easy task, but I am, again, very grateful that the community spent the time to make this work, and work well, for everyone who wanted to opt in.
What this means is that when you're building a board, if you select a couple of options in Kconfig, you get a set of boot scripts that will do their best to look in every place your board supports, in terms of storage or over the network and what have you, for an off-the-shelf distribution, be it Debian or SUSE or Fedora, or any sort of custom thing you built that puts things into these well-defined and documented locations, and will go ahead and boot it. The most interesting thing to me, of late at least, is that FreeBSD wants to leverage this exact same infrastructure: on their ARM systems, where they don't yet have the ability to use an EFI application, if they drop their loader into these known locations, all right, now you can just run FreeBSD on your board. That's cool to me.

Finally, I'll mention EFI application support very briefly, because there is a talk about it tomorrow. The very, very short version is that you can now take, say, GRUB built as an EFI application for ARM and boot it, and therefore use it as your general bootloader, bringing a much more familiar environment to large sets of end users: "oh, this works just like my other machines do," or "here's my usual GRUB prompt; when I get into Linux, here are the files I need to modify; this is just like all my other machines. I don't need to learn new tools to get on with the parts I actually want to do with my system, which are the reason I have this particular board in front of me." We're lowering the barrier to entry so that people can do what they want to do with their board, rather than having to learn new tools just to get to the point of working on the project they actually want to work on.

So another thing I want to talk about for a little bit is our testing and CI, or continuous integration, efforts. I'm going to talk a little bit about travis-ci.org.
I'm going to talk a little bit about test.py, a little bit about tbot, real briefly about Coverity, and then just a little bit about various board farms.

So, Travis CI, if you've not heard of them: they are a cloud-based service offering various build environments for running your tests in an automated fashion. Like many other providers, they offer community versions of their services, so that if you happen to be an open-source project, you can say, "well, I'd like to do my stuff over here," and they'll say, "great, here you go; here are a couple of restrictions on what you're able to do, but go at it, have fun." And we have. With Travis CI we are currently able to build 97% of all the boards in U-Boot; that's another way of saying we're able to build just about 1,200 different configurations of U-Boot in Travis. In addition to building all of those configurations, we kick off ten different QEMU-based instances running our test.py framework, and we also run it on sandbox, along with a couple of other checks such as sloccount and a few other things.

But to me the best thing about Travis CI is that anyone can leverage this support. If you have a GitHub account, you just need to go into Travis CI and accept the various permissions, and then any time you push to your own personal GitHub repository of U-Boot, Travis CI will see that within a few seconds and kick off the build, and a couple of hours later you will get results. It's not something you want to do every single time you make a change, but before you run off for the day, or head off to lunch, or just before you push a large series out, you can run this and be much more certain that you haven't accidentally introduced a regression on some other platform that you hadn't thought would even be applicable.

So test.py is the next thing I want to talk about.
This framework is based on pytest, so there's lots of additional public documentation to help you understand how the tests are written and how you would write your own. test.py works both on real hardware and in QEMU, as I was saying with Travis. This is important because we have tests written to be target-local, which is to say they execute on the board and make sure that, say, the various shell commands work as we expect them to; but it can also coordinate between a target and a particular host, so that you can make sure that, for example, DFU works in all the cases it has worked in historically, or that your networking still works as expected, because you're able to load an image over the network and verify its checksum. And to some extent we're even able to enable and replicate some of those tests inside Travis, because we can, for example, fire up local networking with QEMU and therefore run all the network-based tests every time. I'll get to the board farms shortly.

The final thing I wanted to mention on this slide is that we have a script, fs-test.sh, which will run, I'll say, 30 or so different tests, but will run them on FAT, on ext2, on ext3, and on ext4. These check for various regressions that we've had in the past, to make sure we don't accidentally introduce them again, and to make sure that, again, when we're writing files to a filesystem, everything is consistent and works as expected.
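For reference, invoking test.py looks roughly like the following; the flags come from the framework's own help text, and the board name and lab identity are examples rather than real configuration:

```
# Build the sandbox configuration and run the pytest-based suite on it:
./test/py/test.py --bd sandbox --build

# Run against real hardware instead; --id distinguishes multiple
# instances of the same board type in a lab (names here are examples):
./test/py/test.py --bd am335x_evm --id farm-bbb-1
```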
We are in the process of moving these tests over to test.py, and it is likely that the moved tests will initially be limited to working only on sandbox. From that point we'll figure out how to enable these tests on real hardware as well, because that's where you can start to run into real complications. For example, if you wanted to send a file up over DFU and have it written to an ext4 filesystem, you want to be sure that the particular chipset you happen to be using for MMC, or that you happen to be using for USB, hasn't gone bad with a recent change and corrupted your image.

So, tbot. I didn't get the animations right, sorry. To quote from the README for a minute: tbot is "a tool for executing tests on boards." To me this puts it somewhere between the functionality one would get with something like Jenkins and what one would get using test.py. Heiko Schocher, the main author of the tool, has a very good video up on YouTube that goes over how you would use it to run a bisect on real hardware to find a real problem. Now, the reason I wanted to mention this tool is that, on average, he has found maybe one or two regressions every release with his tools. So this is a very important tool, and it gives us additional ways to get at different hardware, depending on how you have your board farm set up.

So, Coverity. It's quite likely you've heard of it, because it is well utilized in Linux and in other projects. It is one of a number of commercial tools that offer free instances to open-source projects, with small limitations, and we have a community instance under "Das U-Boot". The limitation that being a community project gives us is that we are only allowed one of what they call a configuration.
So today we are only able to build for sandbox, which is good and bad: it means we are unable to get as much coverage as we would like on each architecture's code, but we are able to build a large number of platform-specific drivers against sandbox. They will not function at all there, but since we are just looking for build-time coverage, that's certainly fine. In terms of what Coverity has gotten us: I try to run it at least every single time I make an RC, and over the last year it has found 45 different defects, which I have passed along to the people who introduced them: "hey, can you please go ahead and fix this problem?" "Okay, sure, here we go," and our code quality improves. Most of these bugs have been in our tooling, and those are still important bugs to get fixed, because this is the kind of example code that someone might pick up when looking at how to write a tool to do something else, and we want to make sure we are providing good and correct example code in these cases.

There was also one case where Coverity decided that our version of a Kbuild tool had a bug. Okay, I wonder what the CID is for this over in Linux? Looking over in the Linux project, it wasn't reported there, and for whatever reason no one could ever figure out why; but we found the problem, fixed the problem, and pushed the fix up to Linux. We're contributing back to the community, because we are all just one big open-source community here, and it's important to work together. We shouldn't laugh about this; it's funny because it's true.
One other thing I do want to mention about Coverity is that there are a number of other vendors who have commercial instances of this tool and run it on U-Boot, and they have also found, fixed, and sent up to us fixes for various problems over the last several years. This is good, and I am happy that companies see this as a worthwhile thing to do.

Now it's time to talk ever so briefly about board farms. First of all, DENX has their board farm, which is administered via tbot, and as I was saying, it does find, report back, and get fixed various regressions, and this is good. Various private companies, which I don't want to name but which I do believe have some representatives in the room, have their own board farms up and running; they run test.py, they run their own tests, and this is good, and helps us avoid regressions on various platforms that I don't have, or that other people don't generally have easy access to.

Finally, there's my board farm. I know my cables are a slight mess; it would be worse if you could see what's underneath that shelf. The shelf is kind of the first little tidbit I want to pass back to everyone else: I'm using simple garage shelving, which means my shelf is MDF, and when it's time to swap out the boards I can just toss it out, go down to the big-box store, buy a new sheet of MDF, and drill the holes where I need them for what I have.
The shelving doesn't change, just the individual shelf, and so far I'm actually pretty happy with that. There are a couple of things I do want to point out here. First, a number of the boards over there are using FlashAir Wi-Fi-enabled SD cards. If you have a board that can boot from a filesystem, it is possible, from the command line, to upload a new binary to it, power-cycle the board, and have your tests run on your new binary. You don't have to rely on "well, if it breaks I need to pull it out, possibly run JTAG, possibly do the unfortunate SD-card shuffle." If it breaks, as long as the board is able to power on, and the board is going to be able to power on, after a couple of seconds the SD card will show up on your network, you push over a new working version, and go from there: you have unbricked your board remotely. There are some configuration details to that, and I've got an article up on it.

So, Jason was just saying they don't have a micro version, and it's true that they unfortunately only come in full size, but there are a good number of easily purchasable adapter boards. A large number of my boards there actually have the full-size SD card stuck into an adapter stuck into the micro-SD slot, because it's what I need to do: I can't change the hardware, and I really want this functionality. It means that whenever I do a push, for example later tonight when I've got a pull request, I'm going to grab it and push it out to where I do tests; Travis CI is going to fire off its two-hour-long build cycle and run QEMU on a bunch of things, and my board farm is going to trigger and also run everything on all of those pieces of real hardware, without me doing anything.

So the next thing is the Yepkit YKUSH. This is a USB hub where the ability to control power to each of the ports in software isn't just an accidental feature that happens to work; it is a supported feature of the product. Basically forever, USB
as a specification has allowed for remotely controlling port power, but for the longest time it was more luck than design whether you picked up a USB hub where this functionality actually works. Here it works extremely reliably, and it is what I consider a must-have for certain boards I have where, if you have USB plugged into the gadget port, in addition to providing power however it normally accepts it, just enough power will leak in from the USB side that the board will never fully reset, and therefore the board gets kind of wedged. You either have to take it out of your lab, or you get something like this, where you can remotely turn off that USB port so there's no power leaking in that way. You turn off the power everywhere else, the board is really off, then you bring it all back up, and you have successfully reset your board and worked around a limitation of the actual hardware you have, which you may wish weren't there, but there it is, and we have to live in the reality-based community.

So the question is, if it only comes in a four-port version... there's a four- and, I want to say, six-port version; what is the maximum number of ports? You're shaking your head. The short answer is to check the website, because I would have said three; I must have been looking at something else, I apologize. And even at three ports it is still very useful, yes.

Yes, so the comment is that there are additional USB hubs that provide this functionality; however, they have a much larger price tag. The very nice thing about this one is that I believe it was 30 euros, plus another 7 for shipping to my house in North Carolina; it only took about a week to get here, and it works perfectly. I could have gotten a larger one for a lot more, but this does exactly what I need.

The final thing I want to point out is that up in the upper-left corner you might have spotted a relay controller. These are used for a couple of different things
in modern board farms. What I'm using it for is simply the ability to toggle the reset line on certain boards that I have; however, a much better application of it, which I might get to at some point in my own lab, is that you can use it with an off-the-shelf ATX power supply to provide the power to your boards in a large number of cases.

So the comment is that the link I provide is to the 8-port relay that I have; that same company makes 16- and 24-port versions as well.

Yeah, so the comment is about Digital Loggers, and one thing that I don't have on my list here is that my power control uses two separate Digital Loggers power switches. Whenever I need to turn one of the boards off or on, I have a script that calls curl with the correct username, password, and port, and what I want it to do, and it will turn the boards off and on remotely. The downside to this is that it can start to get both expensive and cluttered, depending on just how many boards you want to have in your board farm, which is why people have moved on to using relays and ATX-controlled power supplies.

So the comment is that you can also use a relay to simulate button presses if you have things wired up correctly. I don't have that as a test in my setup, but I could see various uses for it, depending on how your hardware is and what one needs to do in order to put it into the correct mode to toggle flashing.

So now we're going to move on to various parts of U-Boot that have changed over the last several years. The first thing is that the venerable MAKEALL script, which you might have seen if you tried to build an older version of U-Boot and wanted to build more than one target, was retired in July of this past year; we finally said our replacement is good. The replacement is called Buildman, and it was introduced back in April of 2013. After about three years we said: okay, all of the use cases one might have for MAKEALL, we can support those in this new tool, and
we can do a lot of other cool things as well. Buildman is a lot more flexible, in part because instead of working off of the environment (because MAKEALL was a shell script), it reads an ini file, where we can configure where it should look for the various toolchains. This means we can build multiple architectures in a single command. For example, if you wanted to build everything from Freescale, you can just say "buildman freescale," and that will build all the PowerPC stuff, all the 32-bit ARM stuff, and all the 64-bit ARM stuff, without you having to figure out which environment variables to set to have the correct toolchains found. Further, we can even define what we want to build in a single command using regular expressions. To support Travis CI and their limitation that any job in your free instance must finish in about 50 minutes, we do things like say: all right, I want to build everything that is an i.MX board but is not from Freescale, because I have a different job that covers building Freescale and i.MX, so that I can split this large number of boards into something that finishes in just under 50 minutes. Another really handy thing is that Buildman will build two different revisions, or, if you want it to, every revision in a given series, and tell you, for each of those commits, for every board you've told it to build, whether the binary size changed at all, and if it did, which functions grew or shrank in size between those builds. We do care about the size of our binaries, so this is an important feature, and it also makes other forms of regression testing a lot easier. For example, when we're doing Kconfig migrations, which I'll talk about in a little bit, we can go ahead and build everything before the migration, build everything afterwards, and there should be absolutely no size differences
because we've just moved an option from a header file to a Kconfig file; it should still have the same value as before, all the same code as before should be built, so the binary should be exactly the same.

Binman. This is a relatively new tool that we've introduced, for creating a functional output image from more than one binary. We use device tree syntax to describe how we want it to be put together. I can see some grins throughout the audience, and I want to point out that making use of device tree syntax to do other things is a relatively common practice, especially in U-Boot: when we're building a FIT image, for example, which is something we've had for a long time now, that syntax as well is device tree.

The best way to explain this is with a couple of examples. What we use Binman for now on x86 is to say: here is where we need to place each of the files that must be in the final image, where they must go on the map that the ROM requires in order for the chip to work correctly. We can say that U-Boot goes over here, the blob for the VGA goes over here, if it's using FSP that blob goes over there, and we can pad things out as we need to, so that in the end we get one file that can be written wherever it needs to go and work correctly. Another example is that on Allwinner devices it is quite common to want one file containing both SPL and U-Boot itself; this is helpful when writing to an SD card, but it is absolutely required when you want to, say, boot over USB. So we say the SPL part goes over here, then we pad it out a little bit, and then we place U-Boot over there. The final example of where this is really useful is on AArch64, or ARMv8, where we can say we need to put ATF over here, U-Boot over there, and other blobs where they have to be within the file, so that we can write a single image that will work on our ARMv8 platform.

So, Kbuild and K
config historically U boot had its own build system but for about three years now we have had K build as the logic in U boot so whenever you look at a U boot make file it's the same exact syntax as the kernel this also means that whatever tricks one would do in the kernel to for example build a single file or to get the .i file or to preprocess your assembler so on and so forth all of those same exact chips and tricks work same exact way here it is the same build system the other part of this is transitioning from having a header file that describes what features are going to be enabled to using the K config tools and syntax as the kernel this transition has been in progress pretty much since then our implementation of the K config language itself is in sync with the 4.10 kernel one kind of difference between U boot and our use of K config and the kernel and its use of K config or at least my the way I look at how the kernel has historically used it is that we want to have more logic in the K config files in terms of giving a correct default value for a given platform as opposed to saying that well we'll just have a default of zero or an empty string or whatever and leave it up to the person creating a new board def config file to put in the right value we do this to both ensure that it is more likely when a person introduces a new board that they will get something that works but also to reduce the size of our def config files and the final thing I want to say about K config is that with 4.10 the kernel has introduced a new keyword called imply which lets you say that if you have this one option set you really ought to have these other things enabled as well but it is still possible and easy to do for the user to say no I know better I'm customizing this board based on what I've done here I'm going to turn these things off but still have the def config say that well we're going to start with the EVM the EVM has lots of stuff we're going to make sure that all these 
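As a sketch of the difference between forcing an option and implying one, a board's Kconfig entry might look like this. The board name and the particular option symbols are illustrative, not taken from a real board file.

```kconfig
config TARGET_EXAMPLE_EVM
	bool "Hypothetical example EVM board"
	select DM
	# 'select' forces DM on; the user cannot turn it off.
	imply CMD_DM
	imply OF_CONTROL
	# 'imply' makes these default to enabled for this board, but a
	# user trimming a defconfig for their own hardware can still
	# disable them without fighting the Kconfig dependency solver.
```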
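Going back to binman for a moment: the kind of layout described above can be sketched in device tree syntax roughly like this. The node names and offsets here are invented for illustration, not a real board's ROM map, and the exact property names may differ between binman versions.

```dts
/ {
	binman {
		/* SPL is placed first, at the start of the image */
		u-boot-spl {
		};
		/* pad out to a fixed offset, then place U-Boot proper,
		 * so one file can be written to SD or sent over USB */
		u-boot {
			offset = <0x8000>;
		};
	};
};
```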
Now, some stuff that we have in progress. Historically, U-Boot did not have a driver model; that has been changing over the last several years. Our conversion of everything to driver model is not done; it is making good progress, but there are still obstacles to overcome, because we do have binary size constraints, and we need to figure out the best places to make the trade-offs between code reusability and still producing something that will function on our hardware. Additionally, we want to make use of the device tree for finding our platform-specific data: where things are, addresses, and so on and so forth. But once again we run into the problem of: if it's in the device tree, how can we make use of it when it's something we need really early on? I was sitting in the Zephyr talk on using device trees this morning and went, "oh, that is really awesome; I need to try and work with that community as well to figure out how we can leverage that", because that's really cool and will also help us solve some of the other problems I'm going to talk about. We need to figure out how to use both driver model and device tree in SPL, but again, SPL is somewhere we often have a size constraint. A couple of examples: the smartweb platform, which I believe is an AT91-based platform, where we have about 12 kilobytes of memory; and the MIPS Creator CI20, where the ROM will only load 14 kilobytes, and maybe even a little less. There's been some experimentation there; the documentation says 14, but if you actually try it, it's a little bit less. Actually making good use of the device tree throughout is still something that is in progress, and I am confident that we will work through the problems.
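To make the platform-data point concrete: rather than a board header full of base-address #defines, the information a driver needs lives in device tree nodes like the one below. The address, size, and clock value here are invented for illustration.

```dts
/* Illustrative only: a driver bound through driver model reads
 * 'reg' and 'clock-frequency' from the tree at run time instead
 * of compiling in per-board constants. */
serial0: serial@44e09000 {
	compatible = "ns16550a";
	reg = <0x44e09000 0x1000>;
	clock-frequency = <48000000>;
};
```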
Working through them includes at least potentially moving, once we are up and running, from a static device tree to a live tree that reflects changes made as we booted the hardware: we noticed we actually happen to be on this variant of the hardware and don't have this particular peripheral, so let's turn it off; or we need these particular runtime tweaks because something is in one location and not another. That is, however, something we still need to decide is the best choice for us. The next thing, once we've talked about live trees some more, and there are also things to be solved here in terms of working with overlays, is: once we've done all this, does it make sense to say "well, we have our functional and correct live device tree, let's just pass that straight on to the kernel", or do we still need to load the device tree again, apply logic, and so on and so forth, and then pass that over?

Now, near-term goals. This is not a roadmap; these are my hopes for where the project will be by maybe this time next year, maybe a little sooner. I really, really want to see us finish the migration to Kconfig this calendar year. There are still a couple of challenges and hurdles to be sorted out; there are some cases today where our config.h files contain logic that will be hard to translate, but I am confident we can figure out a good way to do that. One of the things I've seen people talk about at various points is: wouldn't it be much easier to have SPL just load a Linux that can in turn do whatever flashing or recovery needs to happen, and then kexec into the real kernel? If there are people who want to make this happen, I am happy to offer assistance. I really want to see us have more, and in some cases better, tests in our test.py framework. This is again something where, every time I am going to push changes out to master, test.py gets run on a large number of instances, and all of that has to come back positive before I actually make the changes go live; so the more tests we have here, the better.
I would really like to strike up the conversation with kernelci.org again, because there are problems in the kernel, or at least in the community at large, that we can help each other with. For example, in the 4.6 timeframe the kernel introduced a change to their boot flags on x86 that broke us, broke coreboot, and broke anyone else who didn't happen to catch it at the time. If kernelci had builds of all of us, the different firmwares, booting the kernel out of QEMU, this issue would have been caught immediately rather than more haphazardly. Finally, I want to find more time to answer questions on Stack Overflow, and again I see some well-deserved grins out in the audience. While I have seen my fair share of bad answers and bad questions, I believe it is important to have enough reputation out there to downvote the bad answers and questions and upvote the good ones. It is much better for our community if, when a new developer or user goes to their favorite search engine, throws their problem at it, and gets a result on Stack Overflow, they get a good answer, not someone else asking the same question with no votes up or down and no answer. Because that then becomes the documentation; either it becomes the documentation, or it just becomes "oh well, this just doesn't work, I'm going to move on and do something else", rather than "oh, I can answer that, I know what you're trying to do there", or "I'm sorry, that's a bad question; please ask it in a different way so that it's more specific." I also really want to expand our Coverity coverage. This includes having more things built in sandbox by default: maybe a little over a year ago I last went through and said "here are all the commands I can easily build in sandbox, let's update the defconfig", and that needs to happen again. But by that same token, it
would also be really helpful if more people said "hey, I would like to review the defects that Coverity finds": Coverity has already done the hard work of finding a problem, so I can read the code and provide a solution, and that's a way I can contribute back to the community. There are a lot of outstanding defects there. I am only mostly certain that we don't have an egg-on-the-face type of bug like the one that happened to GRUB a while ago, the "oh, here's how you get around that password" issue, but who can say. And with that: questions?

[Audience] One of the things that was always a problem with coreboot was all the binary blobs required by Intel to be able to do anything. Are you finding that they're more cooperative with you, or what's the situation?

So the answer is that we work with Intel, and what they provide to us, as best we can. And that is the signal that we're out of time, so thank you for attending.