All right, it's half past. I have about 150 slides, so we should probably make sure we start early. It's my pleasure to show you all the amazing buzzwords you always wanted to hear in a single talk. UEFI, grub and U-Boot, all at the same time, isn't that awesome? So who am I? I'm Alexander Graf. I work for SUSE. My official position really is virtualization developer. I usually don't get around to doing embedded hardware things in my official assignment. The good thing is that I don't stick to my official assignments, so I end up doing a lot of ARM work, which is what is down here. I'm basically one of the founding members of the SUSE ARM team. Back a couple of years ago, four years or so roughly, we sat down together and got to a position where we said SUSE should be in the ARM business, which means we should also be in the embedded business a bit more, and we figured out ways and started working on things that moved us down that path. And this is one of the outcomes of that work. So how does booting on ARM work? Booting on ARM is a story full of mysteries. It basically starts off with your boot ROM. Can you hear me well? All right, awesome. You have a boot ROM in your CPU, which runs very target-specific code; it basically runs your SoC-specific code in there. That's what that symbolizes. And it's really only there to boot some other small stage of bootloader, or even a bigger stage of bootloader. Eventually you're getting to some kind of firmware, to something that whoever assembles a board puts on the system to describe how that board works and to bring it up initially so that the operating system can take over. And that is also very target-specific code, because you're obviously initializing a board, and so you need to know what that board looks like. And then you have an operating system running on top, which may or may not be target-specific.
And the handover part over here, that's basically the main piece we're talking about. Today that is either some custom protocol, external configuration files, vboot, UEFI, and I'm sure there are plenty more ways out there to get from firmware all the way down to your operating system. Now, me as an operating system person, I don't care about all of this over here, really. I mean, sure, I did patches, and I do work on it to improve it. But from a user's point of view, if I want to use an operating system, I don't want to have to care which board I'm running that operating system on. I just want to plug it in and have it work and not worry about which hardware platform I'm on. This is me coming from a server and desktop background, I guess. It's very different in this audience, but this is the expectation that our users, our customers have. Also, all the stuff down here, all that cruft, is very annoying if you really want to have a single, universal image. You cannot support every single boot method that is out there in that one image. It's simply impossible. We tried, believe me, we tried. It wasn't doable. So we really want to have one single protocol and then just be done: have one operating system, have one single protocol to talk to your firmware, leave firmware to others, and we're happy, right? Then we are universal and we're fine with what we're doing. So to understand how we could potentially get there, let's take a look at what the UEFI boot flow looks like. I'm gonna explain what UEFI is later, but the boot flow is basically the example to follow on how to do it generically, because x86 solved all that years ago, right? I mean, they do know what they're doing if they want to boot Windows on every system out there.
So you have your firmware, which is UEFI compliant, and that firmware goes and looks at NVRAM, which contains a boot list, a boot order, with a couple of files that it can boot from. So it goes to that list and tries to find the first file in that boot order. If it can't find it, it goes to the second file in the boot order. If it can't find that, it goes to the third file in the boot order. The important piece is that a boot entry is not just a device, it is actually a device plus a file, usually an actual path. Now if none of these boot paths work, and that's important for us here, if none of these boot paths work, then we're in the fallback case, in which case we have something called removable media boot. That was originally intended for things like USB sticks and CD-ROMs, where you did not know beforehand what you're going to boot. You're just plugging it in and it should work, which is basically what we're trying to do on ARM systems: you should plug it in and it should work. So this is really what we're trying to model. That removable media path basically has a predefined file name per architecture. You have different default file names, and the firmware searches every device that it knows of for that file name. It's as simple as that: do I have that file name on my disk? No I don't. Do I have it on my SD card? No, there's no SD card plugged in. Do I have it on a CD-ROM that's plugged in somewhere? Yes, I do, awesome. Next step. Next step is we're booting our payload. This, just in case you can't see it, is the new, amazing grub logo. So we're booting our payload: we're loading it from our storage and we're running it. That payload then receives a thing called the EFI system table. It's basically just a pointer, similar to how Linux gets a device tree as a pointer when you boot it.
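Roughly, the system table the payload receives can be sketched like this. This is a simplified mock, not the real `EFI_SYSTEM_TABLE` layout from the spec (which also carries a header, vendor string, and more); it just captures the shape of what gets handed over.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified sketch of the EFI system table handed to every payload.
 * Field names loosely follow the UEFI spec; the real structure has
 * more members and a different ABI. */
struct efi_config_table {
    unsigned char guid[16];   /* identifies what kind of blob this is */
    void *table;              /* pointer to the blob itself */
};

struct efi_system_table {
    void *con_in;                     /* console input */
    void *con_out;                    /* console output */
    void *boottime;                   /* boot services: devices, memory, loading */
    void *runtime;                    /* runtime services: NVRAM, RTC, reset */
    size_t nr_tables;                 /* configuration tables: DTB, ACPI, SMBIOS */
    struct efi_config_table *tables;
};

/* An EFI application's entry point receives its own image handle plus a
 * pointer to this table -- that pointer is the whole handover. */
typedef long (*efi_entry_t)(void *image_handle, struct efi_system_table *systab);
```

Everything the payload can do later, it does by chasing pointers out of this one structure.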
In an EFI world, the EFI payload gets the EFI system table as the reference it can use to figure out what it should do from there. The EFI system table contains really just four main things: console support, boot services, runtime services, and tables. Console is self-explanatory, I would guess. I mean, everybody knows what a console is. But wait, there's one slide too much. All right, the thing that we're going to use over here, oh no, sorry, too much animation, there you go. Using this system table, we now have a bi-directional communication channel between firmware and our payload. That's the important piece. You get this system table, our EFI binary uses it to have callbacks back into UEFI, so we have a bi-directional channel established, and thereby we can actually communicate with our firmware and make use of our firmware to do other things. Like, for example, boot services, which provide us objects that we can use to access devices that our firmware knows about. EFI, for example, has awareness of what a block device is, right? And it provides you interfaces to talk to block devices using some special protocols, basically using the boot-time services. So this piece of code can now go and read data from this block device, but it doesn't have to use the file system support from your firmware. It can easily implement its own. So, for example, grub can just go in and implement its own btrfs support, but use the block layer of UEFI to then, say, load a Linux kernel and run that one, which, again, is an EFI binary, in case you didn't know. The Linux kernel is a UEFI binary. It also gets the bi-directional channel to UEFI, and so it's running as a UEFI binary now. So now we're getting to the really interesting piece of UEFI.
It has this thing called ExitBootServices, where the payload basically tells EFI: go and kill everything that's not me, because I'm taking over the machine now and I don't want you to control the hardware anymore, I want to do all that control. It's called quiesce on Power, for example, in case you've ever worked on that one. There's usually a call in firmware where the operating system can say, all right, now I'm taking over. That also exists in UEFI, but UEFI still keeps some pieces around in memory, a bit at least, to provide runtime services. Runtime services are something that Linux can call into while it is executing. So you have Linux running and everything's there, but some pages in memory still contain executable code that Linux can call into to have other services provided. The three most important ones are: NVRAM, so you can modify the boot order; real-time clock support, so that you don't have to reimplement real-time clock drivers for every single system out there, and your board can specify which RTC it has and how to access it; and reset and shutdown services, so you can just tell UEFI, I want to shut down the system, and that's it. You don't have to implement that for every single system out there either. Those three are the main pieces in the runtime services. Now why do we have any benefit from using UEFI over the traditional boot protocols that we do have in the embedded world, right? I mean, they're standardized, they've been there forever, they're good. Well, there's a few things that actually make life a lot easier for us. The first and most important thing to me is that we basically create a bubble around our firmware thing there, right? We have a standardized interface all the way over here and create another bubble, which is what the user provides on the other end.
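To make the runtime-services idea concrete, here is a toy mock of the three services just mentioned: NVRAM variable access, the real-time clock, and reset/shutdown. The struct mirrors the *idea* of the spec's `EFI_RUNTIME_SERVICES`, not its real ABI, and the RTC backend is obviously fake.

```c
#include <assert.h>

/* Toy stand-in for an EFI time structure. */
struct efi_time {
    unsigned short year;
    unsigned char month, day, hour, minute, second;
};

/* The three runtime services the talk highlights, as a callback table
 * that survives ExitBootServices. Illustrative only. */
struct efi_runtime_services {
    long (*get_variable)(const char *name, void *buf, unsigned long *size);
    long (*get_time)(struct efi_time *tm);
    void (*reset_system)(int shutdown);
};

/* Fake board-specific RTC driver living in "firmware". */
static long mock_get_time(struct efi_time *tm)
{
    tm->year = 2018; tm->month = 3; tm->day = 14;
    tm->hour = tm->minute = tm->second = 0;
    return 0; /* EFI_SUCCESS */
}

/* What the OS keeps a pointer to after taking over the machine. */
static struct efi_runtime_services demo_rt = { 0, mock_get_time, 0 };
```

The point is that Linux never needs its own RTC driver for this board; it just calls through the table.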
That means that any value add that we have now which is not in our firmware does not have to be in firmware. We can just modify the thing that comes with our operating system and add something to it that was not in UEFI before. UEFI by default does not dictate any file system support except for VFAT. Well, good luck booting off a VFAT partition. It works, but it's a really big mess if you're a standard stock distribution. We really want to boot from btrfs, for example. So if we want to boot from btrfs, we just put grub onto a VFAT partition, where it can lie around and do whatever it wants. But grub then just loads its btrfs module internally, uses that to load the kernel, and we have booted off of btrfs. Or we can boot off of ZFS. Or we can boot off of an ext4 partition that happens to have the 64-bit feature enabled, with a U-Boot whose ext4 support is broken. All of these are basically fixable by the distribution, because the distribution is the one providing the file system, so why shouldn't the distribution be the one providing the file system driver at the same time, right? Basically the rationale is you have a line drawn here, where this side comes from a completely different entity than that side, and that's basically the main difference from the traditional embedded market that we've seen. One really, really amazing value that our product managers just love is this: you have a graphical boot menu, isn't that awesome? You can display logos. Yeah, I know, nobody really cares. But apart from it being graphical, having a boot menu is actually really, really useful, because you can go in, edit the command line in the boot menu and just type away. Which is maybe not what you need for your typical embedded device which runs as an appliance, but during development it comes in incredibly handy to be able to modify the command line using just a couple of keystrokes, right?
Another really interesting thing, if you're thinking out of the box, is that the first guys that I've seen adopt this UEFI boot path were actually not us. I mean, I pushed all that code upstream, but the first ones that I've seen using it were the FreeBSD folks, because they had the same problem. They have their own bootloader which then runs something off of ZFS or whatever, and they want to be able to boot FreeBSD on ARM64 systems. So they just jumped on board, ported their bootloader to UEFI, and they're using that on ARM64 these days, which is actually pretty cool. One thing you also should always keep in mind, well, this is really important to us and it might become important to you, is that this allows for compatibility. The same image now runs on all those systems out there, right? Because we added that layer, we basically tried to move all of the hardware specifics out of the image that we're actually deploying. That same image now can run on a random server system or just the same on an embedded target. So you suddenly have an image that is universal, and if you really do need to switch to a different type of hardware, you can, within the scope and the limits of what you're trying to do, obviously, right? But it just makes life easier to switch to new things, because you're not locked in that heavily. So what is UEFI? UEFI is a specification. I know a lot of people think UEFI is an implementation; it's not. It's a specification, a gigantic, enormous document with lots and lots of interestingly written text that describes how to implement different interfaces. About 5% of it is really interesting and useful. Intel basically started this as EFI and then gave it to the open, started a consortium called the UEFI Forum, and the newer specifications are all called UEFI. So if I say EFI, forgive me, I really mean UEFI, because UEFI is everything that people use these days.
UEFI has a reference implementation, and that one's called TianoCore. If you've ever seen an EFI system, it's either TianoCore based or some homegrown implementation of a commercial vendor. I don't even know if AMI, for example, uses TianoCore. I doubt they do; they might use some parts of it. But TianoCore is the reference implementation that Intel basically uses to verify whether their spec is sane, and to make sure that people can develop against something that is widely available. So why do we need another implementation of the same thing? Why do we need to have U-Boot suddenly involved as well and add the same interfaces to yet another bootloader? I mean, what's the point? I see Tom grinning. I'm using that logo throughout, I just really found it very cute. So, TianoCore. The first and foremost reason for me personally, if you've ever looked at the code: it does not follow, how do you say this, it does not follow coding style guidelines that I embrace. How do you put this politically correctly? This is the most readable code I found in all of TianoCore, just so you're aware. At first I tried to find another function, which didn't even fit on the screen, which did basically the same thing as this U-Boot function, which follows just normal standard Linux coding style. I mean, this is readable code. It's not the most readable code ever, because we have to have these EFI entry and EFI exit macros to save a register. But apart from these, it's all just normal, linked-together, proper code, just the way you would expect it to look. So if you've ever wondered how EFI works, and maybe you got stuck on reading TianoCore code, try and read the U-Boot code, it's much easier.
Then one really big, amazing difference that I found while developing this: if you're a normal developer and you want to call something from one C file into another C file or module, something inside your own code base, what do you do? Usually you have a symbol and you call into it. Not so in TianoCore land. In TianoCore land, you first call a broker that gives you a handle that allows you to then do an indirect function call back into your other module, so that you build amazing black boxes around every single piece of code that you're writing. Yes. This makes everything incredibly hard to understand. If you're trying to follow the path of what you're actually calling, it usually stops at the point where you're trying to do that indirect reference to some other module, and you just have no idea where that comes from, because there's no direct linking inside the same code base. Whereas U-Boot, again, same as Linux, is one big monolithic thing that just gets linked together, which makes it much more readable and debuggable. You just attach GDB, you see function traces, everything just works the way you would expect it to. Last but not least, on the important pieces that make U-Boot very different from TianoCore: TianoCore is meant as the core. I mean, it's even "Core" in its name, right? It's meant as this small piece that implements a few reference systems, reference designs, so that it shows that it actually can run something, but it does not want to include every single board support in one code base. Instead, it wants you to fork it. It wants you to fork special board support into a different repository. Leif now finally went ahead, actually got fed up with that and created something called OpenPlatformPkg, where he has an amazing five boards, I think, supported now.
But that doesn't solve the problem that you still have to somehow merge code back all the time, and you don't have one code repository where you can develop interfaces in lockstep and keep things internal, because in TianoCore nothing is internal. They basically drove themselves into a corner where it's really hard to improve their code base. Whereas U-Boot, well, I'm just letting numbers speak, but that board support is not even remotely comparable to TianoCore. By enabling TianoCore, you basically get yourself onto a few systems out there. By enabling U-Boot, well, you conquer the world. It's a different league. So comparing those two, you can see that it makes a lot of sense to enable U-Boot, because I like the U-Boot code base. I really like to hack on it. It's very hackable, it's understandable. If you need to modify it, you can modify it. I see a lot of reason to have a standardized boot path that allows me to do my distribution the same way I always do it on other implementations of firmware, but in a hackable fashion, in a proper Linux-y kind of way. So what do UEFI interfaces look like? It was always abstract why you have these objects and things there. So let's take a look at objects. UEFI basically gives you objects. This is a disk object, a block object, and a CD object and a network object. And every object is aware of its parents, so it has a full parent and child hierarchy. This, for example, is an AHCI controller, and you have an AHCI controller object that you can talk to, that you can poke through to find the PCI bus of that AHCI controller and whatever. It's a full hierarchy with everything included. You can even have children there: you can have a file system object, or an object created by a file system driver attached to your block device again, and a partition layout object and whatever.
Each of these objects has callbacks and properties, fields, whatever your name of the day is, and that's true both for block as well as for network objects. So each of them has callbacks that you can use to basically send packets or read data, and they describe some layout, like what the sector size is. Now in U-Boot, on the other hand, we also have objects, but we don't have a tree, we don't have this hierarchy. We just have objects for specific end points, which is good enough; you don't really need to traverse trees externally. But we also have these callbacks. Every one of these objects basically knows what its size is. The block device, for example, knows its size, it knows sector sizes, it has callbacks to read things, it has callbacks to write things. It looks the exact same. So basically, if you leave away the hierarchy part, the actual objects that we care about are the same thing, just with a different semantic. And what you usually do when you have different semantics, basically different languages, is you just write translations between them. You can translate from U-Boot interfaces to UEFI interfaces very, very easily with really little code. You just implement the UEFI callback and call the U-Boot callback, and that's it. It might not have the same parameter names, but it's very, very little code. And all of that is in U-Boot upstream, called efi_loader; it's in lib/efi_loader. It basically just converts from one thing to the other. That's all there is to it. Very simple, very little code, really, if you want to look at it. Now we can basically talk EFI, but how do we get an EFI binary to even run? We now know how to expose interfaces to an application, but we need to somehow get an application running as well. And that's where bootefi comes into play. We already have things like bootz, bootm, booti, whatever, in U-Boot.
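The translation idea can be sketched in a few lines. Both structs below are simplified stand-ins, not the real U-Boot or UEFI definitions, and the backing device is a mock; the point is just that the "translation" is one thin callback calling another.

```c
#include <assert.h>
#include <string.h>

/* "U-Boot side": a block device with a read callback (simplified). */
struct ub_blk_dev {
    unsigned long blksz;
    unsigned long (*block_read)(struct ub_blk_dev *dev, unsigned long start,
                                unsigned long blkcnt, void *buf);
};

/* "UEFI side": an EFI_BLOCK_IO_PROTOCOL-like read hook (simplified). */
struct efi_block_io {
    struct ub_blk_dev *priv;
    long (*read_blocks)(struct efi_block_io *self, unsigned long lba,
                        unsigned long buffer_size, void *buf);
};

/* The whole translation layer: implement the EFI callback by calling
 * the U-Boot callback, converting byte counts to block counts. */
static long efi_read_blocks(struct efi_block_io *self, unsigned long lba,
                            unsigned long buffer_size, void *buf)
{
    unsigned long cnt = buffer_size / self->priv->blksz;
    return self->priv->block_read(self->priv, lba, cnt, buf) == cnt
           ? 0 /* EFI_SUCCESS */ : -1;
}

/* Toy backing device: a "read" fills the buffer with the start LBA. */
static unsigned long mock_read(struct ub_blk_dev *dev, unsigned long start,
                               unsigned long blkcnt, void *buf)
{
    memset(buf, (int)start, blkcnt * dev->blksz);
    return blkcnt;
}

static struct ub_blk_dev demo_dev = { 512, mock_read };
static struct efi_block_io demo_bio = { &demo_dev, efi_read_blocks };
```

An EFI payload calling `demo_bio.read_blocks(...)` never sees the U-Boot object behind it, which is exactly the trick lib/efi_loader plays for real devices.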
So now we also have one more command called bootefi. What you do when you want to use bootefi is, at first you have your memory, and you somehow need to get your binary into that memory from your storage, just like you do with the normal boot flow that you have with bootz, booti, whatever you want to call them, in traditional U-Boot. So you use a normal load command to take your payload and move it into RAM, done. Same as always in U-Boot. It's the same thing, really, everybody's just doing that. And now you can tell U-Boot: go ahead and execute that binary with this device tree that I'm giving you. You can also leave out the device tree if you don't want to pass one. At which point U-Boot goes in and runs, well, in this case it's Linux, right? It runs the Linux EFI stub, which then runs Linux, and you're booting off. If you're running this on a Raspberry Pi 3, which I just had lying around as a QEMU target, it basically looks like this. You would load your kernel image from storage. You load the device tree. You modify the device tree so that it contains a working command line argument field, so "fdt chosen" basically updates the current device tree. And then you boot it, and it boots up. It's running Linux now, and there we go, now it switched to graphics mode and it's just booting up. Obviously it doesn't find a root device, because we didn't pass it any. But it's the same as any other boot method you have in U-Boot. It doesn't differentiate between them, it looks the same. One really amazing thing that this allows us to do is that now Linux actually has a bi-directional interface to your firmware.
What Linux can do now is go in and talk to, for example, a random number generator driver in UEFI, which then allows it to relocate itself to a random location in memory and implement KASLR this way. So if you want KASLR support here, you right now need to have UEFI support. And I'm sure there are going to be more features like this coming, where Linux just more and more depends on having this bi-directional interface to firmware available. So having a bootefi path even in trivial cases does make a lot of sense, because it doesn't actually give you any downside over a booti path, for example. Now, unfortunately, as a distribution, I don't want to type "load something from SD card and bootefi that parameter there" all the time. So there was some really amazing work in U-Boot upstream called distro boot, like a couple of years ago, that gives us a standardized boot method. Distro boot just describes boot targets that it wants to boot from. You have this variable in your U-Boot environment. You can modify it if you really want to, you can just swizzle the targets around, change them, save, and then you boot from a different device. And for each of those devices, it tries to find a valid boot file that it can boot from. So in our case, we're trying to boot from a block device. It searches for an extlinux.conf, for a boot script, and for an EFI payload. This talk is not about extlinux or boot scripts, so we're only focusing on how the EFI boot is implemented. The EFI boot path goes and looks at your storage device. It tries to find whether you actually have one at first. And if you have one, it looks at your partition table, and it assumes that the first partition in the partition table is your EFI system partition. An EFI system partition, in case you've ever worked on an EFI x86 system, is usually just a FAT partition that stores your payloads, your grub for example, the thing I was referring to earlier.
So we're using that same partition, because in most cases that is your first partition. The UEFI specification allows you to have it in a different place, but we just ignore that part right now. So we take that partition, and U-Boot also has awareness of what your device tree is called. There's an $fdtfile environment variable that describes what the file name of the device tree for your specific board would be. So it searches for that file in standard locations, so that if your first partition, for example, is your root file system, which happens to have a /boot path, which happens to have a boot/dtb path, you could just poke your device trees in there and it would automatically find them and use them. That's the idea, at least. It also tries to search for our removable media path. You saw this thing earlier, where you have the boot order EFI files and then later on the removable path EFI files, and this is basically the ARM64 removable path, which is the one I'm just using as an example, but it also supports the other architectures. Yes, please. The question is: is that signed? Let me come back to that after we're done. Signing is its own topic; doable, not implemented. So that binary can then, for example, just be our friend grub, which distro boot executes, and there we go, we have booted. In a nutshell, on a Raspberry Pi 3 example again, because it was just there in QEMU, it looks like this: you have the autoboot timeout, then it searches for a file. It finds one, it boots into grub, grub loads all its files, shows your graphics, and, well, this is actually a genuine SP2 ISO booting on a stock Raspberry Pi 3 basically right now. Grub then can load a kernel, can load an initrd, can boot the kernel, can boot the normal operating system just the way it does on any other architecture out there. Same difference.
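The per-architecture removable media file names mentioned above are fixed by the UEFI specification; the ARM64 one is `\EFI\BOOT\BOOTAA64.EFI`. A tiny lookup sketch (the enum and function are illustrative, the path strings themselves are the spec's):

```c
#include <assert.h>
#include <string.h>

/* Architectures with a removable-media default boot path in the UEFI
 * spec (there are more; these are the common ones). */
enum efi_arch { EFI_ARCH_IA32, EFI_ARCH_X64, EFI_ARCH_ARM, EFI_ARCH_AA64 };

/* The fallback path firmware searches on each filesystem when no
 * configured boot entry works. */
static const char *efi_removable_path(enum efi_arch arch)
{
    switch (arch) {
    case EFI_ARCH_IA32: return "\\EFI\\BOOT\\BOOTIA32.EFI";
    case EFI_ARCH_X64:  return "\\EFI\\BOOT\\BOOTX64.EFI";
    case EFI_ARCH_ARM:  return "\\EFI\\BOOT\\BOOTARM.EFI";
    case EFI_ARCH_AA64: return "\\EFI\\BOOT\\BOOTAA64.EFI";
    }
    return 0;
}
```

A distribution image that drops its grub at that path on the first (FAT) partition boots on any firmware implementing the fallback, which is exactly what the universal image relies on.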
Not gonna show you all of it, it would take forever; it's all emulated on an x86 system. So we have SD cards implemented now, we can boot from them, so we're all good, right? Unfortunately, we distribute our distribution using ISOs. And a lot of people out there also want to do network boot, so we need to support these things as well. So how does this boot work for those? ISOs are pretty simple. There was a hack from, I don't know, I think it definitely was before the 2000s, from some PowerPC folks, I don't know if it was called Freescale at the time already, who figured: you know, my ISO is really just this big file which has El Torito entries in it which contain my payloads to boot from. It can contain multiple of these, which means I really just have this thing that can be segmented into multiple other pieces, which kind of reminded them of a partition table. So what they did is they implemented an amazing module in U-Boot that just exposes an ISO as a partition table, so that each partition is an El Torito image inside of that ISO. Which means our first partition now is the EFI system partition, which we can load our device tree and our grub from, done. All of that has already been in there for ages, so it's stable code by now. So ISO boot works, awesome. Actually, the demo I just showed you was ISO boot. Now the only thing we have left is network boot. And network boot is interesting. There are two different network boot methods in distro boot: PXE boot and DHCP boot. PXE boot is extlinux-specific, so I'm not gonna go into that. DHCP boot, however, is basically what you would usually refer to as PXE boot or as network boot. With DHCP boot, your U-Boot goes and sends a DHCP request to a DHCP server.
And along with that it also passes a hint, a token, that says: hey, I'm an EFI system running on this architecture, using the vendor class identifier in the DHCP request. So the DHCP server can figure out which file it should return. It returns a file name property in its ack, which we can then use to download that file from a TFTP server and, well, execute it, right? Done. Too fast? Good? All right, awesome. So network boot is all settled and done and upstream and works. It basically works the same way as it does on an actual TianoCore system. So if you have a working PXE boot, network boot implementation for EFI in your network, whether TianoCore or AMI firmware or whatever, it will just pick it up and work. So what are EFI tables? I described earlier on that we have console, boot services, runtime services, and tables. We covered all of the other pieces. We didn't cover the console, but that's obvious. We covered what boot services are, that's all the objects. We covered what runtime services are, that's your NVRAM, runtime configuration, reset and such. So what are tables? Tables are basically just pointers to binary blobs with a GUID prepended to them. That's all tables are. Tables are like this list of things that you just pass to the operating system and say: go and eat it. You can have, for example, a device tree in there, you can have ACPI tables in there, you can have SMBIOS tables in there. There are a bunch of tables that just describe things that you pass to the operating system as blobs. Now, U-Boot does not today implement ACPI. If anybody wants to implement that in a U-Boot environment, be my guest. I don't know why you would want to do that, but it's possible, right? You can describe your hardware using ACPI tables. If you want to test something, or you want to verify whether your system works with both, it's doable.
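Since tables really are just (GUID, pointer) pairs, the way an OS finds, say, its device tree is a plain list scan. A minimal sketch, with a made-up GUID and a fake blob standing in for the real registered device tree table GUID:

```c
#include <assert.h>
#include <string.h>

/* A configuration table entry: a GUID naming the blob, and the blob. */
struct efi_cfg_table {
    unsigned char guid[16];
    void *table;
};

/* What an OS does with the table list: scan for the GUID it knows. */
static void *efi_find_table(const struct efi_cfg_table *tables,
                            unsigned int count, const unsigned char guid[16])
{
    for (unsigned int i = 0; i < count; i++)
        if (!memcmp(tables[i].guid, guid, 16))
            return tables[i].table;
    return 0;
}

/* Demo data: one fake "device tree" blob under a made-up GUID. */
static unsigned char demo_fdt_guid[16] = { 0xb1, 0xb6, 0x21, 0xd5 };
static char demo_fdt_blob[] = "not a real flattened device tree";
static struct efi_cfg_table demo_tables[] = {
    { { 0xb1, 0xb6, 0x21, 0xd5 }, demo_fdt_blob },
};
```

Device tree, ACPI, and SMBIOS each have their own well-known GUID in the spec; the OS just does this lookup for each one it cares about.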
However, one thing that is actually really cool with those tables is that you're moving your device tree information all the way back into your firmware, right? Because firmware now has to populate your device tree before your boot loader goes in and selects anything. So this does not leave any excuse for device trees that are kernel-specific, because you can only have one. You can't have different device trees depending on which kernel you choose. It actually is a good thing, I think, because it finally pushes device tree people into being compatible, yay. Another really amazing aspect is that U-Boot can also use device trees to configure itself. So one thing that you can do, and which is implemented today, is that you can just take U-Boot's own device tree, pass that one into the table, which then gets passed into Linux, and so Linux reuses U-Boot's device tree. So you only have a single device tree to maintain. One, not five. One. It's much easier, and it's much more in the spirit of the original inventors. So now that we've heard what's all in there, there's a lot of stuff missing, obviously, because you can't always invent the whole world at once. So what do we have missing in our implementation? Well, the first and foremost thing is NVRAM support. Usually when you have runtime services running, Linux can use them to access your NVRAM while it is running, which means you need to have a storage device dedicated to firmware that is not in use by Linux. Implementing that generically in U-Boot turned out to be really hard, since a lot of these devices don't even have two storage devices that you can put things onto; it depends on your price point. So I basically did not have any device there that I could store anything onto. Also, we don't have that interface. Sorry.
We basically just don't have a boot order that you can change from the operating system, which, in an embedded world, you don't really care that much about, because you have a static boot order that you pre-configure anyway and you're good. But if you want to implement it, it's definitely doable. All the stubs are there; all the functionality to do runtime services and implement that is in U-Boot, all generic and doable. You just need to find a device to carve out, make sure that Linux doesn't have access to it, and then implement the pieces necessary to poke data into it, and then you're good. Another thing that's missing: well, so we have this object thing, this bucket of lots of objects that we provide to EFI binaries. Now, usually in a UEFI world, what you do is you initialize the firmware, which creates all those interfaces and all those protocols and objects, and then you run some other EFI binary, which, for example in this case, is a Btrfs driver. The Btrfs driver can then go in and add itself to that bucket of objects, so that the next binary that you run can use that object and do something with it. In U-Boot, we are recreating that bucket on every bootefi execution. So yes, you can load your Btrfs driver, you can run it, it will add itself to the object bucket, but the next time you try to execute anything else, it's going to get removed, because we're recreating that bucket. This is one thing that is going to change sooner or later, when we finally start to merge the U-Boot object model with the UEFI object model, so that every U-Boot object internally becomes a UEFI object too. Another thing that's missing: you have these objects, but apart from actually useful objects, I would say things that describe hardware and features that you want to talk to, you also have libraries in there.
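For context on the boot order mentioned above: in the UEFI spec it is nothing exotic, just a `BootOrder` variable holding an array of 16-bit indices, where each index names a `Boot####` load-option variable spelled in uppercase hex. A small sketch of that naming convention, with helper names of my own choosing:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Each BootOrder entry is a uint16_t index; the matching variable is
 * named "Boot" plus four uppercase hex digits, per the UEFI spec. */
static void boot_option_name(uint16_t index, char *out, size_t len)
{
    snprintf(out, len, "Boot%04X", index);
}

/* Walk a BootOrder array and report the first option's variable name,
 * which is what firmware tries to boot first. */
static const char *first_boot_option(const uint16_t *order, int count,
                                     char *buf, size_t len)
{
    if (count <= 0)
        return NULL;
    boot_option_name(order[0], buf, len);
    return buf;
}
```

An OS-side tool like `efibootmgr` changes the boot order simply by rewriting the `BootOrder` array through the runtime variable services, which is exactly the interface U-Boot would need persistent storage for.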
So you have an object that you can query that gives you a blob of function pointers that do things like string length and string compare, if you really want to have those. Now, the EFI shell uses those. I don't know why they don't just link those things into their own binary, but apparently it was really cool to reuse code from TianoCore. We don't implement those protocols, so we don't have the EFI shell. If you really cared, you might want to implement them; I never cared about the EFI shell, so we don't have it. You have to use your own EFI blob in the end anyway, so the EFI shell isn't used that much. In case you ever saw it at all: it's not a great shell. So why do you want to do all of this? I mean, you came here, you went all the way to look through these slides. Why would you even remotely consider using the UEFI boot path in U-Boot? Well, there's a couple of reasons why it's interesting to at least entertain the idea. The most important one to me is the talent separation. You can have the people who work on your hardware specifics in a completely different department, company, whatever; you can actually separate between people who care about your hardware specifics and people who care about your operating system specifics. They can be separate entities, which, if you have a meshed-together "my boot loader and my kernel and everything is one thing" approach, is really hard to achieve. Also, one thing that I see happening a lot is that people add their value-add on the firmware side, right? So if you want to, say, implement Btrfs boot support, you would go in and hack a Btrfs module into U-Boot, or you would do a lot of other fancy things down there. So you have a lot of value-add on the firmware side that really is not firmware-specific or board-specific, but, not necessarily operating-system-specific either, approach-specific.
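The "library as a protocol" pattern described above looks roughly like this in C: instead of linking string routines into its own binary, a client locates a protocol by GUID and gets back a struct full of function pointers. This is an illustrative sketch; the struct and function names are made up and do not match the spec's actual Unicode Collation protocol layout.

```c
#include <assert.h>
#include <stddef.h>

/* A "library" protocol: nothing but callbacks handed out by firmware. */
typedef struct {
    size_t (*str_len)(const char *s);
    int    (*str_cmp)(const char *a, const char *b);
} string_lib_protocol;

static size_t lib_strlen(const char *s)
{
    size_t n = 0;
    while (s[n])
        n++;
    return n;
}

static int lib_strcmp(const char *a, const char *b)
{
    while (*a && *a == *b) {
        a++;
        b++;
    }
    return (unsigned char)*a - (unsigned char)*b;
}

/* The single instance the firmware would register in its object bucket;
 * a client would find it by GUID rather than referencing it directly. */
static const string_lib_protocol string_lib = { lib_strlen, lib_strcmp };
```

The upside is that the code lives in firmware once; the downside, as noted, is that any binary depending on such a protocol only runs on firmware that bothers to provide it.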
Like, you want to do something generically across your product line, but you don't want to redo it for every single board. You want to have fallback boot on every board you have, right? But you don't necessarily want fallback boot implemented on the firmware side, because it looks the same on every system that you have. So you really want to have it on your operating system side, so you don't duplicate work. What happens instead is that people start out with U-Boot and then modify it until it doesn't look like U-Boot anymore. Or they go in and invent amazing scripts that are five kilobytes long, that nobody can review or read anymore, and it just becomes a maintenance mess, right? If you move those over to the operating system side, it usually becomes much more maintainable and scalable. You can have even fancier value-add with an EFI application. You can do graphics and all of that, right? You have full interfaces for it. You can swap the hardware out from underneath your value-add. So if you suddenly want to go to a system with stock AMI firmware, well, there you go, just do it, right? It's the same interface. You're not bound to always run on U-Boot. You can switch between different systems. If you suddenly need to switch to a different architecture, well, go ahead and do it. It's all generic, right? That same code runs on your x86 system, on your ARM system. I've even seen a MIPS port of UEFI. You could do it there as well if you wanted to. And one day, when I get around to it, I'm going to write a mainframe port. You can, for example, replace even more things. You can replace your operating system with a stock operating system, which is where I come into play again. This is what basically pays my bills at the end of the day. If you guys have the chance to use our code, it's a good thing for me, because it pays my bills. All right.
Or, if you really wanted to, you could run non-Linux there, because it's still the same interfaces. If your value-add runs before the FreeBSD loader, you just run the FreeBSD loader afterwards and off it goes. You're not bound to specific interfaces on how things work left and right. You finally have a chance to build generic things on top of each other. Or, if you really wanted to, you could use U-Boot as your operating system and just build a value-add that only ever shows a cat running over a rainbow, so that you never even run an operating system. You could write your bare-metal application in an architecture- and platform-agnostic fashion. So why do you have to re-implement disk drivers to preload your five megabytes of payload on every single board? Don't do that. Load them generically and then go into your busy loop that does whatever. There's no point in reinventing the wheel every single time, right? So with that, let me come to the first question. Secure Boot: UEFI specifies exactly how Secure Boot works. It can sign and verify every single piece of the chain. It's just not implemented in U-Boot right now, because I personally don't care all that much about the Secure Boot chain. But that's just a personal preference. All of the code is basically trivial to write. All you need to do is implement the security protocol and make sure that bootefi also calls into it. The rest will just automatically work, because every boot loader that is certified as Secure Boot compatible is already going to use your UEFI security protocol to verify that whatever you load is trusted. You could probably even go and call U-Boot's internal verification code to check things and have all of Secure Boot implemented in, I don't know, 100 lines of code, 200. It should be trivial if you really wanted to. More questions? Sorry.
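The control flow of the security hook just described can be sketched like this. Note the heavy caveats: a real Secure Boot implementation verifies an Authenticode signature against the db/dbx key databases; the FNV-1a checksum below is a toy stand-in purely to show where the verification call sits, and all function names here are invented for illustration.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Fake "digest" (FNV-1a hash), standing in for real signature checks. */
static uint32_t image_digest(const uint8_t *image, size_t len)
{
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < len; i++)
        h = (h ^ image[i]) * 16777619u;
    return h;
}

static int verify_image(const uint8_t *image, size_t len, uint32_t expected)
{
    return image_digest(image, len) == expected ? 0 : -1;
}

/* What a bootefi-style loader would do before jumping into an image:
 * call the verification hook first, refuse to start the image on failure. */
static int start_image(const uint8_t *image, size_t len, uint32_t expected)
{
    if (verify_image(image, len, expected) != 0)
        return -1;      /* the spec calls this EFI_SECURITY_VIOLATION */
    return 0;           /* verified: would now jump to the entry point */
}
```

Because certified boot loaders already expect to be verified this way, wiring the hook into the image-start path is the whole integration; the loaders themselves need no changes.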
The question is: how do we get from a firmware that is used to going away completely once you boot into your kernel, to a firmware that stays resident to provide runtime services? What we have is a marker in the code, __efi_runtime, that you put on functions and variables that need to stay available during runtime services. These are marked specially in the EFI memory map that gets passed to Linux, so that they don't get overwritten later on. The reason we don't just mark everything as runtime is that Linux can relocate your runtime code to any other location it likes in memory. So you basically need runtime relocation for all code that is runtime-service capable. We have this magic there where basically every variable that you try to dereference needs to go through a special callback, so that we can find it again. But just look at the code; it's much more self-explanatory, like it was for me. Question is: does it move it in physical or virtual address space? It can do anything it likes. It basically tells you: you were at address 5000 before, now go and play at one gigabyte. It can do anything; it just relocates you. So Tom just said that, yeah, right, the answer is yes, we have special persisting code, similar to the PSCI code, which also re-persists itself, because it has the same problem: it's running in EL3 and stays alive during the lifetime of the operating system. Which, by the way, is how I would implement any runtime service. Any time I had to do a runtime service that is more complex, I would just make it a PSCI call and then call into something that at least is aware that it stays in the same address space. It's much easier code to write. Yes? [Audience comment, partly inaudible: vendors mess up their ACPI definitions, and then the kernel has to carry extra quirk tables to correct the errors, if you can even figure out what their hardware was.]
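U-Boot really does use `__efi_runtime`-style markers for this; a minimal sketch of the mechanism, assuming GCC/Clang section attributes on an ELF target (the section names below are illustrative, not copied from U-Boot's headers):

```c
#include <assert.h>

/* Tagged code/data lands in dedicated linker sections, which the EFI
 * memory map then reports as runtime regions Linux must preserve. */
#define __efi_runtime      __attribute__((section(".text.efi_runtime")))
#define __efi_runtime_data __attribute__((section(".data.efi_runtime")))

static __efi_runtime_data int reset_reason;

static int __efi_runtime do_reset(int reason)
{
    /* Runtime code may only touch runtime-tagged data: once the OS has
     * taken over, everything outside these sections may be gone. */
    reset_reason = reason;
    return 0;
}
```

Keeping the runtime sections small is what makes the relocation problem tractable: only the code and data inside them must survive, and be reachable after, the OS's SetVirtualAddressMap-style remapping.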
[Audience comment, partly inaudible: if vendors released correct tables, it would work. On large-scale ARM systems with many cores, this approach makes a lot of sense as you scale up. But I certainly hope you don't change U-Boot to make it exclusively UEFI, because on a small system the defined hardware is little more than a memory manager, a power manager, maybe some non-volatile RAM, a clock; the rest of the hardware is all user-defined.] Okay, so the comment was basically that, A, people are afraid of using ACPI, which I completely concur with, and B, why would you want all of the overhead of this if you have a really small system? If I got that right. B is very easy to answer. I don't have current numbers, but back when I did the patches, the additional code overhead of having bootefi support was 10 kilobytes of compiled code. 10 kilobytes on a 500-kilobyte binary. Negligible. It's basically less than the difference between different compiler versions. Having that support in there means almost no code overhead, definitely no runtime overhead if you don't use it, and almost no runtime overhead even if you do use it. So overhead-wise, there's basically nothing there. And ACPI: I personally don't see anybody implementing ACPI on this. Why would you? And even if you do implement ACPI, you would implement it as a complement to a device-tree-based world. We have this beautiful flow where U-Boot can have the same device tree as the kernel. Why would you want to break that up, really? I mean, this makes a lot of sense. Having ACPI is just an option. I don't like to lock people into anything at all.
It's very frustrating when I see people dictating: you have to use UEFI, whatever. And they don't mean UEFI, they mean this specific implementation of UEFI by this specific company, right? Not calling names here. This is built around the concept of choice, right? If you want to use it, go ahead and use it. I see very little reason not to. But if you do see a reason not to use it, go ahead and don't. I don't mind. But keep in mind that there's almost no overhead to it. So we are one minute over. If you're really quick. Okay, so the question is PSCI versus UEFI runtime services. PSCI lives in EL3. So you basically have EL3, which in x86 speak is your SMM. Then you have EL2, which is the hypervisor, EL1, which is your system mode, and EL0, which is user space. The UEFI runtime services live in the same scope that your kernel lives in, so EL1 or EL2, depending on your kernel. PSCI lives entirely in EL3. That's the difference. All right, awesome. So we went two minutes over. Thanks a lot.