Okay, I think we can get started. I want to talk a bit about why having open source firmware on embedded systems matters a lot. I won't go into specific projects in this talk, but I'll give you an overview of which functionality is now moving down into firmware and why we should care whether it is open source.

A bit about me: some of you might already know me from talks I've given at previous installments of this conference. I work at Pengutronix, where I'm a member of the graphics and kernel team. We're an open source consultancy doing work mostly for industrial customers. As part of the graphics team I have to deal with a lot of the details of the whole system, because graphics hooks into pretty much all parts of the system to really work well. And I'm mostly working on projects with really long maintenance timelines, so we don't get to just forget about missing features or bugs, because our customers will hunt us down to fix them.

Most of the terms I use on my slides are ARM specific, but the general concepts apply across other architectures as well. If you're running a MIPS, PowerPC, or RISC-V system, it's mostly the same ideas that apply to all of them.

On embedded systems, where there's no such thing as a legacy BIOS, the firmware was traditionally really minimalistic, to the point that we only ever got around to calling it a bootloader. If you have something like U-Boot in mind: it has always done a bit more than loading the kernel, but in essence what it does is really minimal. It sets up the system, the CPU, some interconnects, and then it loads the kernel and fully passes control to it. So in a traditional system there's basically no interaction between the firmware and the Linux kernel at runtime. The kernel might depend on some of the setup the firmware did before the kernel started, but after that there's no runtime interaction with the firmware anymore.

The obvious upside of this is that the Linux kernel is in full control of what happens on the system. There are no runtime services from firmware that would interact with anything on the system; everything running on the system is under the control of the Linux kernel. That's a really great thing if you think about it, because, at least if we're talking about upstream Linux, the standards applied to the Linux development process are really high, so most of the things implemented in Linux are in pretty good shape. Having all the system functionality controlled by the Linux kernel obviously also helps with debugging, because you only have to deal with one code base. If things go wrong, you can look at all the pieces that are doing something in your system and how they're connected, because it's all in one place, basically.

Another upside is that the update story was pretty easy: if there's a bug in the system and you fix it, you roll out a new Linux kernel to your device and you're done. Updating the kernel is something most people have at least thought about how to do on their device.
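Going back to the boot flow for a second: to make the "set up, load, fully hand over" model concrete, here is a toy sketch of such a minimal bootloader on arm64. The board_*() and load_image() helpers and the addresses are hypothetical placeholders; the only real convention used is the arm64 boot protocol, where the kernel entry point expects the device tree address in the first argument register.

```c
#include <stdint.h>

/* Hypothetical board-specific helpers, declared here for the sketch. */
void board_init_clocks(void);   /* minimal CPU/interconnect setup */
void board_init_dram(void);
void load_image(uint32_t flash_offset, void *dest, uint32_t size);

#define KERNEL_LOAD_ADDR 0x40200000UL   /* example addresses */
#define DTB_LOAD_ADDR    0x4f000000UL

typedef void (*kernel_entry_t)(uint64_t dtb, uint64_t x1,
                               uint64_t x2, uint64_t x3);

void bootloader_main(void)
{
    board_init_clocks();
    board_init_dram();
    load_image(0x100000, (void *)KERNEL_LOAD_ADDR, 32u << 20);
    load_image(0x080000, (void *)DTB_LOAD_ADDR, 1u << 20);

    /* arm64 boot protocol: x0 = physical address of the device tree,
     * x1-x3 = 0. After this jump the bootloader never runs again. */
    ((kernel_entry_t)(uintptr_t)KERNEL_LOAD_ADDR)(DTB_LOAD_ADDR, 0, 0, 0);
}
```

Everything after that final jump belongs to Linux, which is exactly why the update story above only ever involves the kernel.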
Updating firmware, on the other hand, is a lot harder on many devices, and you probably get scared away from doing it, because most of those systems are running remotely, so a firmware update always carries the risk of bricking the system, at least if your hardware doesn't provide some really clever mechanism to make it reliable.

So what's bad about this model of having everything in the Linux kernel? An obvious argument is that it puts a lot of complexity into the Linux kernel. All your system operations are controlled by Linux, so Linux has to deal with all the voltage regulators on your board, with all the clocks, and whatnot. A lot of Linux subsystems are involved in controlling your system, and that adds quite a bit of complexity. But I don't think that's really a good argument, because the complexity isn't there because Linux kernel developers just love complexity and love adding stuff to the system; it's there because the systems actually are complex, so the software controlling them also needs a certain degree of complexity, and that complexity has to go somewhere. If you're a Linux kernel developer, you probably prefer the complexity being in Linux as opposed to being hidden away from you in firmware. It's still complex either way, you just don't see it.

But then there are some features that are hard to implement in a generic kernel, like early CPU bring-up. There are always a bunch of workarounds you need to apply, or bits you need to toggle really early after CPU startup, to make the system work reliably, because there are always bugs in the hardware that don't get fixed and need software workarounds. Some years back, the ARM world shifted away from having a specific kernel build for one system, where you had to recompile your whole kernel to target a specific board, to a model that is easier to maintain: a single kernel image that can run on multiple ARM sub-architectures. And some of the workarounds for early CPU startup need to be applied so early that you generally don't have all the kernel interfaces available that you're used to. So differentiating between the different systems and applying the right workarounds to the right CPU gets really hard. Moving this somewhere that specifically knows which system it is running on would probably be a lot easier.

And then virtualization kicks in. If you're doing virtualization, you probably want to run the same kernel in the virtual machine that's running on your bare metal system. But suddenly you get a whole set of different interfaces for the same tasks. Bringing up a CPU on bare metal is a lot of low-level work implemented in the kernel, but in a virtualized setup none of your guests should talk directly to hardware; you want to talk to a hypervisor that provides all those services, like bringing a CPU online, migrating stuff, or even shutting down the machine. So now you have to deal with two different interfaces for doing the same thing on the same system. When people realized that this is a problem, the ARM world introduced PSCI, the ARM Power State Coordination Interface.
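Jumping ahead a little to make this concrete: on arm64, a PSCI call is just a secure monitor call with a function ID and arguments in the first few registers. Here is a minimal sketch of the CPU_ON call, using the function ID from the PSCI 0.2 specification, with all error handling elided:

```c
#include <stdint.h>

/* CPU_ON, SMC64 calling convention, as defined in the PSCI 0.2 spec. */
#define PSCI_0_2_FN64_CPU_ON 0xC4000003UL

/* Minimal sketch of a PSCI call from an arm64 bare metal kernel. The
 * "smc #0" instruction traps to whoever installed a handler for it:
 * firmware at EL3 on bare metal, or a hypervisor such as KVM, which
 * can intercept the call for a guest. */
static long psci_cpu_on(uint64_t target_mpidr, uint64_t entry_point,
                        uint64_t context_id)
{
    register uint64_t x0 __asm__("x0") = PSCI_0_2_FN64_CPU_ON;
    register uint64_t x1 __asm__("x1") = target_mpidr;
    register uint64_t x2 __asm__("x2") = entry_point;
    register uint64_t x3 __asm__("x3") = context_id;

    __asm__ volatile("smc #0"
                     : "+r"(x0)
                     : "r"(x1), "r"(x2), "r"(x3)
                     : "memory");

    return (long)x0;    /* 0 on success, negative PSCI error otherwise */
}
```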
What PSCI does is take a really narrow set of things that are important for virtualization, like bringing up a CPU or shutting down the system, and put them behind an abstract interface that looks the same from a virtual machine and from the bare metal system. The interface is trap based: on ARM it's a secure monitor call, and a hypervisor can intercept that trap. So it really makes virtual machine handling and bare metal handling look the same on the same system, which is great. Just to give you an idea: if you're bringing up a CPU in an SMP system on the bare metal kernel, you just do a secure monitor call, firmware has installed a handler for this trap, and it takes care of all the ugly low-level details of CPU bring-up. If you do the same thing from a virtual machine kernel, the hypervisor running on the bare metal system, KVM for example, can intercept that call and just emulate the operation: bring up a virtual CPU, spawn a thread, whatever. So it really looks the same.

The obvious downside is that you need to move functionality that was previously in the bare metal kernel down into firmware. With PSCI it's not really that bad, because it's a really targeted interface for a really small subset of kernel functionality. But still, hardware vendors that needed to implement firmware for their systems got this wrong. So there's Trusted Firmware now, which basically started out as a framework to help vendors implement PSCI on their platform in a standards-compliant way. It was started by ARM and is now a Linaro project, so it's not necessarily ARM-only anymore. But here's the first catch: this project is BSD licensed. While that is a really good thing for bringing hardware vendors on board, because there are no barriers for them to use this framework to more easily implement a standards-compliant PSCI firmware, it also allows them to not give you the source code for those firmwares anymore. While I'm not aware of many vendors that don't open source their Trusted Firmware part, it is an actual possibility: a vendor can close this down, have a chunk of really early system initialization and ugly details hidden away in this firmware, and if things go wrong, a Linux kernel developer has no way of improving things.

And then PSCI itself comes with a set of problems. While it really solves the problems that VMs have, in some corners the interface just collides with the real world, as always happens when you design an abstract interface and then have real hardware implement it, or not implement it cleanly. One struggle is shared registers. There are systems on chip that have the power controls for all the power domains on the system implemented in a single register; why spread it out, from a hardware perspective? But now you have a single register where the bits for CPU power control sit right next to the bits for device power control. With PSCI, we're moving the CPU part down into firmware, but device power control is still completely under the control of the Linux kernel. So now you have two separate entities trying to access the same hardware to do their thing, and I think everyone in the room can see that this is a problem.
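Here is an illustrative sketch of that problem. The register layout is invented for the example, but the pattern is real: two independent read-modify-write sequences on the same register, with no lock that both sides know about.

```c
#include <stdint.h>

/* Invented layout: one power control register shared by firmware and
 * the kernel. Bit 0 belongs to PSCI firmware, bit 8 to a Linux driver. */
#define PWR_CPU1_ON (1u << 0)
#define PWR_GPU_ON  (1u << 8)

static volatile uint32_t *const pwr_ctrl =
        (volatile uint32_t *)0x30390000;  /* made-up MMIO address */

/* Firmware side, inside its PSCI CPU_ON handler: */
void fw_power_up_cpu1(void)
{
    uint32_t v = *pwr_ctrl;          /* read ...            */
    *pwr_ctrl = v | PWR_CPU1_ON;     /* ... modify and write */
}

/* Linux side, in a power domain driver, possibly at the same time: */
void linux_power_up_gpu(void)
{
    uint32_t v = *pwr_ctrl;
    *pwr_ctrl = v | PWR_GPU_ON;
    /* If these two sequences interleave, one update is silently lost. */
}
```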
You need some way to interlock between these two entities that are totally separate, and that gets ugly pretty fast. You would have to introduce some kind of interlocking between firmware and the kernel, with system-specific knowledge, and then all the upsides of PSCI are gone, because you have system-specific knowledge in your bare metal kernel again. Or think about external interfaces in your system: say your system on chip has power rails for different things, provided by some kind of power management IC (PMIC), and this is all controlled over I2C. Now if you try to bring up a CPU, maybe you need to enable one of those power rails because it was switched off to save power. Suddenly firmware needs to talk to this PMIC, over a shared bus that's probably mostly under the control of Linux. That's not really a good solution; you could introduce all kinds of weird interlocking between Linux and firmware, but that gets insane quite fast. So, as I just said: a lot of current hardware simply isn't designed for the kind of separation between tasks that PSCI would need for a clean implementation, and it might not even be possible once external board components are involved.

The only sane way around this is, naturally, to move more stuff down into firmware. In the ARM world, ARM introduced SCMI, the System Control and Management Interface, as a standard for this, but there are lots of vendor-specific interfaces doing basically the same thing, because chip vendors had this problem long before ARM got around to putting a standard around it. To get around all of those resource collisions, where Linux had to use resources that are under firmware control, we're moving all this stuff down into firmware. Device power and performance states move to firmware, so you just ask firmware to power on your devices or to put them into a certain performance state. And performance states also need clock control to even be able to change the performance state, so a lot of that moves down into the firmware as well.

That could be a good thing, because you only have to deal with a specific set of defined interfaces to control all of this, and the Linux implementer's work gets much easier: you have a standard interface, you talk to it, and firmware handles all the ugly details that need to be done on your specific system. But firmware also gets much, much more complex, and suddenly there are a lot of runtime interactions between your firmware and your Linux kernel. Every time you power on a device or change a clock or something like that, you call into firmware, pass control to it, let it do its thing, and hope for the best. Because of these interactions between firmware and hardware, things can get pretty hard to reason about. If you're hunting down a system malfunction, it's no longer as easy as looking at a single code base in a single place. Now you're looking at the Linux implementation and what it asks the firmware to do, and then at what the firmware is actually doing, or supposed to be doing, at least if you're able to look at the firmware at all. If it's all closed source, a lot of things get really hard to fix, because you have to guess what the firmware is doing.
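To give an idea of what such a runtime interaction looks like, here is a heavily simplified sketch of an SCMI-style "set clock rate" request over a shared-memory mailbox. The protocol and message IDs (clock protocol 0x14, CLOCK_RATE_SET 0x5) are from the SCMI specification, but the channel layout below is reduced to the essentials and not byte-exact, and ring_doorbell() is a hypothetical mailbox kick.

```c
#include <stdint.h>

/* IDs from the SCMI spec: clock management protocol, CLOCK_RATE_SET. */
#define SCMI_PROTO_CLOCK     0x14
#define SCMI_CLOCK_RATE_SET  0x5
#define SCMI_HDR(proto, msg) (((uint32_t)(proto) << 10) | (msg))

/* Reduced view of an SCMI shared-memory channel. */
struct scmi_shmem {
    volatile uint32_t channel_status;   /* bit 0: channel is free */
    uint32_t flags;
    uint32_t length;                    /* header + payload, in bytes */
    uint32_t msg_header;
    uint32_t payload[4];
};

extern void ring_doorbell(void);        /* hypothetical mailbox kick */

static void scmi_clock_rate_set(struct scmi_shmem *shm,
                                uint32_t clock_id, uint64_t rate_hz)
{
    while (!(shm->channel_status & 1))
        ;                                /* wait for a free channel */

    shm->msg_header = SCMI_HDR(SCMI_PROTO_CLOCK, SCMI_CLOCK_RATE_SET);
    shm->payload[0] = 0;                 /* flags: synchronous request */
    shm->payload[1] = clock_id;
    shm->payload[2] = (uint32_t)rate_hz; /* rate, lower 32 bits */
    shm->payload[3] = (uint32_t)(rate_hz >> 32);
    shm->length = sizeof(shm->msg_header) + sizeof(shm->payload);

    shm->channel_status &= ~1u;          /* mark channel busy */
    ring_doorbell();                     /* hand control to firmware */

    while (!(shm->channel_status & 1))
        ;                                /* firmware marks it free again */
}
```

Note how the kernel ends up busy-waiting on the firmware for something as mundane as a clock change; that is the "pass control and hope for the best" part.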
So we're going from a model where everything is under the control of Linux, and really easy to debug because you can look at everything and the license makes sure you always can, to basically being at the mercy of the firmware vendor, hoping for the best that they open source their stuff.

Now you could take a step back and say: but I'm an embedded systems developer, I don't even want to run virtualization, and much of what we just pushed down into firmware is driven by the virtualization use case. If I decide not to care about virtualization, can I gain back all this control and put it back into Linux? Maybe. If nobody who uses your specific system cares about virtualization, you're probably fine and you may be able to implement all of this in Linux. But if anybody does care about virtualization, you now have two different paths in the kernel for controlling the same stuff on the same system, and kernel developers are not really happy with that, because it's duplication and just adds complexity.

And on more modern systems, you probably won't get around it anyway. So far this was just an arbitrary, software-defined boundary: we moved stuff into firmware running on the same processors that your Linux system runs on. But now we're entering an era of more complex systems on chip that are really asymmetric in what they do. A lot of the newer systems look like this: you have an application processor, where your Linux system runs, and then you have other computation units, maybe smaller, maybe even the same size as your application processor cores, that can be used to offload things the application processor and Linux are not really good at, like running real-time tasks. But those cores usually don't all have dedicated resources to talk to the outside world, because that would be expensive and nobody does that. Instead you have a set of shared system resources that you can partition between the different cores. In a typical system, maybe your application processor talks to the UART for the Bluetooth connection, and your real-time core talks to the SPI controller, maybe to drive a high-speed analog-to-digital converter and do some computations on its output. And while you could partition the peripherals directly to the cores and be done with it, there are more shared system resources below that. Maybe the clocks driving the peripherals on both sides are sourced from the same PLL on the system. So now you have a clock controller that needs to be controlled by both of them: Linux is just one user of the clock controller, next to the system running on the real-time core.

The solution most chip vendors chose is to put in yet another processor and run all those system control tasks there. So what previously was a software-defined partitioning, with SCMI, now moves onto a dedicated system control processor, and your Linux system needs to talk to it. There's no way you could move the functionality of that system control processor up into the Linux system, because then you would need all kinds of weird connections between the cores, to talk to each other and make sure the clocks stay on while the real-time core is using them, and so on.
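As a toy illustration of why that dedicated processor helps, here is a sketch of how a system control processor might arbitrate a shared clock between masters that know nothing about each other. Everything here, including pll_enable()/pll_disable(), is hypothetical.

```c
#include <stdint.h>

#define NUM_CLKS 8

/* Hypothetical low-level helpers for the shared clock hardware. */
extern void pll_enable(unsigned clk);
extern void pll_disable(unsigned clk);

static uint8_t clk_refcount[NUM_CLKS];

/* Called from the mailbox interrupt handler, for requests coming from
 * either master: Linux on the application processor, or the RTOS on
 * the real-time core. The refcount is the whole point: the clock stays
 * on as long as anyone still needs it, without the two operating
 * systems having to coordinate with each other directly. */
void scp_clk_request(unsigned clk, int enable)
{
    if (enable) {
        if (clk_refcount[clk]++ == 0)
            pll_enable(clk);
    } else {
        if (--clk_refcount[clk] == 0)
            pll_disable(clk);
    }
}
```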
This is a nice solution to the problem, but once you move all this stuff onto a separate processor, the temptation to not care about open-sourcing that part gets even bigger. So the main takeaways from what I've presented here: firmware taking over parts of the system control that have traditionally been under the control of the Linux kernel is here to stay. With more modern systems there's no way around it, and we as a community have to deal with that. But there's also a shift in the incentives for chip vendors to actually provide this stuff as open source. As long as it was in the Linux kernel, the license simply demanded that things be open source, and there's a really big incentive for chip vendors to use the Linux kernel, because there's just so much functionality in there that they want for their systems and their customers, so they stomach the cost of pushing all this stuff out into the open. But with a separate processor or firmware image, the incentive is not as high anymore. Some chip vendors are already backtracking there and are trying to hide away lots of the system control on those systems behind closed source firmware. There are good examples, though, like Xilinx with the ZynqMP, where there is a system control co-processor but you just get all the code for it. If there's a bug in there and your Linux system is not behaving as expected, you can look at it and try to fix it on your own. It might be hard, because you have to deal with different entities and the interfaces between them, but it's possible.

So with things moving in a closed source direction, we're really at risk of losing this ability to fix things on our own, and we're moving more into the PC direction, where there has always been closed source firmware and you just had to deal with it. We should push vendors to provide open access to firmware, and be aware of this when making system decisions. Basically, if you're planning a new project, check whether your vendor provides open source firmware, so you can actually keep doing the things you're used to from the more traditional, and better, approach.

Okay, that's it. Questions?

[Audience question] One of the reasons you gave for the incentive to open source code when it's in Linux is that Linux offers lots of functionality for a complex system. At the same time, you've said that firmware is going to get more complex in the things it manages. So do you see the incentives for open source firmware increasing as the tasks it has to do become more complicated?

I don't know yet, to be honest. Some vendors are getting the idea that cooperation on complex topics is less risky and less costly for them, but some of them don't yet. So I don't really know which direction this will go in the future.

Okay, thanks for your attention.