I'm Eugene Syromiatnikov, and I'm going to talk about Intel x86 CPU microcode packaging in RHEL, and how we do it. I work as a software engineer at Red Hat; I'm responsible for backporting various drivers as part of the driver backporting program, and I also maintain the out-of-tree kernel module packaging infrastructure in RHEL. I'm also a strace developer, but unfortunately this conference doesn't have anything related to debugging and tracing, so I'm not talking about that. Instead I'm going to talk about something else I do as part of my job at Red Hat: packaging the Intel x86 CPU microcode for RHEL. The agenda is pretty straightforward: I'm going to elaborate a bit on what microcode is, how it's used, and how it's updated in Linux, and then I'm going to dive into the RHEL-specific aspects of the microcode update package.

So, microcode is a set of instructions; it's basically code that runs inside an ASIC. Usually it's related to CPUs that implement some externally defined instruction set architecture, but it runs on the instruction set implemented in the ASIC itself, which is usually simpler and more straightforward. It also allows controlling, to some degree, the way the CPU behaves, even after the ASIC has been taped out. The process of changing this behaviour on a running processor is referred to as a microcode update. With respect to Intel specifically, it's been available since the P6 architecture, that is, the Pentium Pro, even though microcode had been used by Intel internally before that, mostly for things like testing, bring-up, built-in self-test, and handling of FPU corner cases. It became more prominent over the years, especially at Intel after some recall-inducing bugs such as the FDIV and F00F bugs. The update is uploaded onto the CPU by writing the address of the microcode update data to a specific model-specific register; that's MSR 0x79, IA32_BIOS_UPDT_TRIG.
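As a small aside, not part of the package itself: on a running Linux system the currently applied microcode revision is reported in /proc/cpuinfo, one `microcode` line per logical CPU. A sketch of extracting it; the helper name is mine:

```python
def microcode_revisions(cpuinfo_text):
    """Extract the microcode revision reported for each logical CPU
    from /proc/cpuinfo-style text (lines like 'microcode\t: 0xf0')."""
    revs = []
    for line in cpuinfo_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip() == "microcode":
            revs.append(int(value.strip(), 16))
    return revs

# Typical usage on a live system:
# with open("/proc/cpuinfo") as f:
#     print(microcode_revisions(f.read()))
```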
And usually it's pretty boring, because, well, processors have bugs and most people don't know about them. The ones I've mentioned are well known precisely because they couldn't be fixed. But since 2018, for various reasons, microcode has become more prominent and more of an interest to the general public. So that's probably worth talking about. The microcode update itself, on Intel x86 CPUs, is formatted as follows: it starts with a 48-byte header, followed by the encrypted microcode update data itself. After the data it may also have an extended signature table, which, even though it has been described for some 15 years now, only started to be used recently, with some recent Core CPUs and some of the latest Atom CPUs, to describe the additional CPU signatures a microcode update is applicable to. The most identifying part of a microcode update is the CPU signature the update is designated for. It has a somewhat weird format, since the CPU signature evolved a bit over time, and it doesn't match the way CPU models are usually referred to publicly, which is the family/model/stepping triad. In the signature, the bits are mixed up, and some of them are not just mixed up but actually added together, and not always, only in case the base family is 0xF. That goes back to Windows NT, which checked only the lowest three bits of the family; that's why Intel had to bump the family not to 8 but straight to the last available four-bit value, which is 15. And to get even more CPU families, which Intel used only for the later Itanium iterations, namely Itanium 2, they decided there is also an extended family field that is added to the base family value if the base family is 0xF. Fun stuff.
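To make this mixing and adding concrete, here is a sketch of decoding the raw CPUID signature (CPUID leaf 1, EAX) into the familiar family/model/stepping triad; the function name is mine, the bit layout is the standard CPUID one:

```python
def decode_signature(sig):
    """Decode a raw CPUID signature (CPUID.1:EAX) into the
    (family, model, stepping) triad CPUs are usually referred to by."""
    stepping    = sig & 0xF
    base_model  = (sig >> 4) & 0xF
    base_family = (sig >> 8) & 0xF
    ext_model   = (sig >> 16) & 0xF
    ext_family  = (sig >> 20) & 0xFF
    # The extended family is *added* to the base family, but only
    # when the base family is 0xF.
    family = base_family + ext_family if base_family == 0xF else base_family
    # The extended model is prepended as the high nibble, for base
    # families 0x6 and 0xF.
    model = base_model
    if base_family in (0x6, 0xF):
        model |= ext_model << 4
    return family, model, stepping
```

For example, the Skylake-SP signature 0x50654 decodes to family 6, model 0x55, stepping 4.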
Yeah, there are also ways to discern processors that are designated for specific market segments, like the server market or the mobile market. It's not the processor type field, which was originally used specifically for designating OverDrive and dual-processor-capable processors, but an additional field that is present in yet another model-specific register, IA32_PLATFORM_ID, and contains a three-bit value that specifies the segment the processor is designated for; a mask in the microcode header then specifies what kinds of segments the microcode applies to. So far this distinction has been useful only for AML, the Amber Lake processors.

With regards to Linux: historically there was one mechanism used for CPU microcode updates, which was just piping the data into a special device file. Then, when the more sensible generic firmware-loading mechanism appeared, microcode loading was converted to it. So you basically have a special file in sysfs that allows triggering a microcode update by writing "1" into it, and then the kernel, using the request_firmware mechanism, will try to get the new microcode, which is now treated exactly like any other firmware. In addition to that, there is also the so-called early update mechanism, which is executed early in the boot process, when only one CPU has been brought up. It uses either data built into the kernel itself, which is not used in RHEL, or data provided in the initramfs, more specifically in a specific uncompressed part of the initramfs.

So how is it packaged, and why is it packaged separately? As I've mentioned, it's treated by the Linux kernel differently from any other firmware, but why not just package it in the linux-firmware package, where all the rest of the firmware resides? The first reason is basically that, historically, it has been done this way.
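Two of those mechanisms are simple enough to sketch: the platform-mask check (the CPU's three-bit platform ID selects one bit, and the update applies if that bit is set in the header's processor-flags mask), and the sysfs late-update trigger. The sysfs_root parameter below is only there so the function can be exercised against a fake tree; on a real system the path is under /sys:

```python
import os

def platform_flags_match(platform_id, pf_mask):
    """A microcode update applies to a platform if the bit selected by
    the CPU's three-bit platform ID is set in the update's pf mask."""
    return bool(pf_mask & (1 << platform_id))

def trigger_late_update(sysfs_root="/sys"):
    """Ask the kernel to reload microcode via request_firmware(),
    by writing '1' to the microcode reload file in sysfs."""
    path = os.path.join(sysfs_root, "devices/system/cpu/microcode/reload")
    with open(path, "w") as f:
        f.write("1")
```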
The second one is that, while the process of publishing microcode by Intel is more straightforward and more open than it used to be, it's still published separately, and it's not a part of the linux-firmware package. And, as I mentioned, since 2018 it has actually turned out to be more useful that way, because instead of updating a huge, multi-hundred-megabyte package, you only have to update a package several megabytes in size.

The package is named microcode_ctl, even though there is no longer a microcode_ctl binary, which was used for uploading microcode to the kernel: originally there was some weird text format that Intel used for distributing microcode, while the kernel accepted the binary form, basically the one that is used to upload the microcode to the processor, and the tool converted between them. It shipped on various RHELs. In addition to the microcode update files themselves, the package also contains the caveats, which I will talk about a bit later; some scripts and tooling related to microcode updates; some documentation; and also some RPM scriptlets.

So what are caveats? Caveats are a homegrown mechanism for treating specific microcode updates differently based on some information about the system. The name was originally coined by the way the directory with the Broadwell microcode had been named: there was an issue that prevented that microcode from loading on CPUs with early microcode revisions, because there was some quirk in it, and without specific Linux kernel patches it led to a hang. The mechanism grew a bit when we started to get microcode updates regularly and some of them weren't really good. The caveat itself is not rocket science: it's basically a specific microcode update plus a configuration file that describes when it should or shouldn't be applied, a disclaimer, that is, a message that is shown to the user, and some README files that tell them what to do.
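I won't reproduce the exact configuration file syntax here, but conceptually a caveat boils down to a predicate over the running system. A hypothetical model, with all names and values being illustrative rather than the package's actual ones:

```python
from dataclasses import dataclass

def kernel_at_least(running, required):
    """Compare dotted kernel version strings numerically, e.g.
    '4.18.0-80' >= '3.10.0'. Deliberately simplified for the sketch."""
    to_tuple = lambda v: tuple(int(p) for p in v.split("-")[0].split("."))
    return to_tuple(running) >= to_tuple(required)

@dataclass
class Caveat:
    """Hypothetical model of a caveat: a microcode file plus the
    conditions under which it is considered safe to apply."""
    ucode_file: str       # e.g. a directory/file name like intel-06-xx-xx
    min_kernel: str       # earliest kernel carrying the required fixes
    min_ucode_rev: int    # earliest already-running microcode revision

    def applies(self, running_kernel, current_rev):
        return (kernel_at_least(running_kernel, self.min_kernel)
                and current_rev >= self.min_ucode_rev)
```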
So what is contained in a configuration? Some straightforward stuff, like which kernels and which microcode revisions a given update applies to, but recently some trickier stuff has been added. For example, there is an additional CPU segment value, yet another one, not related to the previous one that resides in an MSR; this one resides in the PCI configuration space of one of the virtual PCI devices present as part of the platform. And sometimes we also need to parse DMI data, because we want to restrict microcode loading on some specific systems.

The caveats infrastructure contains all the scripts that tell whether a caveat is in effect or not, that is, whether a microcode update should or should not be applied; a script that populates the overlays inside the /lib/firmware directory; and a script that basically triggers the late microcode update. That one also checks for some weird things, like whether we are running inside a virtualized environment, because there used to be a real kernel bug that would hang when trying to update microcode in certain environments.

In order for users to express their preferences with respect to certain caveats, the most user-friendly interface has been chosen: placing random files in random places. And since these are just random files, some additional actions have to be performed for the preference to be enacted; or, since the preferences are taken into account on updates, you can just wait for the next update.

Another part of the infrastructure is a dracut module, because dracut has its own logic with respect to generating new initramfs images, and it is also responsible for generating the early initramfs where the microcode updates are placed. This module affects that logic by generating the set of directories from which dracut should source the microcode updates.
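The "random files in random places" idea can be sketched as follows: the mere existence of a flag file forces a caveat on or off. The directory layout and file names here are illustrative, not the package's authoritative interface:

```python
import os

def caveat_overridden(name, etc_root="/etc/microcode_ctl"):
    """Sketch of the preference-file interface: a 'force-<name>' file
    forces a caveated update to be applied, a 'disallow-<name>' file
    forbids it, and no file means 'decide from the config checks'.
    Paths and names are illustrative."""
    d = os.path.join(etc_root, "ucode_with_caveats")
    if os.path.exists(os.path.join(d, "force-" + name)):
        return True
    if os.path.exists(os.path.join(d, "disallow-" + name)):
        return False
    return None
```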
Basically, these are the aforementioned microcode overlays, which dracut otherwise has no idea about. To give some overview, here is the list of caveats we currently have in RHEL. There is the infamous Broadwell issue, which is still hitting us despite Intel's claim that it has been fixed. There used to be a Sandy Bridge issue: basically, there were some problems with using the newly introduced VERW behaviour on the MDS-patched microcode revisions. The next one is the issue with loading microcode inside a hypervisor, which basically forced moving all the microcode files into caveats. Then there are the Skylake server one; Skylake non-server and Kaby Lake, which are vendor-specific, because only that vendor requested it and it was only observed on their systems; and the earlier Tiger Lake update.

Okay, what is contained in the rest of the package? It's basically a set of scripts that decide when to trigger initramfs regeneration. That has some history: there was some back and forth about when to update the initramfs and for which kernels. Originally, after we decided that it's good to have new microcode updates placed into the initramfs on a microcode package update, we tried to regenerate all of them; but that led to issues with dracut, and to the fact that it takes an eternity on systems where a lot of kernels are installed. So in the end we ended up with this weird behaviour: we update the initramfs for the kernel we are currently running and for up to three kernels that are newer than it, which also lets us update the initramfs for kernels that were installed after, or more or less together with, the microcode update. There is also some documentation: on one side documenting the caveats, and on the other documenting the microcode itself.
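That kernel-selection rule, the running kernel plus up to three newer ones, can be sketched as a small pure function; the version handling is simplified to dotted numeric strings and the function name is mine:

```python
def kernels_to_regenerate(installed, running, newer_limit=3):
    """Pick the kernels whose initramfs should be regenerated after a
    microcode package update: the running kernel, plus up to
    `newer_limit` installed kernels that are newer than it."""
    key = lambda v: tuple(int(p) for p in v.replace("-", ".").split(".")
                          if p.isdigit())
    newer = sorted((k for k in installed if key(k) > key(running)), key=key)
    return ([running] if running in installed else []) + newer[:newer_limit]
```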
Also, for those who want to figure out which microcode revisions are part of a specific microcode update package, there are ways to do that, and some of that information is installed as part of the package itself.

There are some differences between the packages on RHEL 6 and later releases: basically, RHEL 6 is stuck with the old microcode update mechanism, while RHEL 7 and later use the newer one. Apart from that, the package still carried some legacy stuff, like the microcode_ctl program, the converter program, or having been architecture-specific until RHEL 9; but that's mostly minor stuff.

The release process is more or less straightforward. The only difference is that when we have embargoed stuff, we have to package a non-public tarball and then update it once the embargo is lifted. Testing is performed both automatically during the build and manually during the release, to try to catch possible issues as early as possible.

There is a lot to improve in the packaging specifically: it would be nice to not mess with dracut, and probably to make everything more transparent and to get rid of all the weird stuff that has accumulated over the years. There are some resources if you're more interested in this. Do you have any questions?

Yes. So, the question is whether the microcode_ctl package handles AMD microcode or not. The package contained the AMD microcode updates in RHEL 6 and before; in RHEL 5 and 6 it only contained it, it didn't do anything with it. The only reason it was there is basically that AMD used to provide the microcode separately, but after some time they started to do it as part of the linux-firmware repository, on which the linux-firmware package is based.

Could you expand a bit more on the versioning for RHEL? Like, how does it work with all these releases?
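One way to inspect what a microcode file contains is a tool like iucode_tool, but the 48-byte header is simple enough to read directly. A sketch following the header layout documented in Intel's SDM (header version, update revision, BCD date, CPU signature, checksum, loader revision, processor-flags mask); the function name and returned dictionary are mine:

```python
import struct

def parse_ucode_header(blob):
    """Parse the start of the 48-byte Intel microcode update header.
    The date field is BCD: month in the top byte, then day, then a
    two-byte year."""
    hdr_ver, rev, date, sig, csum, loader_ver, pf = struct.unpack_from("<7I", blob)
    return {
        "header_version": hdr_ver,
        "revision": rev,
        "date": "%04x-%02x-%02x" % (date & 0xFFFF, date >> 24, (date >> 16) & 0xFF),
        "signature": sig,
        "pf_mask": pf,
    }
```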
It's a bit weird, because the original idea was to provide information about the microcode version you're installing in the package name itself. But that didn't play well with the fact that we don't do rebases within a stream. So basically we have to carry both the version we released at the GA, the general availability of a specific RHEL minor release, and the version we are actually packaging. So we have this pair of versions, of which you're probably only interested in the latter one, which is provided as a part of the RPM package release. Does that answer your question?

So how does it integrate with the system releases? That's the part. You mentioned that it didn't integrate with the system releases, so what do you mean by that? Well, I mean that basically there is a specific process for rebases of packages, and rebases within a stream are frowned upon, I would say. So what we did is just provide it as a release-tag hack. Probably I will ditch that and just provide it as a rebase, but let's see. The issue is that we always do that under embargo, and there is not much time or wiggle room for fighting the process. But in general it's probably better to just have it as one version. The other thing was that, originally, because the microcode is updated in minor RHEL versions as well, you can get in one stream a version that has not been released in a newer stream. But yeah, that's not a major concern.