A very good afternoon to all of you present here. My name is Megha De. I'm a software engineer with Intel's Open Source Technology Center. I work in the core Linux kernel team, and I'm currently also the maintainer of the Linux UEFI Validation project; we'll be discussing the project in a moment.

Hi everyone, this is Sai Pranit Prakya. I work in Intel's Open Source Technology Center on the Linux core kernel team. I write patches when I find issues with the Linux UEFI subsystem, I help senior kernel engineers in our team validate their code, and I'm an active contributor to the Linux UEFI Validation project, which my colleague Megha maintains.

Today we would like to highlight to the audience the impact of platform firmware, specifically UEFI, on the Linux kernel. So let's go over the agenda. We'll start with a brief introduction to UEFI and then try to understand how buggy UEFI implementations and bugs in the Linux UEFI subsystem affect system stability. We'll then introduce the Linux UEFI Validation project, its motivation and various features, and how we can use it to make our UEFI systems Linux ready. Now I'll hand the mic over to Sai; he'll be doing the first part of the presentation.

Thanks a lot, Megha. A quick question to the audience here: can you let me know how many of you have heard about UEFI before? Great, quite a few; I guess most of you. Can you also let me know how many of you have worked on or heard about the UEFI subsystem in the Linux kernel, or are maybe firmware engineers? Mostly platform firmware. Great, we have a good mix here, I guess. My team at Intel mostly concentrates on PCs and servers, so UEFI is common for us, but I thought maybe it's not so popular in the embedded space. So I did some quick research: I went to the UEFI Forum website to see who the members are, and as usual I found all the big names: Qualcomm, ARM, HP, Lenovo, Dell, IBM and of course Intel. I also noted that there were 13 promoters, 45 contributors, around 250 adopters, and around 20 or 25 individual contributors. That's most of the semiconductor industry, so there is good scope that UEFI is already present in the embedded space. I also noted that the first UEFI BIOS, I guess, came into the embedded space around 2005.

With that, let's see what UEFI is and what a UEFI-based platform actually means. UEFI is a specification which tells how a kernel should interact with platform firmware. If that's a little confusing, we can take the analogy of a car. Assume you went to a car rental agency and wanted to rent a car. Think of yourself, the user, as the OS or kernel, and the car as the firmware. What's the interface here? The steering, accelerator, brake pedal and horn. That's the interface, and the user manual which describes that interface is the UEFI specification. So the specification tells a firmware engineer what a platform firmware should offer to an OS, or what is visible to the OS. And from the OS or kernel side, it's what the platform is actually giving you.
So the specification is the most important document, and the whole point of this presentation is that if the firmware is not implemented correctly, or if there are bugs in the firmware or in the Linux kernel's UEFI subsystem, it can make the system unstable. I'll briefly discuss four issues we came across while debugging UEFI-related problems, how we solved them, and what we are doing at Intel.

With that, let's start a story. Just assume you have powered on your new PC. Firmware engineers here might not like my simplistic explanation of firmware, but to keep things very simple, let's assume the firmware gives us one simple thing; let's call it the memory map. When you power on the system, the firmware goes and enumerates all the memory available in the system. That can include RAM, MMIO addresses, and the SPI flash where the BIOS code actually lives. Any memory that is addressable falls into the memory map, and that memory map is passed to the OS so the OS knows what it can use for its purposes. That's one important thing platform firmware offers to an OS, an OS loader, or any software application that runs on top of the firmware. It's also present in legacy BIOS; there we call it the E820 map.

What's different, the interesting part of UEFI, is that it also offers us services. Think of the services as APIs, and we can broadly categorize them into two types: boot time services and runtime services. Why the distinction? Because of when they are present and the different purposes they serve. Consider one simple case: platform firmware has booted, and you want it to load an OS loader or a kernel. What are the typical things you do? Allocate some memory, load the image into that memory, execute the image, and once the image has run, free the memory. These are APIs that an OS loader requires only while it's executing; after the kernel has taken control, platform firmware shouldn't provide these services anymore. These services are present only until the OS has taken control of the system. Once the OS has assumed control from platform firmware, they are not needed anymore, so UEFI firmware can release the memory regions used for those purposes. The APIs that are present only before the OS takes over are called boot services, and the APIs that are present both before and after are called runtime services. That's the interesting part; legacy BIOS offers nothing like that, so the UEFI services are the important part. The key takeaway from this slide is that platform firmware offers us two main things: the memory map and services.

Let's continue our story and see what happens when you actually power on a system. Assume you have already got the memory map; as you see here, it's divided into many regions. The memory map is basically an array of structures, each structure describing a part of memory: it has a base address, it has a size, and it describes what type of memory it is.
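To make that concrete, here is a rough sketch in C of one memory map entry, modeled on the UEFI specification's EFI_MEMORY_DESCRIPTOR (the Linux kernel carries an equivalent efi_memory_desc_t); the field names and comments are a paraphrase, not the spec's exact text:

```c
#include <stdint.h>

/* Sketch of one EFI memory map entry, paraphrasing the spec's
 * EFI_MEMORY_DESCRIPTOR. The firmware hands the OS an array of these. */
typedef struct {
    uint32_t type;       /* conventional memory, runtime code, runtime data, ... */
    uint32_t pad;        /* alignment padding */
    uint64_t phys_addr;  /* base physical address of the region */
    uint64_t virt_addr;  /* virtual base, filled in around SetVirtualAddressMap */
    uint64_t num_pages;  /* region size, in 4 KiB pages */
    uint64_t attribute;  /* caching flags, plus EFI_MEMORY_RUNTIME for regions
                          * the firmware still needs after the OS takes over */
} efi_memory_desc;
```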
The main point here is that while the platform is booting, all the memory is owned by the firmware. Firmware has full ownership of memory: it can read from a region, write to a region, allocate pages to an OS loader, free pages, and so on. Platform firmware completely assumes ownership of system memory at this stage.

Moving on, there is an interesting EFI boot service called ExitBootServices. Its purpose is slightly different: it is a signal from the OS loader to platform firmware. The OS loader is basically telling the firmware, "Hey firmware, thanks a lot for creating this beautiful environment so that I could load myself." Assume the OS loader has already loaded vmlinuz and the initrd; the kernel and initrd are already present in memory. The OS loader is now telling the firmware, "Thanks a lot for providing all these boot time services; it's time for you to shut them all down." The platform firmware then frees up all the memory it used, shuts down the console services it provided, and tears down the drivers that were used to boot from USB or HTTP, etc. All the boot time services are gone now.

As you can see in this slide, at this point memory is owned by the bootloader (or by the EFI stub in the kernel, if you load that way): all the regions except the runtime data and runtime code regions. As I already noted, runtime services are present even after the kernel has taken control, so the runtime data, runtime code and reserved regions are left alone; the kernel will not touch these regions at all. All the other regions are fair game for the kernel. The kernel can load itself into any of them, decompress itself there, use other regions for stack and heap, and do its basic initialization from there. The important takeaway from this slide is that the runtime data and runtime code regions are untouched by the kernel.

Let's proceed with our story. Assume the kernel has taken control; the bootloader and platform firmware are gone now. The kernel starts basic architecture initialization; assume the console is initialized, and then, moving forward, assume memory management is initialized as well. What happens when memory management is initialized? All accesses to memory are now performed in virtual address space; there are no raw physical addresses anymore. The kernel has set up its virtual address space, and notice that it contains virtual addresses instead of physical addresses. In the kernel virtual address space, we set aside 64 GB of space just to create virtual-to-physical mappings for the runtime data and runtime code regions. So notice here: the reserved, runtime data and runtime code regions are untouched by the kernel; we just create virtual-to-physical mappings for these regions. And the important point is that the kernel then calls a runtime service named SetVirtualAddressMap.
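Roughly, that sequence looks like the following sketch. The SetVirtualAddressMap() prototype is paraphrased from the UEFI specification; kernel_va_for() is a hypothetical helper standing in for however the OS picks its runtime-services window, and real code strides through the map by desc_size rather than indexing a plain array:

```c
#include <stdint.h>

typedef uint64_t efi_status;
typedef struct {
    uint32_t type, pad;
    uint64_t phys_addr, virt_addr, num_pages, attribute;
} efi_memory_desc;

#define EFI_MEMORY_RUNTIME (1ULL << 63)   /* region must stay mapped at runtime */
#define EFI_MEMORY_DESCRIPTOR_VERSION 1

/* SetVirtualAddressMap(), per the spec: a one-shot runtime service. */
typedef efi_status (*set_virtual_address_map_t)(uint64_t map_size,
        uint64_t desc_size, uint32_t desc_version, efi_memory_desc *map);

efi_status switch_to_virtual_mode(efi_memory_desc *map, uint64_t nr_entries,
                                  uint64_t desc_size,
                                  set_virtual_address_map_t svam,
                                  uint64_t (*kernel_va_for)(uint64_t phys))
{
    /* Fill in a virtual address for every region firmware needs at runtime. */
    for (uint64_t i = 0; i < nr_entries; i++)
        if (map[i].attribute & EFI_MEMORY_RUNTIME)
            map[i].virt_addr = kernel_va_for(map[i].phys_addr);

    /* After this returns, firmware must touch those regions only through
     * the virtual addresses we just handed it, never in physical mode. */
    return svam(nr_entries * desc_size, desc_size,
                EFI_MEMORY_DESCRIPTOR_VERSION, map);
}
```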
SetVirtualAddressMap is another way of signaling the firmware: "Hey firmware, you should not run your runtime services in physical mode anymore; you should run all your runtime services in virtual mode only." That's the key point: anyone who wants to use a runtime service, whether a user process, the kernel, or the firmware itself, should access those regions through virtual addresses, not physical addresses. So what happens when something like this, which is spelled out in the UEFI specification, is ignored or simply overlooked by the firmware? It causes system instability. Let's see how that actually happens.

When I was talking about boot time services and runtime services, I skipped runtime services; I only gave examples of boot time services, like loading an image, allocating memory and freeing memory. Let's see an example of a runtime service. I like dual-booting my machine, so assume I have Windows and Ubuntu. I booted into Windows, somehow I didn't like it, and I want to boot back into Ubuntu without manual intervention. What I do is ask the Windows OS to set a variable for me, the boot variable. Windows then requests the platform firmware to set the variable to Ubuntu, so that when I reboot, the system boots directly into Ubuntu. I don't need to keep pressing F2 and change the boot order, which is what usually happens with legacy BIOS. So I've asked the Windows OS to change the boot order like that, and it calls an EFI runtime service named SetVariable.

SetVariable is an API whose code lives in the runtime code region, and the boot variable I'm talking about should live in the runtime data region. When Windows requests the change to the boot order, the firmware should go and change the variable in the runtime data region. But if the firmware is not implemented correctly, it can ignore the rules and try to access the variable in physical mode instead of virtual address mode. Earlier I said that when SetVirtualAddressMap is called, the OS is telling the firmware that it needs to do all these accesses in virtual address mode and not physical address mode. If the platform firmware ignores that instruction and still tries to access anything related to EFI, or any memory region, in physical address mode, it can lead to disastrous things.

A quick question here: what do you think will happen if the firmware goes and touches a memory region over here? Any wild guesses? Assume that SetVariable, a firmware runtime service, wants to access a memory region which falls somewhere here in the kernel virtual address space. The answer is that it causes a kernel panic. The bug can manifest in two ways: if it happens during kernel boot, it causes a kernel panic, and if it happens when the kernel is up and running, it hangs the kernel, so you essentially need to reboot your system to get it back working. Let's look at a practical example of how this bug manifested. As I said, EFI SetVariable was called, and it went and accessed an address.
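Before looking at the failure, here is roughly what that SetVariable call boils down to, as a hedged sketch. The SetVariable() prototype, the attribute bits, and the BootNext variable come from the UEFI spec; the boot entry number 0x0001 and the way the function pointer and vendor GUID are obtained are assumptions for illustration:

```c
#include <stdint.h>
#include <uchar.h>

typedef uint64_t efi_status;
typedef struct { uint32_t a; uint16_t b, c; uint8_t d[8]; } efi_guid;

/* SetVariable(), paraphrased from the spec's runtime services table. */
typedef efi_status (*set_variable_t)(char16_t *name, efi_guid *vendor,
        uint32_t attrs, uint64_t data_size, void *data);

#define EFI_VARIABLE_NON_VOLATILE       0x00000001
#define EFI_VARIABLE_BOOTSERVICE_ACCESS 0x00000002
#define EFI_VARIABLE_RUNTIME_ACCESS     0x00000004

/* Ask firmware to boot a specific entry on the next reboot by writing the
 * architecturally defined BootNext variable. 0x0001 is a hypothetical
 * boot entry assumed to point at Ubuntu. */
efi_status boot_ubuntu_next(set_variable_t set_variable,
                            efi_guid *efi_global_variable_guid)
{
    char16_t name[] = u"BootNext";   /* variable names are UCS-2 strings */
    uint16_t entry = 0x0001;
    return set_variable(name, efi_global_variable_guid,
                        EFI_VARIABLE_NON_VOLATILE |
                        EFI_VARIABLE_BOOTSERVICE_ACCESS |
                        EFI_VARIABLE_RUNTIME_ACCESS,
                        sizeof(entry), &entry);
}
```

The code implementing SetVariable lives in the runtime code region and the variable store it updates lives in runtime data, which is exactly why a firmware that still dereferences physical addresses here is dangerous.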
Coming back to the failing access: notice that although it's a virtual address, it falls in user space, and that address isn't mapped in the kernel virtual address space. You can see SetVariable accessing this address, and the address is unmapped. If we go further and debug why the region isn't mapped in the kernel address space, you'll see that the memory region is actually part of conventional memory in the memory map passed by the firmware. Earlier I referred to a slide where I said the two main things offered by firmware are the EFI memory map and then the runtime and boot time services. This is the memory map the firmware passed, and the firmware is trying to access a region that is part of conventional memory. According to the UEFI specification, once the kernel is up and running, firmware should only dereference addresses that are part of the runtime code or runtime data regions, not any other region. But firmware sometimes ignores that and touches other regions.

And why does that cause a panic? Because we execute firmware code in ring zero, not ring three; we don't execute firmware code from user space. Firmware code is considered proprietary, and maybe for security reasons as well, we execute it at CPL 0. We all know that if a page fault happens while we are in user space, the kernel is there to handle it. But if a page fault like this happens while you're already in the kernel, there is no one to take care of it, and the kernel just gives up: it panics. So that was one issue where an imperfect implementation of firmware could cause kernel panics.
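The debugging step just described, checking which memory map region a faulting address falls in, looks roughly like this sketch (types reuse the descriptor sketch from earlier; EFI_MEMORY_RUNTIME marks the regions firmware may legally touch after boot):

```c
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t type, pad;
    uint64_t phys_addr, virt_addr, num_pages, attribute;
} efi_memory_desc;

#define EFI_MEMORY_RUNTIME (1ULL << 63)
#define EFI_PAGE_SIZE 4096ULL

/* Walk the firmware-provided memory map and report whether 'addr' lies
 * in a region the firmware is allowed to dereference after boot. */
void classify(const efi_memory_desc *map, uint64_t nr, uint64_t addr)
{
    for (uint64_t i = 0; i < nr; i++) {
        uint64_t start = map[i].phys_addr;
        uint64_t end   = start + map[i].num_pages * EFI_PAGE_SIZE;
        if (addr >= start && addr < end) {
            printf("0x%llx is in region type %u, %s for firmware at runtime\n",
                   (unsigned long long)addr, map[i].type,
                   (map[i].attribute & EFI_MEMORY_RUNTIME) ? "legal" : "ILLEGAL");
            return;
        }
    }
    printf("0x%llx is not described by the memory map at all\n",
           (unsigned long long)addr);
}
```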
Now let's look at what is actually the most difficult of these problems. I kept it second because it's more involved, and I didn't debug this one myself. The issue started like this; it's a nice story. Some PC enthusiasts, when they tried a Linux distro, I guess Ubuntu or RHEL, on some Samsung laptops, started reporting that Linux was bricking their systems. What do I mean by bricking? The system would not even boot to the BIOS anymore. You had to take the system back to the OEM, have it reflashed with a new BIOS, and never boot Linux on it again, only Windows. That was the workaround, at least at that point in time.

Then Matthew Garrett, a great kernel hacker, went ahead and investigated the issue, and he found very interesting results. He found that the issue was not specific to Linux; it could be reproduced on Windows as well. So why did people start blaming Linux? Because it's very easily reproducible on Linux; that's the only reason, but it's not just a Linux issue. What were his findings? After a lot of trial and error, he found that a driver was running which triggered a hardware error, and Linux has this nice feature called pstore, persistent storage: when something bad happens in the system, the dmesg log is written to some persistent memory so that you can do post-mortem analysis of why the crash happened. Here the driver triggered a hardware error and the kernel went and wrote the log, and the persistent memory in this case is where the BIOS code lives. EFI has a space for writing variables, as I said in the earlier slide about boot variables, and the kernel can go and write other variables too; it has every right to use that space.

So when the kernel went and stored this log there, it was fine; writing the log to the UEFI variable store wasn't the issue. It was the next reboot that caused the problem. When the Samsung BIOS tried to boot the machine the second time, it checked how much free space for variables was available, and nobody knows exactly how much the Samsung firmware was checking for. Is it 50% of the free space, or 10%? Nobody really knows. So it was simply an imperfect implementation of Samsung's firmware that was bricking systems. How did Matthew Garrett confirm that it could be reproduced on Windows too? He went ahead and wrote, I guess, 32 variables of 10 KB each, something like that, and he was able to brick the machine from Windows as well. That confirmed it's not a Linux-specific issue, just one easily triggered on Linux because of this nice pstore feature, which goes and writes without any user request.
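For a feel of what that style of reproduction looks like from Linux today, here is a hedged sketch that fills the variable store through efivarfs, the kernel's EFI variable filesystem. The vendor GUID and variable names are invented, and on firmware with this class of bug, actually running it could brick the machine, so treat it purely as illustration:

```c
#include <stdio.h>
#include <string.h>

/* Fill the EFI variable store with junk, in the spirit of the 32 x 10 KB
 * experiment described above. efivarfs files begin with a 4-byte
 * attributes word (NV | BS | RT = 0x7) followed by the variable data. */
int main(void)
{
    static unsigned char buf[4 + 10240] = { 0x07, 0x00, 0x00, 0x00 };
    memset(buf + 4, 0xa5, sizeof(buf) - 4);

    for (int i = 0; i < 32; i++) {
        char path[256];
        /* File name format is VariableName-VendorGuid; this GUID is made up. */
        snprintf(path, sizeof(path),
                 "/sys/firmware/efi/efivars/DummyVar%02d-"
                 "deadbeef-dead-beef-dead-beefdeadbeef", i);
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); return 1; }
        if (fwrite(buf, 1, sizeof(buf), f) != sizeof(buf))
            perror("write");
        fclose(f);
    }
    return 0;
}
```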
So far we have seen two catastrophic classes of issue, where an imperfect implementation of firmware can brick systems, which is very catastrophic, or panic the kernel, which is merely very bad. Let's look at some simpler issues. This time it's not the firmware's fault; it's a Linux bug that caused Linux to panic. As you know, UEFI supports many, many features. It describes lots of tables, and firmware engineers have the flexibility of implementing only the tables necessary for their platforms; they don't need to implement every table described in the UEFI specification. One such table is the BGRT. And as you know, Linux is a very active project with many developers contributing patches, and there was one big change to the UEFI subsystem. Matt Fleming, a great kernel developer and the UEFI maintainer at the time, made a patch changing how UEFI runtime services are accessed: through a dedicated EFI page table, the EFI PGD. It's a basic shift in how you access UEFI runtime services. What happened is that with that patch in the Linux kernel, when we booted the same kernel on platform firmware that supports the BGRT table, the kernel just panicked. I started debugging the issue and found that it was actually a Linux issue. And it's easier to fix when it's a Linux issue, because we just send patches, and if it's really a fix for a kernel panic, it gets reviewed and merged quickly. So fixing the issue was not a big deal.

But the big question is: how did the developer miss this? The reason is that it's impossible, as a developer, to have access to every system, and you cannot assume that every system you do have access to implements all the UEFI features. So can we provide some value here? Yes, we can. What I usually do is use OVMF, the open source virtual machine firmware that runs on QEMU, the machine emulator. I go and hack OVMF to enable features, and in the same way I hacked it to enable BGRT. When I tried the kernel on it, it panicked, and it was much easier to debug and fix because it's a virtual machine. So those were the disastrous, kernel-panicking issues. Now let's talk about a simpler one.

These ones are difficult to spot; it's hard to realize there was an issue at all when the kernel has booted fine. The issue here is that when I booted the same kernel on two different machines whose platform firmware supports another special table, the ESRT, the EFI System Resource Table, I saw that on one system the kernel was able to successfully parse the table, and on the other it wasn't; it somehow failed. The problem is we don't know up front whether it's a firmware issue or a kernel issue. And again, you don't see a kernel panic; you don't even realize the kernel hit a problem during boot unless you go and check each and every line of dmesg and try to understand what the messages say. So can we provide any value here? Yes: we can automate this. We can check for these logs once the kernel has booted. And how did I decide it was a firmware issue? The same way: hack OVMF, enable ESRT, boot the kernel on it, and see how it parses the table. We finally found that it was a firmware issue, and it got fixed.
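The automated check mentioned above can be as simple as the following sketch: once the kernel has booted, a successfully parsed ESRT shows up under sysfs, so probing for it tells you whether parsing succeeded (the path is the kernel's esrt sysfs interface; error handling is kept minimal):

```c
#include <stdio.h>

/* After boot, a successfully parsed ESRT is exposed by the kernel under
 * /sys/firmware/efi/esrt. If the file is missing on a platform whose
 * firmware publishes the table, something went wrong in either the
 * firmware's table or the kernel's parsing of it. */
int main(void)
{
    FILE *f = fopen("/sys/firmware/efi/esrt/fw_resource_count", "r");
    if (!f) {
        printf("ESRT not parsed (or not published by this firmware)\n");
        return 1;
    }
    unsigned count = 0;
    if (fscanf(f, "%u", &count) == 1)
        printf("ESRT parsed OK: %u firmware resource entries\n", count);
    fclose(f);
    return 0;
}
```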
So, to summarize the four issues: anything can go wrong when the kernel interacts with platform firmware. It can be as simple as failing to parse a table, or as catastrophic as bricking the system or panicking the kernel. So what do we do at Intel? How do we try to find firmware implementation issues? I'll hand over to Megha; she'll take care of that part of the presentation.

Thanks, Sai. From Sai's talk, we can see that a correct implementation of UEFI is very important for system stability. From the previous examples, we've seen that things can go awfully wrong during interactions between the firmware and the Linux kernel. Buggy firmware hurts Linux and sometimes shows it in a very poor light. There exists a clear gap in the testing coverage of the firmware-operating system interfaces. Currently there is no single tool that can test these interfaces right from the moment we press the power button all the way to when Linux has booted and we're ready to use the system. A lot of the bugs Sai discussed could have been found at a much earlier stage if we'd had a prepackaged tool dedicated to testing these firmware-OS interfaces.

So with that, we introduce LUV, the Linux UEFI Validation project. This project is aimed at Linux developers as well as kernel and firmware engineers, to reduce the development and enabling time of Linux on UEFI-based systems. Why do we need a new firmware validation tool when we already have a ton of them? The first reason is to fix the fragmented validation strategy. If you look at the current strategy, UEFI.org suggests using the UEFI Self-Certification Test (SCT) suite, the Platform Initialization (PI) test suite, and Canonical's FWTS, the firmware test suite, to validate a firmware implementation. These tools are great for testing specific areas of the system, but they lack the ability to test the system as a whole. From this slide we can see that the UEFI SCT and the PI suite operate in a pre-boot environment, which means they operate isolated from the operating system.

Also, even if all the pre-boot tests pass, it does not mean the firmware-operating system interfaces are flawless. Just testing the pre-boot scenario would not suffice in the real world, where the firmware and operating system interact closely; these interactions continue even after Linux has booted, through the UEFI runtime variables. Then we have FWTS and other test suites which operate in user space at runtime, so they miss testing the entire boot process. Thus no single test suite run in isolation, whether pre-boot or at runtime, could have caught all the issues Sai discussed earlier. If this continues, kernel and firmware bugs evade detection, are shipped as-is to customers, and are discovered only by end users. At that point, the only solution is to work around these bugs in Linux, because by then the firmware code is basically immutable. So buggy firmware never really gets fixed, and these firmware flaws keep propagating.

This is bad for everybody. It's bad for kernel developers who, instead of focusing on developing new features for Linux, are debugging obscure bugs only to realize they are indeed firmware bugs. It's bad for end users, because they are left with bricked systems. And overall it's bad for Linux, because most of the time end users assume it's Linux that is at fault. So what do we need here? We need a validation strategy that can validate the contract between the firmware and the operating system, and this needs to happen much earlier in the development cycle so that buggy firmware doesn't end up on end users' systems. We already have many firmware validation tools, but they are diverse, and each tests only a specific portion of the software stack.

Well, so we have LUV, the Linux UEFI Validation project. LUV is a complete Linux distribution, so it's very easy to test the interoperability between UEFI and Linux. Did the kernel boot fine on UEFI, or did it panic? Did the kernel initialize all the platform features correctly? What happens if the kernel touches UEFI memory after boot; does it hang? LUV is a unified framework that incorporates various open source test suites into a single prepackaged tool. It does not try to reinvent the wheel; it leverages the latest code from giant open source projects such as the Linux kernel and the Yocto Project, and from open source test suites such as FWTS.

Again, one may wonder why exactly we need a new distro; we have a ton of those too, right? As I mentioned, LUV uses the latest kernel, whereas most distros ship kernel versions that are at least three or four releases old; as such, we have access to the latest features and can catch these firmware-kernel bugs at a much earlier stage. Secondly, the purpose of LUV is the complete opposite of most standard distributions. Standard distributions are targeted at end users, so they trim down the debug features and are tuned for performance; LUV is a validation operating system, so we enable all the debug features of the UEFI subsystem, and performance is not really an important aspect for us. And lastly, LUV is meant to be used very early in the development cycle, when hardware and software are not at their stable best and can be buggy; we might face crashes very frequently, hence we want a crash handling mechanism of our own.
So say we are booting LUV on a platform and we see a panic. It's of no use if we don't know what caused the panic or what exactly is going on. What we do is dump the crash log, reboot via kexec into a rescue-mode kernel, retrieve the crash log, and then analyze the results.

In this slide we see how LUV tries to bridge the gap in the existing firmware validation landscape: it complements the existing test suites by providing validation at the various levels of execution of the software, that is, pre-boot, boot, and runtime. We have BITS, the BIOS Implementation Test Suite, which runs in a pre-boot environment and basically tests the initialization of the hardware and the interaction between the boot loader and the firmware. We have the kernel EFI warnings check, which runs during boot time and looks through dmesg for warnings about potential UEFI firmware bugs. We have the efivarfs tests, which exercise various aspects of the EFI variables; FWTS, which tests various aspects of the machine's firmware; CHIPSEC, which is a platform security assessment framework; and ndctl, which tests the non-volatile memory device subsystem of the Linux kernel. These four test suites run during the runtime phase. So as we can see, LUV covers the entire spectrum of the software stack, exercising the interactions between the firmware, boot loader, kernel, and user space.

LUV can also be used to detect these firmware-OS bugs early in the development cycle, thus improving the quality of the shipped product. For example, this slide shows how LUV was used to detect and fix an illegal memory access bug found while running FWTS. This bug manifests itself in two places: first during boot, while running the kernel EFI warnings check, and second at runtime, while running FWTS. If we were to use just the plain old mainline kernel and hit this issue, where an access landed in the unmapped EFI boot data region, we would simply hang. But with LUV we use a specialized hardened kernel, and when we see such a scenario we just report the issue and exit gracefully. What we're doing behind the scenes is that we have added a new EFI page fault handler which catches these illegal accesses and maps the EFI boot data region on the fly; a sketch of the idea follows below.

So as we have seen, LUV binds together various test suites into a single, cohesive, easy-to-use product. It's a handy bootable image that can be flashed directly onto a USB drive or CD, and you can just boot your hardware from it. You can also run diskless, using PXE or netboot to download the image from a centralized repo and deploy LUV on a large farm: each of the systems in the farm boots LUV and reports the results back to the server over the network, so LUV also scales very well. Lastly, you can boot LUV on a virtual platform like QEMU using OVMF, the open source virtual machine firmware.

LUV is an open source project: anyone can browse the code, clone it, make changes, contribute, build it, and run it for themselves. It's extremely simple for anybody to add a test case to LUV. LUV is based on the Yocto Project, so anybody who wants to add a test case writes a Yocto recipe to download, build, and install the test suite onto the LUV image, and once LUV boots, the LUV test manager runs the test, parses the results, and provides a consolidated test report for users to view.
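Here is a minimal sketch of that page fault handler idea, assuming a hook in the kernel's fault path and helpers for querying the EFI memory map. Names like efi_find_region() and map_region_into_efi_pgd() are hypothetical stand-ins, not the kernel's real API; the real LUV kernel work may differ in detail:

```c
#include <stdint.h>
#include <stdbool.h>

#define EFI_BOOT_SERVICES_DATA 4   /* memory type number per the UEFI spec */

typedef struct {
    uint32_t type, pad;
    uint64_t phys_addr, virt_addr, num_pages, attribute;
} efi_memory_desc;

/* Hypothetical helpers: look up the region containing 'addr' in the
 * firmware-provided memory map, and map it into the EFI page tables. */
extern const efi_memory_desc *efi_find_region(uint64_t addr);
extern void map_region_into_efi_pgd(const efi_memory_desc *md);
extern void warn(const char *msg);

/* Called from the page fault path when a fault happens inside an EFI
 * runtime service call. Returns true if the fault was fixed up. */
bool efi_page_fault_fixup(uint64_t fault_addr)
{
    const efi_memory_desc *md = efi_find_region(fault_addr);
    if (!md || md->type != EFI_BOOT_SERVICES_DATA)
        return false;               /* not our case: let the oops happen */

    /* Firmware illegally touched boot-time data after ExitBootServices.
     * Map the region so the test run can continue, but flag the bug. */
    map_region_into_efi_pgd(md);
    warn("firmware accessed EFI boot services data after boot; "
         "this is a firmware bug");
    return true;
}
```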
So say you have run LUV; how exactly can you see the results? If you booted from a USB stick, you can see the saved results in the results folder. If you used netboot, you can specify an HTTP server to which you want the results sent back. If you're using a virtual machine, you can find the saved results in the results partition of the LUV image. To the right here you can see an example of the consolidated report. We categorize the results, and they are color coded according to whether they passed, failed, or produced warnings. Furthermore, you can see that around 160 unit tests run as part of LUV, so there are many test cases which can fail; in this case you can see there are 117 failures. Not all failures are catastrophic, so we also have a second level of classification based on severity. And if you want more information about a test suite, you can use this arrow to drill down into what went wrong. You don't really need to be a firmware expert or a Linux expert to use LUV: anybody can just run it and email the test report or send it to our mailing list, and hopefully one of us can take a look, analyze it, and come back with some results. The main point is that it can be used by more than just firmware engineers or Linux engineers.

Since LUV can be deployed on a large farm, we have added the netconsole feature so that you can debug it remotely over the network. Also, if one of the test suites panics while running LUV, we have a telemetrics feature which can be enabled to send the report to the LUV server. A recent addition is that we list all the _DSMs, the device-specific methods, available on the platform; this gives device developers an opportunity to understand the platform's capabilities and evaluate their test cases. In the near future we would like to add support for more boot loaders. Currently we support only GRUB, but we would like to add support for Linux as a boot loader, also known as the EFI stub, and for systemd-boot, since each boot loader is implemented differently and interacts differently with the firmware.

Ultimately, what LUV is trying to do is find more bugs, improve the test coverage of the firmware-operating system interfaces, and ultimately lead a path to better UEFI firmware. Finally, if any of you want to join our mailing list: this project was started by Matt Fleming, who was the maintainer of the kernel EFI subsystem a while back, and my fellow colleague Ricardo Neri. LUV also has a presence on ARM; we support ARM, and we have Naresh Bhat, who works for Linaro, taking care of the ARM side of things. We have a consolidated website where you can find more information about LUV: links to the source code, the downloadable images, the individual test suites, blogs, etc. So with that, I end our presentation, and we can take some questions if you have any. Oh, it wasn't audible in the previous talk, so please come up to the mic with your question.

Hi, I was just wondering: a lot of people say that EFI is very bloated, and that might lead to a lot of bugs. Are there alternatives to EFI? That's right, there is a lot of criticism of EFI.
I can't say it's an overstatement; I do agree to an extent, but I can't comment much more on that because my experience doesn't span a lot of it. I guess there are a lot of other alternatives, various firmware support packages; I guess even Intel has some alternatives. But as far as I can see, Intel and the industry are trying to push EFI, at least. "Bloated" is correct in a sense, but I guess at least it's easier for developers: it has a modular design, so maybe you can make your firmware lean, if that's possible. I'm not really a firmware engineer, I just work on the EFI subsystem of Linux, so I can't actually comment on that, sorry. Any other questions?

Just a question on the community: how well adopted is LUV, and who is maintaining it? Within Intel we have a lot of BIOS teams using it and a lot of client teams using it. As I said, this was started by Matt Fleming, so he was the original maintainer, then it was Ricardo, and now it's me. This has been going on since we started the project, I think early in 2014, and we have some customers such as Oracle who are using it, and HP as well is adopting LUV. Any other questions? Okay then, that's a wrap from us. Thank you.