So, hi everybody, I'm Luca Ceresoli and I'm an embedded Linux engineer, so my job is to put Linux on embedded devices, on devices with custom electronics, and I do the kernel, the applications and so on. I also contribute to the Buildroot project and a few other ones. Today I will talk about some experiences I've had with systems on chip that have bad support. There are some with good support and some with bad support, and I will talk about one I met in the latter category. For this talk, at least, I will refer to an embedded system as a physical product that has an electronic board designed specifically for that product, a custom board with a system on chip at its heart. So, does anybody not know what a system on chip is, or anything I've said so far? Okay, nobody? So you're in the right room. Cool. Okay, the chip I will be talking about is this one. It's from a Taiwanese vendor called Nuvoton, and it's basically designed to allow producing very low-cost devices. Its core is an ARM9 clocked at 240 MHz, so it's pretty slow compared to other systems on chip that are very common nowadays, but it has quite a reasonable set of peripherals, many actually, especially an H.264 encoder and decoder, and many others. It has a pretty peculiar design choice: it has 64 megabytes of DDR2 RAM in package, which means the package you see on the board actually contains two silicon dies. One is the system on chip itself and the other is the RAM, and they are bonded internally, so you don't have access to the RAM directly from outside; also, the package is LQFP. This whole set of choices allows very cheap products. If you don't understand why, it's electronics stuff and it's not the goal of this talk; I will be talking about software support. So, a system on chip needs some software support, because every system on chip is different from another.
It's not an x86 motherboard, which is standardized, so you need what is usually called a board support package, BSP, or software development kit, SDK. The kind of ideal BSP that you would really love to receive for any system on chip, and that you do get with some, is this: a mainline kernel, so you go to kernel.org and get the kernel; a mainline U-Boot or Barebox, so you get the upstream version, with support for at least the peripherals involved in booting; and good, well-written hardware documentation, because at some point you need to understand the hardware. The reason I would like to receive this is that a mainline kernel has very good quality: it's peer reviewed, peer tested, there is a lot of testing infrastructure, so the code quality is very good, and especially you get support for basically any amount of time you want in the future, so it's well maintained and you get bug fixes and so on. Also, using these components, especially U-Boot, means you can reuse the booting and firmware upgrade mechanisms from other devices you've already made, so you reuse the same components and it's less work to make a new product, of course. Unfortunately, this is not what I received, so this is the start of my quest, and the main steps I will talk about are documentation, the kernel, the toolchain, booting, the tools and customer support. So, documentation. As I said, I really would like to have good documentation. On the official website you can find an eight-page datasheet which is basically a list of features, a very detailed list of features, and that's it; you don't get anything else on the public website. If you're a customer you receive a lot of stuff, but it's all under NDA, so I cannot tell you anything about that. But luckily there is a third way to get some material without being a customer with an NDA: you can buy a quite affordable evaluation kit, a development kit, which is available from online Chinese stores.
It's a pretty nice board, and you get not only the hardware but also a DVD with basically a subset of the material that is given to customers. In particular there is this manual, the design guide, which is basically the documentation for each peripheral inside the system on chip. With that you have a list of registers and a list of fields inside each register, with at least a name for each of them. For most of them you also have a description of the device and a description of the fields. This is the very basic thing you can work from when you need to implement, tweak or adapt a device driver. The quality of the documentation is, well, good enough if you're used to reading this kind of document and you have an idea of how systems on chip generally work. You already kind of know where the pieces are, and you can fill in what is not completely or properly documented. So it's okay, it's enough. Okay, so coming to software, the first big piece is the Linux kernel, of course, which has to support your peripherals, so basically it has to have device drivers for each of them. The Linux kernel that is provided is actually derived from Linux 2.6.35.4, which was released in 2010. It's very old; at least compared to the pace of software evolution, it's a lot of time. 2.6.35.4 is not even the latest release in the 2.6.35 stable branch, so in the difference between .4 and .14 you are missing 11 months of bug fixes. That's more than 1,000 commits, so just in case one of those bugs hits you, it's better to merge the changes from the .14 release, which luckily is very easy because there are only minimal conflicts to solve. Unfortunately, the difference with the mainline kernel is huge, uncountable. There's a pile of new features, a pile of improvements, not to mention bug fixes, especially security fixes.
There are bugs and security bugs in every kernel, but if you upgrade you can stay on the safe side. If you don't, well, every old kernel has bugs, and exploits show up after a while, so you are at risk, at least if your device is connected to the internet. Also, you don't have device tree, which means you have the board file mess; it's okay, it works, but it's a bit annoying to work with. The additions to the kernel are provided by the vendor as a set of patch files. These are huge files, there are no atomic commits or anything like that; the first one is 3.6 megabytes, so it's all the stuff together, you just pull it in and hope it works. You cannot see why each individual change was done, the reasons, the details and so on. The total amount of changes is 170,000 lines, which is perhaps a reasonable amount for a new system on chip, but this kernel has several issues besides how it's provided. The serious issues fall mainly into three categories: bugs, missing features and code quality. Here are just a few examples of the bugs I encountered. One is when you try to set up audio: the first thing I did when I went to test the microphone, the audio capture, was something like 'arecord file.wav', and I got a kernel crash. That's because when you don't specify a channel, on the default channel there is a NULL pointer dereference. That was very quick to fix, but it's really annoying. The situation was a bit worse with the H.264 codec. The decoder, for example, works with the sample streams, sample files that have been perfectly encoded. But if there are errors in the file, or, more likely, if you're streaming from the network, it's absolutely normal to have packet loss. If there is packet loss the stream is not perfect, and in that case you get lots of crashes because of NULL pointer dereferences.
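To make this class of bug concrete, here is a minimal sketch of a capture path with the missing NULL check added. All names and structures here are hypothetical, not the vendor's actual code; it only illustrates the shape of the one-line fix.

```c
#include <stddef.h>
#include <errno.h>
#include <assert.h>

/* Hypothetical driver state: the default channel pointer may be NULL
 * if nobody ever configured a channel. */
struct capture_channel { int id; int rate; };
struct capture_dev { struct capture_channel *default_chan; };

static int capture_start(struct capture_dev *dev, struct capture_channel *chan)
{
	if (!chan)
		chan = dev->default_chan;
	if (!chan)		/* the missing check: fail gracefully instead of oopsing */
		return -EINVAL;
	return chan->id;	/* stand-in for the real capture setup */
}
```

Without the second check, asking for capture with no channel configured dereferences a NULL pointer, which in kernel space means a crash rather than an error code.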
So either you check everything in user space before feeding it to the driver, or you fix the driver to avoid the NULL pointer dereferences. Okay, the next kind of annoyance in the kernel is that it's not complete. For example GPIOs, a very basic feature of any system on chip, are implemented with basic functionality, but there is no interrupt support. So if you want your software to do something when a button is pressed, you either have to poll it continuously or implement interrupt support on your own. Power management is another issue. It's actually implemented in two different ways, one with a proprietary API and one with the standard Linux API, but neither of them really works; especially the standard API one, which is incomplete and doesn't really work. So if you need power management, you have to fill in the gaps and make it work properly. Let's have a look at code quality. Code quality, theoretically, might be something you don't care about if you just download the package, build it, use it, and it works perfectly. So theoretically you don't care what's inside, but in practice you do in this case, because you have missing features, you have bugs, so you have to look inside the code. If the code is well written you can quite easily make your way through it, understand what it does and find the bugs, but I'm afraid that is not what happens here. The code quality is generally very bad. For instance, in the provided patches there are more than 500 added lines that start with '#if 0', which is probably not the best way to remove code that doesn't work. But let's look at a couple of more specific examples. One is about the driver model. Since 2.6, Linux has a very rich and effective device driver model, so you can basically mix and match every peripheral with each other, as far as the hardware allows it. It's very modular.
But unfortunately, this is not what happens in some of these drivers. For example, the frame buffer driver has this code snippet: basically, if in Kconfig you enable this display, then #include this .c file; if you enable this other display, then #include this other .c file; and so on for several other display models. This is because basically there is no display driver: the frame buffer also acts as the display driver, and some of the frame buffer functions are actually implemented in the included .c file, which differs from one display to another. This means, for example, that you cannot reuse a standard display driver with this frame buffer; it's not modular. It looks like code coming from some firmware design that never needed a flexible device driver model. Another example of code quality is the H.264 codec memory allocation. As it handles video, it's quite normal that the H.264 codec needs pretty large amounts of memory, and it needs it contiguous. Allocating large contiguous amounts of memory is not trivial, but in this case it is done this way: in the H.264 codec code there is a function, getAVCBufferSize, which returns the number of bytes it needs for the contiguous buffer. You just ask it and it says 2.5 megabytes. This is not a constant, it's not a #define; it's uppercase, but it's just a variable, which by the way is not compliant with the kernel coding style. This function gets exported and it's called by the memory-management code: there is a reserveNode0 function, called during pretty early kernel setup, and what it does is call getAVCBufferSize to know how much it needs to allocate. This means that if we applied this model to the whole kernel, the memory-management code would have to know about every single driver that needs more than a few kilobytes of memory, so it's definitely not scalable.
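As a sketch, the allocation anti-pattern looks roughly like this. The two function names follow the talk; everything else (the variable name, the bodies) is an illustrative guess, not the vendor's actual code.

```c
#include <stddef.h>
#include <assert.h>

/* "2.5 megabytes": uppercase like a constant, but just a plain variable. */
static size_t AVC_BUFFER_SIZE = (5 * 1024 * 1024) / 2;

/* In the H.264 codec code: exported so early boot code can call it. */
size_t getAVCBufferSize(void)
{
	return AVC_BUFFER_SIZE;
}

/* In the early memory-management setup: it has to know, by name, every
 * driver that needs a large contiguous reservation. */
size_t reserveNode0(void)
{
	size_t reserved = 0;

	reserved += getAVCBufferSize();	/* H.264 codec buffer */
	/* ...one more line here for every other greedy driver... */
	return reserved;
}
```

The coupling goes the wrong way: generic memory setup calls into a specific driver, instead of the driver requesting memory through a generic allocator.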
And it also has a very practical implication: since this code runs in early boot, it cannot be built as a module, which means the whole driver cannot be built as a module, because otherwise this function would not be available at boot time. So the whole driver must be built into the kernel, and if you have a bug in the driver, and as I said we do, you have to upgrade the whole kernel to upgrade the driver. Module compilation is simply not possible anymore. All right. So this was an overview of the kernel code quality, and this is how I handle it. Approximately, I do this: in my Git repository I start from 2.6.35.14, so I don't care about .4, I want the bug fixes. I apply the vendor patches, fixing the few conflicts, on a branch holding the vendor drops. This is the pristine copy I get from the vendor, but versioned in Git, so I can do git diff, git grep and whatever. Then I have a second branch where I apply fixes to the kernel, and into which I merge new vendor patches as soon as they are available. And then I have a third branch where I do my development, into which I merge stuff from the fixes branch. So at any time I have, let's say, the pristine copy from the vendor, a vendor kernel that builds, and my own kernel. This is one possibility to handle it. Okay, let's move to the next step, which is the toolchain. Of course you need a toolchain, but it's less obvious that the vendor should provide one. They do: they provide a toolchain in the BSP, and it's based on very old components, just like the kernel: basically GCC and uClibc from 2007. So there is no C++11 support, they are not at the latest bug-fix release, and they also supply a few other libraries that they just picked with some criteria. But this toolchain is so old that you basically cannot build a lot of modern software with it. So your algorithm to pick a toolchain, at least the one that I follow, is: first, don't use the vendor-provided toolchain, it's too old.
It's not usable. Okay, you could use a pre-built toolchain, as many people do with embedded systems; it's easy, you don't have to build it, there's no build time, it's ready to use. But since you have kernel 2.6.35, you need a toolchain whose kernel headers are no newer than that, because otherwise you might have something not working. So such a pre-built toolchain is probably itself quite old. Actually, in some cases I do use pre-built toolchains with more modern kernel headers, and to some extent they work, so it's a risk you might want to take. But probably the best idea is to build your own toolchain. It's not even that difficult if you use the proper tools, and there are many. There is crosstool-NG, which is very powerful and flexible, but if you want something simpler, your build system probably does it: Buildroot, OpenEmbedded and others can build a toolchain. So you can build one with 2.6.35 kernel headers and a quite modern GCC. Maybe a very modern GCC and a very modern C library will not work very well with such old headers, but if you stick to something like GCC 4.8 or similar, it's probably going to work well. That's okay. So the next big topic is booting. Booting in embedded systems is very crucial, very critical, because it differs from one machine to another due to hardware restrictions or software needs, and software upgrades need to somehow interact with booting. It's a very relevant matter and it's very important to get it right. And in this BSP there is no bootloader, at least no standard open-source bootloader: no U-Boot, no Barebox, nothing that you probably know about. There are some proprietary bootloaders; you have the sources, but they are not open source and they are tailored to this specific machine. So reusing components is not an option: if you have an existing bootloader scheme you want to reuse, well, at least it's not straightforward. So let's talk about booting in detail.
Above you can see a quite standard booting process. There is no single standard booting process, but this is among the usual ones, let's say, and I'll take it as my example: booting from NAND flash. The hardware itself, the system on chip, as most systems on chip do, has a boot ROM inside, so it will execute that code internally, and that is something you cannot change. What it does is load a small firmware into internal RAM. It has to be small because the main RAM has not been initialized yet: the 64 megabytes of RAM are not initialized, because even though they are in the package, that's something the system-on-chip die does not know about. So this small piece of code is small and usually non-interactive; it does only one thing, which is initialize the main RAM, so you have plenty of space, and then load the main bootloader from the flash. U-Boot is the most widely used, and it gets loaded into the DDR RAM. Then U-Boot, according to its environment variables and boot scripts, mounts, in this example, the UBIFS root file system, loads a kernel from there into RAM and boots it, and then the kernel will use the root file system itself. So this is one of the standard possibilities. What you can do with the bootloaders provided with this chip is actually the flow below. The first part looks pretty similar, and it is: you have a first loader, called NandLoader, which is just an SPL, so it initializes the external memory and fetches something from the flash into it. The second piece that gets loaded is called NVTLoader, and it's very different from U-Boot or anything else. For normal booting, following the arrow going down, it mounts a FAT partition on the NAND flash and loads from there a file called conprog.bin, which is actually a Linux kernel image with an initramfs inside. That's it: it loads that and boots it.
And in the initramfs, the init script mounts the FAT partition and looks there for a shell script to start. That shell script can of course access the FAT partition, other files, and do anything. But there is another possibility: if you press a button during boot, then NVTLoader follows the other path and just exposes the FAT partition as a USB storage device. This means you can connect the board to a PC and look at the files in the FAT partition. This booting scheme has an interesting advantage: it makes deploying demos very easy. The vendor provides some demos; each is a conprog.bin file, a script and maybe something else. So you just press the button during boot, mount the storage on your PC, copy the files provided by the vendor as a demo, and there you are: reboot, and the demo runs. It's very easy to test. There is, for example, a demo to test H.264 video playback, another for testing audio, for testing Ethernet and so on. They are also very easy to deploy for hardware designers, who maybe don't use Linux but Windows. But this scheme also has disadvantages. One is that it uses FAT. FAT is absolutely unreliable, especially on power loss, and it cannot contain a root file system at all: it doesn't have permissions, users, symlinks and so on. So it's not an option for your root file system. Also, FAT sits on top of NAND, which is unreliable, and there is a flash translation layer that provides a somewhat reliable virtual partition to put FAT on. This works, but it's a binary module, so if there's a bug in there you can wait for the vendor to fix it, or otherwise you're on your own. And this scheme makes no provision for redundancy at all: if any of these components is corrupted, or maybe you break it during an upgrade, the device will not boot anymore. You have to connect it to a PC and reflash it, but if the device is deployed in the field, that doesn't work.
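For contrast, the redundancy this scheme lacks usually looks something like the following sketch: a loader keeps two copies of the next-stage image and falls back to the second if the first fails an integrity check. This is purely illustrative, not code from this BSP or from any real loader; real loaders use a proper CRC rather than a byte sum.

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* One candidate copy of the next-stage firmware. */
struct image {
	const uint8_t *data;
	size_t len;
	uint32_t checksum;	/* expected value, stored next to the image */
};

/* Toy integrity check: a byte sum stands in for a real CRC. */
static uint32_t simple_sum(const uint8_t *p, size_t n)
{
	uint32_t s = 0;

	while (n--)
		s += *p++;
	return s;
}

/* Return the index of the first copy that passes the check, or -1:
 * -1 is where this BSP's scheme leaves you, with a bricked device. */
int pick_boot_image(const struct image *imgs, int count)
{
	for (int i = 0; i < count; i++)
		if (simple_sum(imgs[i].data, imgs[i].len) == imgs[i].checksum)
			return i;
	return -1;
}
```

With two copies and an upgrade procedure that only ever rewrites one of them at a time, a power cut during an upgrade can no longer brick the device.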
Other issues are more related to the Linux side. One is that the root file system in this scheme is an initramfs. The initramfs is cool, it's fantastic, but it lives in RAM. You can change files, but the changes are volatile unless you save them somewhere else. It also has a limited size, and everything it contains stays in RAM: everything, even things you don't use, sits there for the whole life of the initramfs. You could still persist changes in the FAT partition, but then of course you run into the issues that FAT has. Also, none of these components passes a command line to the kernel, so if you want to change how the kernel boots, any parameter, you cannot. For example, this is an issue during development: if you want to mount your root file system over NFS, you have to change the kernel command line, and you cannot. Well, you can, but you have to rebuild your kernel, reflash it and then boot, and when you want to switch back to booting from NAND you have to do it all again. So it's very uncomfortable. Finally, these loaders cannot load a kernel via TFTP, which is another very handy thing when you are doing development: if you're developing a kernel and want to test a feature that is not in a module, you could just load it via TFTP and test it very quickly, but that's something you cannot do in this case. Okay, so I started looking for alternative booting options that solve at least most of these issues, and here are the steps, a step-by-step guide. The first step is: keep the system as it is, but add a SquashFS root file system. Then you have a file system that can be as large as you reasonably want, and it's efficient, it doesn't keep everything in RAM. So it's a little step ahead, but it has drawbacks: it's read-only, and it's still not possible to do atomic upgrades, because it's on FAT, and so on. So it's not really a big improvement. Next step.
Okay, in the Linux world, once we have a Linux kernel running, we have plenty of options. We have UBI and UBIFS, which are very good for NAND flash: they are efficient, quite scalable, they handle bad blocks, they work very well, and UBIFS is a proper Linux file system. So one idea is this: you shrink the FAT partition to be just large enough to contain the kernel, and then you tweak the init in the initramfs to not look for a shell script but to mount your UBIFS root file system. In that case most of the things that are relevant for your product are inside UBIFS, where they are safe and you can do atomic upgrades and so on. On the other side, you lose USB storage upgradability, or rather, you can still use USB storage to upgrade, but only for the kernel and the initramfs; for the UBI part you have to do it by other means. So basically what we did is: the FAT area is now shrunk to the minimum size, and NVTLoader no longer does its specific feature, which is upgrading via USB cable for demos. We lose most of the advantages of this scheme, so let's get rid of the scheme entirely, as far as possible: remove the FAT partition, remove NVTLoader. This is possible because NandLoader can just load a raw Linux image from NAND, not a file in a file system, and boot it. So you can have your kernel, a Linux image with the initramfs inside, directly in NAND, and it will be loaded and it will mount your UBIFS. So you have fewer components, and thus fewer bugs, less boot time and so on.
Okay, the flip side is that your kernel is now on bare NAND, so it's not in a safe place, and you need to at least provide space for two copies of the kernel in case one of them fails. But the loader does not directly handle redundancy, so you could tweak it, or you could use some different scheme, maybe using kexec to load another kernel from a safe area in UBI and so on. There are several variations you can elaborate on. But actually the final step would be to port U-Boot to this system on chip. This would probably be the best option: you can keep the SPL, which is quite okay except that it's not redundant, and in this scheme you basically have all the possibilities you know from other embedded systems. So if you have other boards using U-Boot, you can just use the same scheme, do the same things, and save a lot of time while having a lot more features. Unfortunately, if you ask your boss to plan this activity, you have to take into account that it will affect time to market, and unless you're used to this kind of activity it's very difficult to estimate how much time it will take. Okay, I'm afraid I didn't do this, so I don't have U-Boot for this chip to present. Okay, the next topic is tools. Actually, in the ideal BSP that I presented at the beginning there are no tools, if you noticed. That's because it would be great if you could just work with standard, open-source tools, without having to rely on anything vendor-specific. But actually several system-on-chip vendors tend to have a boot ROM that speaks a proprietary protocol, so you need a specific tool to interact with the chip when there is no firmware, no software, nothing in your flash. At least for development and for production, you need to interact with the chip, without any piece of software stored in its memories, to store the first piece of software there; and it's also useful for development.
Basically, you need some tools, and what you get is this: a tool that lets you connect via USB to the system on chip, talk to the boot ROM, and flash several types of memory; it can also put your code in RAM and execute it directly. So it's pretty flexible, and USB is very handy, so at least the design is quite good. The software itself, well, it works, but it's Windows-only, so you need a Windows machine in addition to your Linux machine just for this tool. It's completely proprietary, there are no sources, so you cannot modify it, extend it, port it to Linux or anything, and it's GUI software, so you cannot put it in a script to automate any multi-step task. Finally, the protocol between the boot ROM and the PC is not documented, which means you cannot write your own tool; you are basically locked to this one, unless you want to reverse engineer the protocol or the tool itself. Another peculiarity of this tool is that it maintains a sort of partition table in the NAND flash. This is unusual: there are partition tables on hard drives and similar media, SD cards and USB drives, but for NAND flash there is no standard for partition tables. It might be a good idea, because it actually helps organize the data, but it's done without following any standard, so it's a proprietary scheme tied to how their other tools work. And since the tool enforces this partition scheme, you cannot get rid of it. It doesn't hurt very much, but you just have to know it: basically, near the NandLoader it stores the list of where to find the next firmware pieces and at which address in memory to load them. The next topic is customer support. If you had a perfect SoC with a perfect BSP you wouldn't need customer support, but I don't think that's realistic even in the best cases, so you need some
customer support at some point. And generally speaking, with system-on-chip vendors there are several issues that can make customer support more or less effective. What matters is that if you have standard mainline code, you don't have to stick with the vendor: you can get support from the community, from third parties, commercial support providers and so on, so you have many possibilities. Whereas if, as in this case, you have the sources but they are not public and it's a very old kernel, almost nobody will support you except the vendor itself. And if you don't even have the sources, as with the tools or the hardware itself, it's only the vendor, so you have to rely on them. Something that can go wrong in customer support is, well, the engineer in the vendor company who knows the answer is not directly reachable: there are salesmen, a customer support department and so on, so it might be difficult to reach the person who knows the answer. Responsiveness is an issue too: some are responsive, some are not, and you may have time-zone issues; if you are in Europe and customer support is in the USA or in China, you get a one-day delay at each iteration. Actually, in my case, for this chip, customer support is quite responsive, so you get a reply the next day, but sometimes they solve the issue and sometimes they don't. Here's a good example. I wrote an email saying the proprietary tool doesn't work on my PC, and the reply was: it works on mine, see the screenshot. Okay, so next day, next email: okay, on mine it doesn't work, can I have some logs from the software, so I can send them to you and you can debug it? And the reply was: I'm sorry, adding logging would not be practical. I don't know exactly what that means, but basically I had to solve the issue on my own. In other cases they did solve the issue, so it's like a 50-50 situation. Okay, so let's draw some conclusions. The final result on the product to be developed
is a product that works, but the overall quality is lower than it would have been with a system on chip with good support. It has issues that shouldn't be there, and it doesn't fully exploit the hardware: the hardware would allow much better, but the software is an obstacle. We spent a lot of time, as you might imagine, supporting ourselves, fixing bugs, filling in the gaps and reworking the booting mechanism, as I showed you. A lot of time spent is a cost, and it has an impact on time to market, so it's bad for everybody. So I started wondering what could be done to improve the situation. Okay, what can I do to improve the situation as an embedded Linux engineer? What I can and should do is try to assess any potential problem, any obstacle, as early as possible, especially with respect to booting and to hardware support, the drivers and so on: test them as soon as possible, try to understand any possible issue as soon as possible, so that if it's soon enough you can influence the choice of one component or another. Also, as a hobbyist or hacker there is something you can do: if you want to buy a board, buy one with good support. But I'm afraid that as a single hobbyist you will not move the market with your buying choice, unless you really build something so viral that thousands of people will want to do it; okay, congratulations in that case. Another thing you could do, and that would be very much appreciated, is roll up your sleeves and start improving the support and mainlining it. This has happened with other systems on chip, Allwinner for example, and the result is great, but I'm afraid this is not work for a couple of days, absolutely not. Who is really able to change things is, of course, the vendor, because it's their choice to provide a good or a bad BSP. So this is a basic equation: a good BSP means happy engineers, happy engineers make good products, and good products sell more chips. This is a somewhat generic rule, but it somewhat
applies, but it is generally not very well perceived at high-level decision points. So let's get to some more practical details. First thing: don't reinvent the wheel. It will take less time to port U-Boot to your device than to implement your home-grown bootloader, and when you've done that you get a lot more features. And after it's there, if you mainline it, people will improve it, basically for free for you: people will fix your bugs in the kernel, will add features, without you even having to lift a finger. So try to push code to mainline: it's definitely the best thing for the long-term software support of your device. If you do that, well, it's expensive, it takes time and it takes skilled people, but in the long term it's rewarding: the products built on your chip will be better. And finally, leverage the community. This applies to support as well: let your engineers reply to questions on public mailing lists, on IRC, on public channels, so everybody benefits. If I look for an answer, it has probably already been given somewhere, so I don't need to wait a day for a reply, other people can improve the answer, and your engineers will not have to reply to the same question for ten different customers. Also, making cheap boards that are hacker- and maker-friendly, like the Raspberry Pi and the BeagleBone Black and so on: they sell a lot of chips, and they make a system on chip easier to use for real products based on custom hardware. I'm afraid in this case it would be pretty hard, because the chip is really slow compared to any hacker-friendly board you can find around, so it would be very difficult; but if you have even a little more CPU power, or some specific feature that other chips do not have, this could really be a way to let people start hacking on your software and contributing to it, making it a little better. So, okay, I hope this will happen with some vendors at some point; it happened in the past
to some vendors, so there is hope. OK, that's all.

Yeah, question? OK, I think this man was first. "Do you have an opinion on which vendor is best, if you look at the previous slide?" So the question is: do you have an opinion about which vendor is best? Well, I don't want to advertise, and I definitely don't have a complete view of the market, but basically: just go to kernel.org, download the latest kernel, and check which chips are supported. "I was thinking more about the booting process." About the booting process? Check the U-Boot sources in the same way. But anyway, for the chips I have seen in the past, most vendors provide U-Boot. Maybe it's an old U-Boot, not in mainline, but it's still U-Boot, so it's still a standard component; even a five-year-old U-Boot already does a lot of things, so it's better than nothing.

"My question would be: why would you choose such a chip with such crappy software support? It's not 2007 anymore, when you didn't have any choice." Yeah, well, the question is why we chose this chip and not another one. Because software is not the only component: there is also the hardware, and the cost. So yeah, it was cheap.

Any other question? Yeah. So the question is that it's a problem when vendors don't mainline; if they did, everything would be in the mainline kernel and you wouldn't need to do many things yourself. OK, I agree with you, it's a problem. For vendors, mainlining their software support is not mandatory, of course, so they can choose, and it takes time to do it, and it takes money, because people have to work on it; it takes more time than developing bad software. Good software is more expensive than bad software. So they can just send the salesman, who says "there is Linux", and it's real, there is Linux, but it's a lot harder to see the code quality. So some vendors take on the responsibility of mainlining and get the advantages; some others don't make the investment. That's what happens, sorry. As to why there are vendors that are not
supporting mainline: well, I think they are a bit short-sighted, but you should ask them, I'm sorry.

Yeah, OK, so the question is: did you find any relationship between the quality of the software and the quality of the hardware? I'm not a hardware designer, but I work with the hardware designers, and the hardware generally works quite well. We had a couple of problems, but overall nothing really big. So no, the hardware does not show big problems; the two are quite unrelated. Of course, the people who do the hardware and the people who do the software are very different, and maybe also their bosses, so the approach is different.

I'm sorry, I cannot hear you. "Have you told the marketing people at the chip manufacturer all the stuff you presented here in your talk?" So, if I understand correctly: did you report to them the issues you presented here, the things you don't like? OK. No, I haven't told them much, really. I don't have access to marketing and high-level people; I only have access to the engineers who provide support. But I might send them a link to the slides.

"This is sort of related: if you buy a Windows laptop, it generally has a 'good for Windows version X' sticker on it. Could some sort of marketing effort, a 'guaranteed mainline Linux' logo, start to move the market? Then you could point a marketing guy at it and say: well, your competitor has guaranteed mainline Linux. Do you think that could make a difference?"
Well, that's very interesting. So the question is: do you think something could change if we had some sort of logo, like "Linux mainline compliant", like the "works with Windows" stuff? Well, it could probably improve the situation. If there were something standardized, maybe by the Linux Foundation or somebody, so you could put on a logo saying "this is in mainline" (so good quality), or "the sources are available" (so not-so-good quality), yes, it might be good. I cannot say whether it would change buying decisions, but it could be a good selling point. OK, one more question? One more question. OK, so thank you.