So hello everybody, good morning, and happy Pi Day to all of you who are celebrating. This is the Salmon Diet: upstreaming device drivers as a form of optimization. And I'm Gilad Ben-Yossef. So I thought I'd say a few words about who I am and what I do. I'm the maintainer of a device driver for the ARM TrustZone CryptoCell piece of IP; I'll say a few words about that. I also dabble in general kernel cryptography and security. To give a taste of the kind of thing I do, I recently submitted a patch set introducing the Chinese SM4 crypto algorithm to the kernel. I've been working in various forms in and around the Linux kernel and other open source projects for quite a while, long enough that you can see all the white hairs. I co-authored Building Embedded Linux Systems, second edition, and I've done a few other things that you can see here. So yesterday I was frolicking around the hotel and found out, to my amusement, that we were actually on Salmon Street, which kind of makes sense, because as I've been told, people in Portland, Oregon really like the salmon, which is logical. You can see here a picture, taken at Willamette Falls, which I gather is not too far away, of salmon leaping upstream, because this is what salmon do. But it appears that leaping upstream is not unique to salmon. It also happens in the Linux kernel and on the Linux kernel mailing list. Leaping upstream, for device drivers and device driver writers, is when you have a piece of code, a device driver in my case, that is already written, and you wish to submit it to the Linux kernel. But as these things often happen, said piece of code is not necessarily, at least at the beginning of the process, in a form that is acceptable to the Linux kernel community. And therefore there's a process of getting that fixed, in order for that piece of code to be formally accepted into the Linux kernel. And that process goes through the staging tree.
Basically, the Linux kernel community accepts your code on probation, on the condition that you work with the community to get it fixed. And when all the things that need fixing in the eyes of the community are fixed, the code can mature into the main kernel tree. Now, as I said before, I've actually been working in and around Linux for quite some time. I've submitted patches, I've made changes to the kernel, but I never went through this process until one year ago, when I was hired by ARM to handle the ARM TrustZone CryptoCell device driver. The ARM TrustZone CryptoCell itself is a piece of hardware, basically a design for hardware, or IP as they call it, that started its life in another company, originally called Discretix, then Sansa Security, which ARM bought. And when ARM bought that company, it discovered that the device did have a driver, but it was out of tree, and it was decided to do the right thing and upstream the damn thing, which is where I came in: I got hired to do that. Now, before we go into the specifics of what happened there and some of the interesting things that I learned along the way and hope to share with you, I thought I'd say a few words about what the hell ARM TrustZone CryptoCell is. Because I'm not a marketing guy, I went to the marketing department and asked them for a slide explaining what it is and what it does. Unfortunately, they gave me this slide. I thought I knew what that piece of IP does, but after reading the description the marketing department gave me, I was not so sure anymore. So I decided that was not helpful and wrote my own Reader's Digest version. Basically, the ARM TrustZone CryptoCell is a hardware block, which ARM designs and people implement, that handles a lot of aspects of system security. It provides cryptographic algorithms, but also a root of trust, secure boot, secure debug.
Basically, it's a block of hardware that does security. It goes inside the system on chip; it's not part of the core, but it is part of the system on chip, and it serves both the trusted and the untrusted world, for those of you who are familiar with the ARM TrustZone distinction. That's basically all you need to know for the purpose of this presentation. So when I got hired, I took over the driver for the thing, and it should be noted that by this time there was an existing device driver. It was working. It was being used in the field by numerous customers and on a lot of devices. So it was really not the case of something new and untested. And I started the work to get it upstream via the staging tree. This is sort of the original commit that I sent. Actually, I think I'm lying; I think it's the second one, I botched the first one. I sent an email with a link to the Git repository, because I figured it's a big driver, I'm not going to paste all 7,000 lines of it into the mailing list. But I was told this is not the way to do it; you have to break it down. So it's a process, an interesting process. The thing about this is that there's this device driver, a bunch of code, and it is actually working. The people working on it had done a good job, in the sense that this is something critical that people depend on. And it was open source, because it was licensed under the GPL even before. But the people who wrote it didn't necessarily think about somebody else taking a look at it. And of course, when you go through the process of upstreaming, that is exactly what happens. I found this little meme that explains how that made me feel. You have all this stuff in the driver that people put in there; it's working, but it's maybe not very easy on the eyes. So I started this process of upstreaming, and a big part of the process is getting feedback from the kernel community and from the maintainers.
"You're not using this API correctly," and so on and so forth; basically, changes to places where things were not being done exactly as they should be. That was actually a rather small part of the process, a very important one, but a small one. A huge part of my time was spent with a personal enemy of mine called checkpatch. For those of you not familiar with this creature, it's basically a script. You're supposed to run it on your patches or code, and it lets you know where you screwed up. Sometimes big screw-ups, but usually things like, oh, you did not match the alignment of the parentheses, or you left some trailing whitespace. It's a really helpful tool when you make a change to the kernel and want it to be accepted and conform to the standard. But in this particular mode of operation, where you have this huge chunk of pre-existing code that you're only starting to get familiar with, and none of it is written according to the kernel coding style, and you run checkpatch on it, well, it's your own version of hell. You get this huge output of a bazillion lines of "oh, you missed a space there" and stuff like that. What can you do? There's a reason why the kernel coding style exists, and like it or not, this is what I needed to do. So I started doing that. Side by side with the bigger issues and the advice that I got from the maintainers and from checkpatch, I started addressing the issues, learning a little bit both about the driver, which was new to me, and about the kernel APIs it was using. And little by little it got better, in the sense that it adhered better to the Linux kernel coding style, the code looked better, and so on. And this, I guess, is a natural process. But at some point in time, I began to notice a strange pattern as I was making these changes.
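To make this concrete, here's a toy, hypothetical example, not from the real driver, of the kind of purely mechanical cleanup checkpatch drives: the logic is untouched, only the layout changes to match the kernel coding style.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical example of a checkpatch-driven cleanup. The "before"
 * version (omitted) had camelCase names, spaces instead of tabs,
 * continuation lines not aligned with the opening parenthesis, and
 * trailing whitespace. The "after" version, in kernel coding style,
 * computes exactly the same thing:
 */
static bool cc_addr_in_range(unsigned long addr,
			     unsigned long base,
			     unsigned long len)
{
	return addr >= base && addr - base < len;
}
```

None of this changes behavior, which is exactly why a reviewer can apply such patches with confidence.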
And mind you, these changes were not adding features, not subtracting features, just changing this spacing versus that, or using this API versus that, and making the logical changes that follow from that. So while there was a general expectation that the code would get better, what I did not expect, and found surprising, was the pattern that emerged: the line count of the driver kept falling. Each patch set that I sent for review had more lines deleted than lines added. In the beginning, that made me rather pleased; when you're deleting stuff, it's usually a good sign for code quality. But at some point I noticed this is not something random. It keeps happening, more and more. Sometimes there's a huge drop when I change something. And the conclusion of this process, from where I started until it ended, was that I had deleted 30% of the line count of the driver, and it still kept doing the exact same thing. Now, this is good, right? This is a good thing, but it is also surprising, because think about what that means. It means that 30% of the code of the driver, as it was previously written, didn't do anything. It was useless, or actually worse than useless, because when you have code that you can remove while the code still does the same thing, you just have room for more bugs, right? And this is a security-critical part of the system. It handles stuff like the root of trust and encryption keys and so on. It's really not good to have extraneous code in there. So this got me intrigued, and I asked myself: what went on there? How was I able to cut this 30% of the code that did nothing? What was the reason we had 30% of the code that was useless? What was it that the process of upstreaming made me do, and what can we learn from it? And as you can expect with these kinds of things, there's no one cause, right?
I've actually been able to identify, I think, seven different causes or reasons. And you can look at each of them both ways: as a way useless code gets added, or as a way the upstreaming process removes bad code. So basically this is what I'm going to talk about. I'm going to share with you what kind of patterns were revealed to me as I went through this upstreaming process and what I learned from them. Some of them are rather mundane and not surprising. Some of them, I don't know, maybe we can learn something interesting from looking at them. So let's get started. The first thing that was really obvious is something which I chose to dub reinventing the wheel. I know the font is way too small for you to read, and that's fine, because the details here don't matter all that much. What you're seeing here is an original function, an SSI buffer manager copy-scatterlist-portion function, which in the end, after all the changes, continued to do the exact same thing and now looks like this. So the name changed, but you can also see it kind of shrunk. And you can ask yourself, well, what happened here? The answer is that the major thing this function was doing was replicating a certain API that already existed in the kernel. It was not doing it exactly the same way, but you can sort of massage it: you change the parameters a little bit and compute something, so that you can express the old function in terms of the existing function with a little wrapper. And that got that big function down to this one. This in itself is not a surprising thing, but the question it raises is: how can I identify this pattern, either when I'm writing code or when I'm reviewing code for upstreaming? And it turns out there's a really easy way, I think, to think about this. Ask yourself the following question: what does this code do? Obviously you need to understand what it does, right?
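As a hypothetical sketch of that wrapper trick: all names here are invented, and plain memcpy() stands in for whatever existing helper the kernel already offers (in the real driver it was an existing scatterlist API). The bespoke original, dozens of lines walking the buffers by hand, collapses into a small parameter massage plus a call to the stock routine.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical driver-local copy direction flag. */
enum cc_copy_dir { CC_TO_DEVICE, CC_FROM_DEVICE };

/*
 * Before: a long hand-rolled loop copying byte by byte (omitted).
 * After: massage the parameters and delegate to the existing helper.
 */
static void cc_copy_buf(void *dev_buf, void *cpu_buf, size_t len,
			enum cc_copy_dir dir)
{
	if (dir == CC_TO_DEVICE)
		memcpy(dev_buf, cpu_buf, len);
	else
		memcpy(cpu_buf, dev_buf, len);
}
```

The wrapper keeps the driver's calling convention, so call sites don't change, while the actual work is done by code that somebody else maintains and tests.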
And ask yourself: is the problem that I'm trying to solve here something unique to my specific case, my specific hardware, my specific driver? Or is it something common, that other drivers in the same subsystem, for example, have to deal with, the same issue? And if so, ask yourself: well, what are these guys doing? Because if they're dealing with the same problem, then maybe I can do the same. And with this methodical process, when you read code or write code, whatever the case may be, of asking yourself "is this problem unique to me?", and if it isn't, going to find out what the other users, the other code with the same issue, are doing, two things can happen. Most of the time you will find out that there is one or more sets of APIs which you could just use and be done with it. And sometimes, and that happened to me as well, you will find that there's similar-spirited code replicated in a lot of places, and then, if you want to be a good kernel citizen, you get the extra pleasure of writing the new API that expresses it and changing all those places to use it. If you follow this process, it actually deletes quite a big chunk of your driver, because your driver probably lives in some subsystem, and a lot of the issues that your driver is dealing with are probably the same across the whole subsystem. So this is the reinventing-the-wheel pattern. Don't do it. Another thing which had a great impact, once I looked at it, is the whole issue of backwards compatibility. And there are a lot of jokes about backwards compatibility: backwards compatibility is compatibility, only backwards; or that quote from someone at Microsoft, if you're backward compatible, you're really backward; and so on. How does this come into play? In my specific case, it meant that the driver, which you remember lived as an out-of-tree project on the side, had a whole bunch of these ifdefs, right?
It was targeting a certain version of the kernel, but it had ifdefs for older versions. So obviously, if you go and just delete those, things get simpler and smaller. But it does not stop there. This code was there because we needed backwards compatibility, because, sadly, not all our customers are necessarily on the bleeding edge. So just deleting it is really not a good solution. And that leads to the question: how does one handle backwards compatibility with older kernels when you have a piece of hardware that you need to support across many kernel versions? It turns out this is actually tied to something that seems like a different issue but is really related: how you treat different versions of the same hardware. And I don't know if this is a general pattern, although I think it is, but this is what we were doing. We had this pattern of, say, a certain version of the hardware; here it's CCREE, CryptoCell 712. We had some version of the driver, 1.1 in this case, and it supported kernels 3.18 and 4.9 with these ifdefs. As development progressed, the driver got a new version and a new version, and at some point maybe a new supported kernel version was added. And at some point in time, when a new project was started in this hardware company, they basically replicated the same logic that guided the hardware development. That is, just like the hardware design was copied and started anew to form CryptoCell 713, they did the same thing with the driver. They basically forked it. So version, say, 1.2 of the CryptoCell 712 driver became version 1.0 for CryptoCell 713; it started out as the same code base, just with small adjustments to support CryptoCell 713. And maybe there was a change in the supported kernel versions, and that continued onwards. And of course it does not stop there, because you have more versions of products and more versions of the kernel you wish to support. And that gave rise to two things. A, those ifdefs in the code.
And B, you now have several very similar, but not identical, not compatible, versions of the driver to support versions of the hardware. This is the way things worked, and I would call it the hard liquor way of managing stuff, because you need hard liquor to handle all this complexity. Just think about what happens if some issue is discovered in one of the versions. You now need to find out whether it affects all the others and make the necessary changes. So to get past this, what we ended up doing, and are actually still in the process of doing, was turning this on its head, and moving to the back-to-the-future way of handling things. In the back-to-the-future way, we do things differently. We have the upstream kernel version, and of course, over time we add more features and fixes with new kernel versions. So 1.1, 1.2, 1.3 go into newer and newer kernel versions. And in them we add support for new product revisions. So the same driver now supports all the products, incrementally, and that means that when some bug is found, we don't need to ask the question of, okay, which other product line does this affect? This is very convenient, but it does leave the question: okay, but what about backwards compatibility? What about backports? So it turns out, and this is still being proven out, at least internally in our organization, but I suspect it will prove itself, that it's way easier to take the latest version of the driver in the latest upstream kernel and backport it to a known stable kernel than to do it the other way around. And there is even a project with significant infrastructure, the backports project, that helps you do that. They have a lot of mechanics, a framework if you will, to do this. Now think about what that means, not necessarily through the eyes of an engineer, but more as a product manager, from the business side.
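In code terms, the one-driver-for-all-revisions approach usually boils down to a per-revision parameter table that the driver consults at probe time, rather than forked code bases. Here's a hypothetical sketch; the revision numbers and fields are illustrative, not the real CryptoCell data (in a real Linux driver this kind of data typically hangs off the `of_device_id` match table's `.data` pointer).

```c
#include <assert.h>
#include <stddef.h>

/* Per-revision hardware parameters, one table entry per product. */
struct cc_hw_data {
	int rev;           /* hardware revision id (illustrative) */
	unsigned int sig;  /* expected signature register value */
	int has_feature_x; /* capability flag newer silicon adds */
};

static const struct cc_hw_data cc_hw_table[] = {
	{ .rev = 712, .sig = 0xDCC71200u, .has_feature_x = 0 },
	{ .rev = 713, .sig = 0xDCC71300u, .has_feature_x = 1 },
};

/* Pick the right entry at probe time; unknown revision refuses to probe. */
static const struct cc_hw_data *cc_lookup(int rev)
{
	size_t i;

	for (i = 0; i < sizeof(cc_hw_table) / sizeof(cc_hw_table[0]); i++)
		if (cc_hw_table[i].rev == rev)
			return &cc_hw_table[i];
	return NULL;
}
```

Adding support for the next revision then means adding one table row (and whatever code the new capability needs), not forking the driver.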
It means that you always support all the versions of the product with the same driver. For a customer that wants to switch to a new revision of the hardware, it's very easy: it's the same driver, so everything works the same. A bug that is found in one version is automatically fixed across all versions of the product. And if you're doing it right, if you're using infrastructure for the semi-automatic backporting, it means that when a customer comes and says, well, it's really great that you're on the bleeding-edge 4.17, but I'm really still on 4.9 or 4.14 or whatever, you have an automated process; you can practically click a button and get a version that suits them. Now, you still have to verify that, right? You still have to go through QA or whatever, but at least you've taken the engineering effort of doing the backport out of the equation. If you put that into the machinery of your integration, it works really, really well. And this way of doing things allowed us to remove all the backwards compatibility support, all those ifdefs that we saw, and it turns out that even after adding the support for all the previous versions of the product that we wanted to support, we still got a significant drop in line count. And if you want to think of it from a different perspective, look at the total lines of code that we needed to support across all the versions: that was reduced significantly, because before we had several slightly different versions of the same driver for different versions of the hardware, and now we have just one. And of course, most of it is exactly the same. So backwards compatibility was another source of code that we found we could remove, and things actually got better. Moving forward, there are a lot of places where the programmers were simply using the wrong API. And this is really a great example.
So, the original driver had a sysfs interface to allow basically low-level debugging, peeking into some registers to find out what's going on, and tracing of events in the driver, which were really of no interest to almost any of the users; it was really for development and debugging. And it was originally, as I said, implemented with sysfs. Now sysfs, as the documentation says, is "the filesystem for exporting kernel objects," whatever that means. And it provides a means to export kernel data structures, their attributes, and the linkages between them to user space. So if you're an external developer, not necessarily one that is in tune with the kernel community's way of doing things, you read this and say, okay, this makes sense. This looks like a good interface to use to expose my debug levers or whatever. But actually, it's really the wrong one. This is not the one you want to use. For the kind of things described, basically debug tracing, there's actually debugfs, which, as the documentation says, exists as a simple way for kernel developers to make information available to user space, et cetera, et cetera. And the difference may seem, I don't know, semantic, but at the end of the day, when we compared the line count of implementing pretty much the same thing over sysfs and then over debugfs, lo and behold, we cut the lines of code down almost four times. Because the person who designed debugfs, or wrote the code for it, I'm not sure it was designed, was trying to do something very specific: to provide a debug window into a driver or a piece of code. And therefore it had infrastructure that exactly matched what we needed to do, and we didn't need to write it. Now, in the interest of honesty and the whole picture, it should be mentioned that some of the functionality we could just remove, because it basically replicated perf, or ftrace, depending on what you want to know.
Again, the big change was due to just using the right API. So, lesson number three: use the right API. The fact that you could do something with a certain API does not necessarily mean it's the right one. Sometimes several APIs may fit, and it's worth the time and effort to ask which one of them is the best. Moving along to what I'd like to call duct tape engineering, which is best described with an example. We had a device driver that supported some asynchronous hardware that worked with DMA and so forth to handle crypto operations. Now, it turns out that the Linux kernel actually has two flavors of API for requesting cryptographic operations. The asynchronous one, which was very natural, or native, for us to support. And a synchronous one, which is actually meant for software that runs on the CPU, which may or may not use specialized instructions, but it's for stuff that is inherent to the core, that actually has access to the TLB and the MMU of the core. The thing is, there is a lot of security-oriented software, or modules, in the kernel, such as dm-verity, for example, that was written to use the synchronous API. And we could go into a whole discussion of why that is; that is actually a presentation by itself. Basically, the short answer is that the asynchronous API, in the most common way of using it, was too complicated. I actually offered an upstream set of patches to fix that, but that is a side point. The point is that before I came in, the way that our driver, or the previous developers, dealt with that was saying the following: there's a bunch of software in the kernel that actually uses cryptographic algorithms that we can accelerate, but it's using the synchronous API, not the asynchronous one.
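To make the two flavors concrete, here is a hypothetical userspace sketch of the difference in contract: a synchronous call returns when the result is ready, while the asynchronous one submits a request and gets a completion callback, which is the natural shape for off-core, DMA-driven hardware. All names here are invented; the little wait shim at the end is loosely modelled on the idea behind the kernel's `crypto_wait_req()` helpers, which let a synchronous caller drive the async API.

```c
#include <assert.h>
#include <stddef.h>

/* Synchronous flavor: natural for software running on the CPU. */
static int sync_transform(int input)
{
	return input * 2; /* stand-in for the actual crypto work */
}

/* Asynchronous flavor: submit a request, get a callback on completion. */
typedef void (*cc_done_cb)(int result, void *ctx);

struct cc_req {
	int input;
	cc_done_cb cb;
	void *ctx;
};

static void cc_submit(struct cc_req *req)
{
	/* Fake "hardware" completes immediately; real hardware would
	 * complete later, from an interrupt handler. */
	req->cb(req->input * 2, req->ctx);
}

/* A tiny shim in the *caller* that waits for completion, so sync-minded
 * code can still use the async API without the driver faking anything. */
static void cc_wait_done(int result, void *ctx)
{
	*(int *)ctx = result;
}

static int sync_over_async(int input)
{
	int out = 0;
	struct cc_req req = { input, cc_wait_done, &out };

	cc_submit(&req);
	/* a real shim would block on a completion object here */
	return out;
}
```

The key point is where the adaptation lives: in the caller, as a few lines of waiting, not in the driver, pretending DMA hardware is synchronous.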
So obviously, "the right thing to do" for us would be to also support the synchronous API, which may sound like it makes sense if you don't go into the details. But the reality is that if you try to take a DMA-using, off-core piece of hardware and make it behave, or expose itself, as a totally synchronous API, which was born, or designed, for a piece of software on the CPU that has access to the MMU, what you get is damn ugly. Really, really ugly, and unstable. And there is an obvious solution to the same problem, which is much, much simpler and requires much, much less code inside the driver. And that is: go to dm-verity, ask the question, why is it using the synchronous API? And when you find out the answer is not a really good reason, just change it. This is the glory of open source. Don't, and this is a big one, don't try to fix in your device driver stuff that is broken, or can be improved, elsewhere. Because when you do, your device driver just expands ridiculously. Use the source, Luke! You have the access; just go and fix what needs to be fixed on the other side. And it turns out that the amount of code needed to actually fix dm-verity was very small. The amount of effort was very small, and it allowed us to cut a huge amount of code, which was also buggy, from the device driver itself. So, avoid duct tape engineering; fix stuff the right way. Use the fact that Linux is an open source platform, and you don't need to work around the problem. Just fix the problem; it's much easier. The next item on the list, I'm not sure it's a general one, but it's worth mentioning. I called it macro gymnastics. I'm not sure why it happened. Actually, I have a clue. I think it came about because of this horrible idea called HAL or PAL: a hardware abstraction layer, or platform abstraction layer.
There's this idea that if you write a device driver, maybe someday someone will want to use it on a different platform, so the right way is to put in a bunch of code that hides away, as if that were actually possible, the specifics of the interface between the hardware and the software, and then code above that. As you can tell, I'm not very fond of the idea. These abstractions tend to be very leaky. What usually ends up happening is that nobody actually uses the same driver on other operating systems, and if they do, they're using some forked-off version that is very different, but you still get stuck with all the mechanics of the hardware abstraction layer. They can have a real effect on performance. Even worse than that, they have a huge effect on the design of your driver, because when you're trying to write code to serve a couple of operating systems or platforms, you have to code to the lowest common denominator. In our specific case, the actual HAL/PAL concept was not really there anymore. The people who programmed this were smart enough to figure out it doesn't really work and removed it. The driver only supported Linux, but we still had this legacy of basing things on this no-longer-existing interface, and it manifested itself in stuff like just going and reading a register looking like this, above. It's this macro that takes the name of the register, and there was this really, I'll be polite and say, rich set of macros calling macros that wrap it. And I'm sure there was a reason at some point in time for all this, maybe not a good one, but it certainly no longer existed when I took over the driver. So I was able to turn all that crap into just this: a simple static inline with a simple macro, just so I don't have to write out a big auto-generated define everywhere and can just use the register name.
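A hypothetical before/after sketch of that kind of change: a plain array stands in for the ioremap()ed register bank, and the register names and offsets are invented. The point is that layers of macros calling macros collapse into one static inline plus one thin name-lookup macro.

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for a device's memory-mapped register bank; a real driver
 * would ioremap() this. Layout is purely illustrative. */
static uint32_t fake_regs[16];

/* Auto-generated register offsets (illustrative names and values). */
#define CC_REG_OFF_HOST_IRR	0x04	/* interrupt status */
#define CC_REG_OFF_HOST_ICR	0x08	/* interrupt clear */

/* Map a register name to its offset. */
#define CC_REG(name)	CC_REG_OFF_##name

/*
 * Before: a chain of macros calling macros, left over from a dead
 * abstraction layer. After: one static inline that does the access...
 */
static inline uint32_t cc_ioread(uint32_t offset)
{
	return fake_regs[offset / 4];
}

/* ...plus one thin macro so call sites can use the register name. */
#define CC_READ(name)	cc_ioread(CC_REG(name))
```

With this shape, a call site reads as `CC_READ(HOST_IRR)`, the compiler sees a plain load, and there is exactly one place to change if the access method ever does.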
So again, when you write code, and when you review other people's code, it is worth looking out for this. If you start to see too many macro wrappers, it's a good idea to stop and ask yourself: is this really clear? Is this really serving a purpose? Because this is not maintainable, but this is. And again, that cut a few more lines of code. The next one is a favorite of mine, and actually quite surprising: zombie code. You would think that there would not be much code inside a device driver that somebody maintains that nobody actually uses anymore, or maybe never used. But as we heard before in one of the keynotes, what is true for the bigger Linux kernel was certainly true for my small device driver. There was a huge amount of code that was never used. Now, part of it was never used at all, ever, because it was structures auto-generated from descriptions of registers that we got from the hardware guys and so on, but never actually used. Some of it became unused when we moved from proprietary, driver-specific mechanisms to general kernel mechanisms, for stuff like tracing, for example, which made some of the code unneeded. And some of it, I'm just not sure about. There was just some code that we removed, and when we started to unravel what that code needed, and started deleting all the code that was no longer needed because we made something work a little bit differently, we began to see that we could delete a whole bunch of stuff. So it's worthwhile, if you have an ongoing project, and certainly if you do upstreaming work, to go over the code and ask yourself, is somebody actually using this? And git grep is really a good friend in this endeavor. Because remember, code which is not there, structures which are not there, cannot be used against you, right? They cannot hold bugs. And this even has a frightening new meaning in the brave new world of Spectre, if you think about it.
Code that is never called can still be speculatively executed with Spectre variant 2. So it's a really bad idea to leave dead code there, especially code that is not maintained, because you know it's not getting called. Maybe one example of this is worth focusing on. Some of it was not actually code but, as I said, definitions auto-generated from hardware register description files. And we see that a lot in the kernel, right? There's this huge header file with descriptions of the registers. Now, I'm not saying we should just kill all of them and leave only the ones we're using, but maybe it's worthwhile to ask: does it really add something if we have this header file with 32, or however many, register definitions and the driver only uses 10? I don't know. It's a good question. I'm very much aware that these are auto-generated in a lot of cases, but when you want to debug something, do you really need all that crap? Is it helpful? In our case, we left some of the definitions and removed others, according to specific hardware blocks or patterns of usage that made sense to us. In some cases we took the whole auto-generated file and left it there. In some cases we said, you know what, we'll never be using any of these other registers, they serve a very specific purpose that's not relevant to us; if we need them, we'll add them back later. And that helped us drop even more code. The last one is kind of mundane. It's the kind of thing you learn when you first start to program. Programming 101: don't repeat yourself. You have two functions that are basically doing the same thing, maybe with a small difference, so don't write the whole code twice; write one single function and a wrapper, and so on. There was not a lot of this, but there was some.
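The shape of that fix is the classic one: a single parameterized helper plus thin wrappers. A hypothetical sketch, with invented names and a toy checksum standing in for the real shared logic:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Two near-identical functions (say, processing a buffer with and
 * without a finalization step) collapse into one common helper.
 */
static unsigned int checksum_common(const unsigned char *buf, size_t len,
				    int finalize)
{
	unsigned int sum = 0;
	size_t i;

	for (i = 0; i < len; i++)
		sum += buf[i];
	if (finalize)
		sum ^= 0xFFFFFFFFu; /* illustrative final whitening step */
	return sum;
}

/* The old entry points survive as thin wrappers, so callers don't change. */
static unsigned int checksum_update(const unsigned char *buf, size_t len)
{
	return checksum_common(buf, len, 0);
}

static unsigned int checksum_final(const unsigned char *buf, size_t len)
{
	return checksum_common(buf, len, 1);
}
```

Now a bug fixed in the shared helper is fixed for both callers at once, which is the whole point of the exercise.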
These programmers weren't necessarily bad; it's just that when you have a large enough code base and enough engineers working on it, sometimes at different times, they're not necessarily aware of all the things happening in other parts of the driver. But one of the opportunities that presented itself when we looked at the whole driver and did this upstreaming work was to locate these places where something had slipped by, and we had really common functionality that could be brought into a single function. And quite interestingly, some of it had been sort of hidden by the other issues, right? The code was maybe more complex, or we were using all this macro gymnastics, so it was not obvious that it was actually doing the same thing. But once we went through some of the simplification motions, it became clear that it actually is doing the same thing. So there is this acceleration effect, a non-linear effect: when you tidy something up, it helps you see the other opportunities for simplification and optimization. So those were the things that I learned that actually made the upstreamed driver so much better, and we got to a happy end, right? Really: I think this morning I saw Greg Kroah-Hartman's email acking the change that removes the staging copy, because the crypto tree formally accepted the device driver. And it took something like a year. It's a question of resource investment, really, but I think it's really worth asking: what did we get from this?
So I deleted a bunch of lines of code, but really, that driver got better. And I don't mean better in the sense that, okay, it now performs some obscure AES operation one nanosecond faster. No, not that kind of better. I'm talking about better as in higher quality. It's faster time to market when we need a new revision for a new hardware revision. It's more secure, because I have less code and it's much easier for me to go over it and make sure I didn't make a mistake, and for others to do the same. It's higher quality code, and our customers use these things in security-critical parts of their systems, so it's really important. In the business top-line sense, it got better, and we got that benefit by doing the upstreaming. This is something important to remember. Okay, yes, we started doing this because it's the right thing, and this is standard operating procedure, when possible, for doing the right thing in this regard. It's not always easy or possible, but when it is possible, and we all knew it's the right thing and that it would make the code better, we really got the attention of a huge community, some of them world experts, working with us, helping this code get better. That's completely non-trivial, and I think it's also a good opportunity to say thank you to these guys. Not all of them are here; I took this mostly out of the git log of the changes that went into this driver. Now think about it for a second: it's an obscure piece of hardware. I mean, it's been used on some bazillion devices, but most people don't have access to it or don't think about it, although some of you may even be running it on your phones without being aware. But we got all these people to contribute time and effort, some of them Greg Kroah-Hartman, David Miller, Herbert Xu and so on, to give us critical input that made this driver better in the business sense. And it didn't take that long: it's one engineer, me, working on this for one year, taking into account that this is not the only
thing I was working on, and this was new code to me. So it wasn't that difficult, the benefit was huge, and I really owe a debt of gratitude to all these guys. So thank you, if you're seeing the video. There are basically two things before I let you ask questions, should you have them. One of them is a tradition I did not start but am happy to continue, and that is to take a speaker selfie with all of you. So if one of you doesn't want to be in the picture, this is a good time to duck. And one last thing before questions: as it says, come on upstream. It's really, really worth the effort. You learn a lot, the code gets better. I know it's sometimes hard to convince management, so hopefully show them this presentation; I'm sure the smart people will understand. Questions? Wow, I was really that clear? Excellent. All right then, yes. Well, it's a good question. The question was: were there situations where I felt the community led me astray or did not work with me? It certainly felt that way a couple of times. I'll give a concrete example, which I think will help explain my answer better. Right at the beginning I had this huge 7,000 lines of code which I wanted to submit for review, and I was kind of scratching my head about how to do it. Of course you can make one big 7,000-line patch and submit it, but sending that to the mailing list didn't seem sensible. So I tried to put it on a git repository and send a link, and I was told no, you need to break it down into separate, committable, atomic parts of the driver so we can review it. So on one hand the logic of that made total sense: they want to review it, they need to be able to look at this thing. On the other hand, this is a huge driver that already exists. How do you tear it apart, right? I did not know how to do it at first, and it was kind of frustrating. But there's one thing that I really kept in the back of my mind through all of this process, and that is that I'm really going to a bunch of world
experts, or just random people, and basically asking them to invest time in helping me. So if they ask me to do something, I need to do it, even if it does not seem reasonable at first. And when I started thinking about it that way, I realized that what they were asking was not that difficult to do, once you relieved yourself of the option of not doing it, let's say. And so I did it, and I actually got something in return: when I went into the mechanics of understanding how to actually slice the driver into pieces, I learned things about the driver I was not aware of before. How it's structured, what the different pieces are that can function together, what the interdependencies are. And I think this is a good example, because what it says is that what's important is not necessarily any specific piece of advice per se; it's the process. So even if at a certain point I got maybe less-than-actionable advice, the whole process, like one huge exercise in pair programming with some of the world's best minds, was useful in itself. Even if somebody told me to do something which ended up being wrong, understanding why it's wrong, and being able to explain it to somebody who is a complete stranger and has no idea what my hardware is, was beneficial. Any more questions? Yes. So the question is: did we have problems getting customers to use the backports project? My answer has two parts. First, this is something which is still ongoing, but our plan is not to have the customer use the backports project directly. We use it as infrastructure: we commit to said customers on specific versions that we support, and we use the backports mechanism to enable us to easily make the backport and deliver it to them. So there's no expectation for the customer to do it. Of course, if a customer wants to use it to backport to some version which is not blessed by the business entity, they are welcome to, but that's up to them. Any more questions? OK, well, thank you very much. It's been a pleasure. I hope you enjoyed it too. Thank you.