Okay, welcome everyone. My name is Mikko Rapeli and this is my colleague Mario Goulart. We are from BMW Car IT, and we are here to talk about continuous integration and testing of a Yocto Project based automotive head unit.

First of all, a few words about BMW Car IT. It was founded in 2001. It's a small subsidiary of BMW AG, and we are basically a software house inside BMW. We build software for BMW products, do some research, and also do open source software work.

Of course BMW makes cars. For example, this is an i8, a hybrid car with a carbon fiber chassis and whatnot. Inside the car, in the cockpit next to the steering wheel, is the head unit with a big display. Here on the bottom left you can see the display of the head unit showing maps and navigation data for the driver. Usually the driver uses a simple round joystick and a bunch of buttons to control this display. This head unit is connected to the other buses in the car and uses various data from them for display and convenience features.

Our product setup: we basically develop the head unit for BMW cars, a connected multimedia computer with navigation, telephony and other features. Several companies are involved, physically distributed across Germany and other countries, with hundreds of developers at various levels of the software stack. Because multiple companies are involved, the IT and CI infrastructure is also really complex, which means we have a few technical, and partially also political, obstacles when setting up technical solutions.

The requirements for our CI system are basically to provide fast feedback for developers, integrators and the project organization. The implementation is a multi-stage CI. The first stage is software component change verification inside an SDK environment: we build the software components in the SDK and execute all the unit tests that are available. For the software integration of the whole system, we then do CI builds of the full system for all targets and all images, run some quality assurance checks around that, then do actual on-target testing with the produced images, and report the results back to the CI system.

To implement this build system we use the Yocto Project. I'll go over this really quickly because it's basic information, but just for those of you who don't know it: Yocto is a Linux based cross-compilation framework. It consumes metadata, which can be configuration files or recipes that implement tasks. It has a task scheduler called BitBake, which takes the metadata as input and generates packages, images, toolchains, SDKs and so on as outputs.

One characteristic is the performance of BitBake: it can be really fast, but it compiles a lot. Two of the main things for performance are the shared sysroot and BitBake's caching; we will see in the next slides that the shared sysroot in particular can be a source of problems in our case.

Some neat features and characteristics of Yocto: it's very flexible, we can basically do anything we want with it. We have very fine-grained control over the output artifacts. We have the possibility of configuring things at compile time, which we cannot easily do with package-based distributions. It's very extensible.
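To make the compile-time configuration point concrete, here is a minimal sketch; the recipe name and PACKAGECONFIG flag are only illustrative examples, and the underscore override syntax matches Yocto releases of that era:

    # Flip a compile-time feature for one recipe in conf/local.conf
    cat >> conf/local.conf <<'EOF'
    PACKAGECONFIG_append_pn-bluez5 = " experimental"
    EOF
    bitbake bluez5    # rebuilds the component with the feature enabled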
We can add to it or extend it. It provides a very useful thing, which is license tracking: we can specify which licenses we cannot ship, which is crucial for legal reasons. It has both commercial and community support, and both are very good. Another neat thing is the QA checks, which we use extensively to guarantee some basic quality in our projects.

Some words on source code management. As Mikko said, we have component builds and system builds. For components, the source can come in three types: public open source code, projects internal to the company, and binary deliveries from suppliers. The supplier deliveries usually live in Subversion and are really just binaries; we don't need to compile them.

To give you a better idea of what a software component means, take this box for example: it's usually a single repository, you just fetch the source or the tarball and compile it. The system side is a little bit more elaborate. We basically have the Yocto Project, some open source meta layers and some proprietary meta layers that we keep inside the company. All of these components are git repositories, and we use git submodules to manage them: to pin a revision or a version, we just tag the base repository that holds the submodules.

There are some drawbacks to this approach. It can be a bit confusing for developers new to git. Adding or removing submodules is a bit tricky to test in our CI infrastructure, and it's not nicely integrated with Gerrit, gitweb or other git tools. For example repo, which we haven't tried in this project yet, has much better Gerrit integration, but unfortunately we don't use it.

We use Gerrit as the code review tool and as the server for the git repositories. We use the concept of topics to group many commits: when we want to build a change, the change is usually a topic that can be composed of multiple commits. We have a custom tool to check out a topic into a local copy of the repositories, so we can easily test a topic on a local machine, and CI jobs can verify a change using the same topic; that's what we use for verification builds.

A positive aspect of Gerrit is that it's quite straightforward for experienced developers. It works quite well if people know what they are doing, but it's a bit bad for developers without much experience. They make mistakes like mixing unrelated changes in a single git repository under the same topic, which usually should not happen, or trying to merge commits that are not part of the same branch. Also, the UI is not very good, it's a bit confusing, and in our case the Gerrit version we are using is not really up to date. There are some alternatives like Patchwork, GitHub and GitLab, but we don't really use them.

About source code change integration: the component case is the simplest. It's just a normal git workflow; we commit and merge changes.
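For the topic workflow described above, a minimal sketch of pushing a multi-commit change as one Gerrit topic, with made-up branch and topic names (older Gerrit versions use the refs/for/&lt;branch&gt;/&lt;topic&gt; form, newer ones accept %topic=):

    # Group several commits under one Gerrit topic so CI verifies them together
    git push origin HEAD:refs/for/master/navi-maps-update
    # equivalent on newer Gerrit versions:
    git push origin HEAD:refs/for/master%topic=navi-maps-update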
The system integration case is a little bit more complex, because a single change can involve multiple repositories. We have to create a topic and submit it; it goes through a verification build and also through peer review, and once we get a positive review and a successful build it goes to a board of managers who may approve the change or not. Only after this approval does the change get integrated.

Okay, in the next few slides I'll explain our CI pipeline for the software component builds. Like I said, we compile in the SDK, and the developers work with the SDK. The developers push their changes to Gerrit for code review, and their peers need to review the code with a +2. Gerrit also triggers a verification build within the SDK in our CI environment, which also executes the unit tests. If the verification is successful, the changes can be merged, either automatically or manually, into the base repository of the software component, usually the master branch.

For the system integration we have two types of integration requests, which are basically multi-git-tree pull requests. They can be submitted automatically or manually from a component repository. One case is when, for example, the master branch has moved forward by a couple of new commits and the only change needed is a new git hash in the BitBake recipe. For the more complex scenarios we have a system integration Gerrit topic: for example, a number of BitBake recipes across a number of different meta layers change to produce a new feature, and we bundle these into one Gerrit topic and then verify and review them together.

So the multi-stage CI goes from the SDK verification build for a single component to complete system builds for system-wide changes, and before releases we also test that the merge was successful and didn't break anything.

Here's the workflow for a software component developer: the developer works with git, makes changes and pushes them to Gerrit, and that triggers an SDK verification build and unit test execution in the CI environment. From this we get the verification results. We also require that the developer's code changes are reviewed, so there has to be a +2 in Gerrit. If all of this is fine, the changes can be merged back into the master branch of the software component by the CI environment. For the SDK builds, the SDKs come from actually released versions of our base system built with BitBake, and we automatically update to the latest SDK for every CI instance.

The system side verification works like I said: a single component change results in a new git hash in the BitBake recipe, or a software integration touches a number of BitBake recipes across any number of layers, and these changes are pushed to Gerrit in a single topic. This triggers a system build in the CI environment, which also executes tests in the real target hardware environment. If all of this passes, there will be a Verified +1 result in the Gerrit review. And of course we require that the integrators also do peer review, so there needs to be a +2 in Gerrit as well. If all of these are fine, the system allows the integrators to send an integration request, which is basically a pull request for multiple git trees, into the next stage.
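For the simple integration-request case mentioned above, where only the pinned git hash of a component moves forward, the change itself is tiny. A hypothetical sketch, with an invented layer path and hash:

    # Bump the pinned revision in a recipe and submit it as a single commit
    sed -i 's/^SRCREV = .*/SRCREV = "4a5b6c7d8e9f0123456789abcdef0123456789ab"/' \
        meta-example/recipes-hmi/navigation/navigation_git.bb
    git commit -am 'navigation: uprev SRCREV to current master'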
In the system release stage, the input is the integration requests: basically changes to BitBake recipes, classes and other files in the meta layers. First, these changes go to a change control board, which is under the control of our release managers, who decide what kind of changes are prioritized for today's release, or tomorrow's release, and so on. They bundle up a bunch of integration requests, which are then merged into the base repository, and this whole merge is again tested with the same system build of the images, also executing the hardware tests and any other tests. If all is fine, the results come back to the CI system, which then automatically publishes them: it tags the base git tree and publishes all build artifacts such as images, SDKs and caches, and the images of course go on to the further stages of testing in our release process.

As mentioned, the integration requests are applied and tested in the full system builds. There's a change control board which can control what goes in, and the integration requests are collected together and pushed out as a release daily. New releases can be created manually by the change control board, who select what goes in and when, or automatically based on a timer: for example, every four hours a release comes out and all integration requests are accepted automatically. But that is a bit tricky when you actually want to control what goes into a release.

Okay, some words on our CI infrastructure. As we mentioned already, it's based on Gerrit, git and some Subversion servers, and we use Jenkins to orchestrate all the builds. We use mostly virtual machines at the moment; we also have two bare metal machines, which are quite powerful as you can see from the numbers. In addition to these CI builds we have one daily, or nightly, build from scratch without BitBake's sstate cache. We also have some minor services such as file and cache servers, a database cluster and issue trackers. These slides are going to be available, so I'm not going into detail about the numbers; you can quickly see them here and take a look at the slides later.

Then our test farm. Basically we have a test farm with special hardware, including the real target devices. We have a Jenkins master which takes in a test candidate, whether it's a software build candidate or a release. It then triggers a Python based test farm framework that uses a message queue to distribute the requests to the different executors. At the moment we have 16 SDK executors, 20 virtual target executors and 12 real target executors; these numbers vary based on the time of day and whether the machines are still alive.

Besides the test farm we also have some automated tests for the build artifacts, just to do some checks as early as possible. For example, if the flashing tools are broken or missing from the release images, we don't even try to put those images into testing.
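The artifact checks just described are conceptually simple. A hypothetical sketch with made-up artifact names:

    # Fail fast: don't hand a release to the test farm if expected artifacts
    # (for example the flashing tools) are missing from the deploy directory.
    for f in flash-tool image-headunit.tar.gz image-rearseat.tar.gz; do
        [ -s "deploy/$f" ] || { echo "missing artifact: $f" >&2; exit 1; }
    done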
Some statistics: these are weekly statistics, so we run thousands of tests every week on the different test targets. Unfortunately I had to filter out some of the details, so these are not really interesting except to give an overview of how big our system is. The daily statistics show that we work in Europe most of the time, so it's the German timezone and office hours when the system is most busy. As for execution times, the most time-consuming part in our case is the real target hardware test, where the flashing takes a considerable amount of time. For the virtual targets and the SDK, which we also test, the execution times are much shorter.

Some lessons learned from all of this. Keep it as simple as you can, and use solid foundations. We had some hard lessons because we did not use proper distributed system technologies, but instead tried to hack around with Jenkins, SSH, rsync and so on. One of the sad facts of working in a company is that corporate networks and the services provided there are sometimes not as reliable as, for example, GitHub; this is true for our current company but also for other companies I have worked with. Also: automate everything, including the server and system setup, with Ansible, Puppet and the like. And virtualization might sound like a good idea to some IT managers, but it is actually not good for build performance in BitBake environments.

On the positive side, we have built a system that works and actually fulfills our requirements. It's a bit of a pain to administer sometimes, and I admit our users are sometimes quite loud in complaining when things don't work correctly. On the negative side, like I mentioned, Jenkins isn't really a distributed system: even though you can remotely trigger a job on another Jenkins master, that doesn't work very reliably. We haven't automated all the bits and pieces yet, but we're working on that and trying to push everything into Ansible. And some changes to the CI infrastructure cannot be tested by the infrastructure or the CI jobs themselves, for example rolling out new Jenkins versions or Jenkins module changes.

Now some details about the builds. The software component builds, which are the simplest, are SDK based: the SDK generated by the BitBake build is used to build the component software, and as an optimization we use ccache in this case.

The system build is the most complex and the longest one. It runs inside an LXC container with Ubuntu 14.04. We do this because, as is probably well known, BitBake is not really reliable when it comes to host contamination; we have some leaks from the host into the build system. So we use the container to at least have control over what may leak into our build, and with this we have relative control over what goes into the final product, although there may still be some leaks. This container approach also allows developers to use whatever Linux distribution they want, and a container change can usually be deployed faster than an infrastructure change. For the implementation of this we have a little wrapper around BitBake, which is a shell script. It's just an implementation detail, but the lessons we learned are to fail as early as you can and to clean up after the process finishes.
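In the spirit of "fail early, clean up afterwards", such a wrapper might look roughly like this; the container and image names are invented, and this is only a sketch, not our actual script:

    #!/bin/sh
    # Minimal wrapper sketch: abort on the first error, always clean up.
    set -eu
    SCRATCH=${SCRATCH:-/var/build/scratch}
    cleanup() { rm -rf "$SCRATCH"; }
    trap cleanup EXIT INT TERM
    mkdir -p "$SCRATCH"
    # Run the build inside the Ubuntu 14.04 build container
    lxc-attach -n builder-ubuntu-1404 -- bitbake example-headunit-image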
We have more than Actually, these numbers are not really up to date We have a little bit more than 60 meta layers more than 2800 recipes and more than 400 BB happens for BitPak For the configuration of the build system, we use the the stock local conf global configuration file for BitPak, we have some sad magic to set some configuration variables and Differently free from the stock BitPak We have a little script to determine the parallelization options for BitPak I'm gonna talk a little bit more into the tailors about this So we have a single recipe special recipe that we used to build everything In this recipe we have dependencies on Other recipes that create about nine images These images include the flashing and testing tools and we have some performance Issues with the build of images because images the generation of images cannot be really Parallelized because the package manager which is used to build up the image is a sequential We cannot install packages in parallel. Although we can build images We have nine we can build images in parallel each image. It cannot be parallelized We actually have some optimization with regard to the compression of images We use PXE for parallel compression. It's a parallel implementation of GZ and Our images are actually Tarbles they are not file system images the the flashing tools actually create the file system Deploy to the target correctly Okay, then few words apart our SDK. So we use a custom SDK instead of the octopsy version We do this in a bit different ways or SDK mixes target and native SDK packages In a way that is actually really transparent to the users The motivation is that developers have struggled with the cross-tool chain and across environment setup and made mistakes in the CMake setup in our way The complexity of the basically the cross-combination setup is shifted from the developers to us integrators who manage to SDK The SDK is also decoupled from the images so we can have some stuff in the images Which is actually not allowed for the developers to use but for various reasons we need to provide it in the images So we have a tighter control of the APIs that we expose the developers and urged them to use We'll use a custom namespace tooling is instead of a plain as CH route to execute the environment of the SDK without root access and Basically, that means that when you are inside our SDK normal commands like TCC make all the tools you make and other everything Just works out of the box From users perspective. This is just a lightweight CH route environment For the SDK we have also tests for everything because we've noticed that even trivial changes or trivial tests find bugs and trivial changes can trigger bugs We don't use the upstream SDK test. We have our own but we are working on getting some some collaboration done here And these tests are also executed every time when we build an SDK in the CI system We also have a Qt creator based IDE with a custom blocking to connect our SDK into the Qt creator So it's quite developer friendly And our SDK approach. 
Our SDK approach hasn't been upstreamed yet, but we're working on that as well. Our SDK is also the execution controller in our automated CI testing environment, so we deliver both the test controller and the thing under test together into the system, and they can be updated at the same time.

Our package archive is an ipk package archive, where we of course build a number of additional tools, debug symbols and development packages which are not available in the SDK or in the images by default. Due to the complexity of our infrastructure, with multiple companies involved, we unfortunately don't have a single ipk repository server, but we do distribute these artifacts to the different companies using various protocols and servers. Some debug tools are only available in the package repository, for example GPLv3 licensed ones, which we can't distribute in the images. We don't currently support incremental updates of the SDK or the images, because we can't provide a single package repository or package stream. Also, we unfortunately don't run a PR server at the moment, so the version numbers of packages are not bumped automatically when a binary package changes. We are planning to deploy that, but so far it hasn't been a huge issue for us.

Now some feedback to the community about difficulties we are having with Yocto. Since our team is quite large and not many people are very experienced with Yocto, we have some difficulties writing proper recipes; our reference for recipe quality is the recipes from poky. Another major issue is the shared sysroot approach used by BitBake, which can lead to many race conditions and dependency issues: a dependency may sometimes be available in the sysroot and sometimes not, depending on the parallelization of the tasks. That leads to random build failures when dependencies are not properly specified, and we have many problems with this. Mostly because of this shared sysroot issue, our builds are at the moment not reproducible, at least not with parallelization enabled. If we built sequentially it would probably be reproducible, but it would take forever, so that's not feasible in our case.

Another issue is that some developers use package managers from other languages, like Java's Maven or JavaScript's npm, which are not properly integrated into Yocto. They call npm from a do_configure or do_compile task, for example, and npm downloads things from the internet which are not properly cached or verified by the BitBake fetcher, and that can lead to build issues. Since it usually works for the person who wrote it, they just assume it's fine and push the change; it may even succeed in a verification build, but in random cases it breaks builds. Additionally, when things are done this way we don't get license tracking, which is quite important for us, so this is something we have to address.

And BitBake sometimes rebuilds dependencies even when it's not strictly required, for example when we are sure that ABI or API compatibility is preserved. Whenever we change something in a recipe, BitBake recompiles everything depending on the affected task, and this leads to long build times.
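One way to smoke out the hidden npm and Maven downloads described above is to populate the download cache first and then forbid all network access; a sketch with an invented image name (fetchall is the task name from that era, newer BitBake uses --runall=fetch):

    bitbake -c fetchall example-headunit-image    # warm the download cache
    echo 'BB_NO_NETWORK = "1"' >> conf/local.conf
    bitbake example-headunit-image   # any hidden download now fails loudly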
Here are some numbers on the builds; these are for one machine, and we currently have two machines. We have more than 22,000 BitBake tasks. I'll go over these numbers quickly; they will be available in the slides afterwards.

Our build profile: a build may range from 20 minutes to five hours on those powerful machines we showed, depending on how much BitBake can use its caches; the best case is 20 minutes, the worst case five hours. In our experience build performance can be quite hard to optimize. There are countless variables to tweak, including hardware variables, system variables and BitBake variables, and in our case we also have some quite heavyweight C++ based components that are heavily interdependent, so when we change one, we have to recompile a lot of things. Still on the build profile: we have nine images at the moment and they are not parallelizable, so generating these images can be somewhat time consuming. To help analyze the profile of our build, the buildstats data generated by BitBake has been very useful.

After the build is done we still have some post-processing steps, basically checking the presence of the expected files. In the case of a release we also have to prepare the sstate cache generated by BitBake to feed the next builds, and push artifacts like packages, images, SDKs, logs and so on. As I mentioned, after a release a new SDK is deployed into the system, so we can build software components using the new SDK based on the previous release.

Okay, then some optimizations we have done, first of all regarding build performance. First, you need to measure certain aspects of your build slaves: the basic things are CPU usage, memory usage, disk or local IO, and network IO. What we have found useful is Performance Co-Pilot and its tooling. With it you can quite easily see what the CPU utilization is across multiple CPUs, and whether the memory is used effectively or everything is going down to disk access and IO, or even to the network.

We also have a download cache: a separate build job runs a BitBake build with the fetchall task, which populates the download cache for us, and this cache is exported over NFS to all our build slaves. This does not fully validate that the downloads are OK after fetchall; some download might have failed without the system noticing, and it also doesn't notice if something is actually wrong in the setup. So we have had corrupted downloads that led to build failures in the build farm, and that's really tricky to debug. Ideally, of course, we would like to run all builds in offline mode, with no network access in BitBake, but unfortunately that hasn't been possible in our environment, at least not yet.

About the parallelization settings of BitBake: it currently has two main variables to customize, BB_NUMBER_THREADS and PARALLEL_MAKE, both of which default to the number of CPU cores. In our particular case that doesn't scale very well because, as we mentioned, we have some quite heavy C++ based components that take a lot of memory to compile. For example, we have builders with 16 CPU cores; if we multiply BB_NUMBER_THREADS by PARALLEL_MAKE, we get in the worst case 256 compilation
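As a concrete instance of the measuring mentioned above, Performance Co-Pilot can show whether the CPUs are actually busy or everything is waiting on disk; this assumes the pcp packages are installed and the pmcd daemon is running:

    pmstat -t 10                  # vmstat-like system overview every 10 s
    pmval -t 10 kernel.all.load   # load averages over time
    pmval -t 10 disk.all.read     # aggregate disk reads, spots IO-bound phases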
tasks running at the same time, which can be quite heavy when they are C++ compilers. What actually happened is that some builds just crashed because they ran out of memory.

As lessons learned, we had to measure and set resource limits for BitBake tasks. That's currently not implemented in BitBake, but ideally it could be implemented with cgroups, so we could kill only the specific task that is causing the out-of-memory condition. Ideally the BitBake scheduler would also take the system load into account when scheduling a new task; it doesn't, so if the system is completely trashed, BitBake will happily schedule new tasks, even compilation tasks. And as I mentioned before, optimal parallelization is very hard to get, because what we build depends on what's cached: when a lot is cached, high parallelization is desirable; when little is cached, so there is a lot of compilation, lower parallelization is desirable. So what we've done, as mentioned before, is a custom script that takes not just the number of CPU cores as input but also the available memory. The basic logic of this script is shown here and will be available for you to look at in the slides (a simplified sketch follows below).

Then we have tuned our build slaves a bit. Basically we want to avoid disk IO as much as possible. That means our BitBake builds run with rm_work enabled, so when a task has executed and its data is no longer needed, BitBake cleans up the work directory. Here are some sysctl settings for Linux to tune the VM values so that writes are kept in memory rather than immediately hitting IO; you can look at the details later. We also want to avoid swapping as much as possible, and lots of RAM helps, but only up to a certain point: on our machines we tried upgrading from 64 GB of RAM to 128 GB with the same CPU cores, and build times did not improve at all. More aggressive parallelization options easily lead to system trashing, which actually means slower builds. So our solution has basically been to experiment with the build profile, tune the parameters and actually measure the results.

Then there is some quality assurance and security work we also do in our system. We do static code analysis using CodeSonar. It basically uses a set of programming rules and checks the code for violations of them: it finds memory leaks, buffer overflows, race conditions and so on. It's similar to what Coverity does, if you're familiar with that. All of our BitBake recipes are compiled using the CodeSonar compiler wrapper. This is slow, we cannot do incremental builds at the moment, and it takes roughly five days to execute, so we do it weekly. It's completely automated, but we cannot easily integrate it into the CI workflow, so we provide these reports separately, for example every week.

Then we do open source license compliance checking. Basically we use the license information provided by the BitBake recipes, but additionally we also have Black Duck tooling for this, and we analyze the source code for license violations.
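Coming back to the parallelization script mentioned above, its basic logic can be sketched roughly like this; the 2 GB per compile job is an invented placeholder, and the real limits have to be measured per project:

    #!/bin/sh
    # Sketch: derive BitBake parallelism from CPU count *and* available RAM,
    # assuming each heavy C++ compile job may need ~2 GB (invented number).
    CORES=$(nproc)
    MEM_KB=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
    MEM_JOBS=$(( MEM_KB / (2 * 1024 * 1024) ))      # jobs that fit in RAM
    JOBS=$(( CORES < MEM_JOBS ? CORES : MEM_JOBS )) # take the smaller
    [ "$JOBS" -ge 1 ] || JOBS=1
    echo "BB_NUMBER_THREADS = \"$JOBS\""
    echo "PARALLEL_MAKE = \"-j $JOBS\""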
The license analysis, again, is automated but not directly connected to the CI workflow. We're also interested in security vulnerability analysis tooling: we've seen the patches floating around on the Yocto and OpenEmbedded-core mailing lists and tried them out, and they seem promising. But we're also looking into Black Duck for this, because it provides similar features and we already have the tool.

Now to the conclusions, first about the Yocto Project, which is our main tool for building our software. In our experience the community support is excellent: the mailing lists, IRC and the bug tracker are very good. The documentation is also very good, but the system is very complex; it's not easy. As we mentioned before, the layers provided by the Yocto Project are our reference in terms of quality, and it's very difficult for us to reach the same level of quality as the Yocto Project's layers. We also have some problems, most of them due to design decisions in BitBake: the shared sysroot leads to race conditions and dependency issues in our builds, and BitBake and Yocto in general have a huge number of global and mutable variables that are changed all along the build. In our case we don't have reproducible builds; that's something we are working to achieve in the near future.

Okay, then about the whole CI setup. CI systems can be used to automate every task required in the software development process. CI builds find bugs, and testing, even if trivial, finds bugs, and that's really good. But a cultural change is required when working in the automotive environment: some developers and product partners really appreciate the fast feedback the system provides, but unfortunately some don't, because their bad code is exposed and they get the penalty right away.

Then, quality of service: like I mentioned, corporate networks can make a CI setup really difficult, because reliability is chained. Say you have a source code server with a 5% failure rate, due to network issues or whatever; a build failure rate of, for example, 10%, due to the BitBake issues mentioned; and some instability in your testing, maybe hardware related, adding another 10% failure rate. If you chain these together by multiplying the success rates (0.95 x 0.90 x 0.90 is roughly 0.77), you end up with about a 23% failure rate, even though you haven't changed anything in your system. That's something our users of course complain about, and it happens a lot. And the more stuff you do in your CI system, the more variables you have, and all of this adds to the instability. But on the other hand, the system actually does work.
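The arithmetic behind that 23% example, spelled out: per-stage success rates multiply, so three mostly-reliable stages already lose almost a quarter of all runs.

    # 5%, 10% and 10% per-stage failure rates chain into ~23% end to end
    awk 'BEGIN { printf "end-to-end failure: %.1f%%\n", (1 - 0.95*0.90*0.90)*100 }'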
That was our presentation. Do you have any questions for us? Go ahead.

[Question] That's a good question. The question is why we chose the Yocto Project. I think it's because our partners, the suppliers in the automotive domain, were familiar with Yocto and basically provided the platform together with support, so we use commercial support from our supplier for Yocto. That doesn't mean that all issues are fixed by the suppliers; we also interact with them directly. And we are open to suggestions for something better, because in our experience, and as far as we know, there is no better tool for this task. As a personal note, I've been checking out the Debian packaging based discussions here.

[Question] Yeah, go ahead. We have hundreds. The exact numbers I don't even know, because some of them are hidden behind other companies, but we have hundreds of developers working on the system. We don't have hundreds of developers working on the BitBake side, but overall there are hundreds.

[Question] Okay, how big is the team that set this whole thing up? We started off with a few developers working on the BitBake environment and the SDK. It grew, and at the moment we have a team of ten people, but they also serve as experts for integrating whatever needs to be integrated into the system, so they are consultants for the whole project. Then we have an infrastructure team; I think that also started with one person and has now ramped up to about five. Then we have a whole team maintaining the test infrastructure and test automation, and I think there are more than ten people working on that. So these teams are quite big, but on the other hand we can get all of these results back to the project within about two hours.

As for how many changes go through per day: at the moment there is a bit of a bottleneck in integration, so it's a bit difficult to count. On the Yocto side we can quite easily count how many git commits come in, but it's more difficult to determine how many changes happen in the software component trees. At the moment I would say we push two to three releases a day through the system, and each release contains at least ten integration requests, which are features implemented in the whole system. Those are changes related to the layers, yeah, exactly; in the software components there may be a hundred changes.

Okay, next question, over there. [Question] So we have multiple levels of testing. The testing in the tightest and fastest CI chain does not involve any hardware except the target hardware itself and an SSH connection to it. Everything that needs other hardware is in the further stages of testing, which we have also automated, but it doesn't happen inside the CI loop.

[Question] Yeah, so I guess the first question is how this relates to GENIVI. This is a BMW product and BMW is quite heavily involved in GENIVI, so we have a number of software components from the GENIVI work in our system. That's the one thing.
The second question was about using ccache and other build performance enhancements. We use the sstate cache heavily at the moment. ccache would be nice to have in some environments, especially where you can easily store it, but transferring a complete ccache of the whole Yocto build to some machine and distributing it across machines is, I guess, way too heavy. For local, team-specific builds where usually only a couple of components are rebuilt, a local, build-slave-specific ccache would be nice, but unfortunately Yocto doesn't officially support that at the moment. Yeah, exactly.

Okay. [Question] Yeah, so we have a team who builds the test infrastructure and the test frameworks. They do write some tests as examples, but in principle the developers of an individual component should write the tests for that component, including the integration testing: they should write the unit tests, and they should write the sort of tests you would run on target hardware when the component is updated, verifying that its interfaces are up and running. Unfortunately this isn't always the case, and the quality level of different developers varies a lot, which then of course causes problems in the CI.

[Question] Yep, as far as I remember we use an sstate cache mirroring technique: we prepare an sstate cache mirror, and I think there's some symlink and hardlink magic involved in that, but I can't remember the details. Yeah, we store them, exactly; basically we use the output of a release as input for the compilations after the release.

[Question] So the question is how our SDK relates to the Yocto SDK. I think it could be an alternative. Our SDK is honestly a bit too easy to use: we've seen quite a few software components where people have hard-coded paths like /usr/lib or /usr/include, and then writing the BitBake recipe for something that compiled fine in the SDK becomes really difficult. What I would like is for our SDK to export CC, LDFLAGS and similar environment variables, but we don't do that at the moment. So I see it as an alternative to the Yocto SDK; we could discuss it, and we've actually been maintaining it across multiple Yocto releases already. So yeah, we'll see.

There. [Question] Okay, so the question is how to maybe prevent Yocto from rebuilding everything. I'll just tell you how we think this could be solved. One idea is an open source ABI/API checker tool: it could be used to verify that, for example, a library did not break its ABI or API, and that information could be used to detect that there's no need to trigger any more rebuilds; it's fine that this library was compiled alone, it's still compatible. It's an open source tool, and we have some proof-of-concept hacks where we tried to see whether this can be done. Our SDK actually has this tooling integrated too, so developers can use it; the big thing would be to hook it into BitBake somehow. And smarter use of the sstate cache could help too: merging several sstate caches into a single one could help, but that's a lot of work. Yeah.

[Question] Yes, so the question is: our SDK is big, how do we update it, or how do developers update it? It is true that our SDK is big, though not many components actually change that often in it.
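On the configuration side, feeding a previous release's sstate into the next builds, as described in that answer, is roughly this in local.conf; the mirror path is invented:

    cat >> conf/local.conf <<'EOF'
    SSTATE_MIRRORS = "file://.* file:///mnt/release-artifacts/sstate/PATH"
    EOF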
Unfortunately we've had some problems with distributing or providing a single package repository, so that we could easily provide updates to developers. Our SDK is big, but on the other hand, as I think was also in the slides, the CI system, and nowadays also developers, can extend the SDK environment with extra packages if they want to try something out or install debug symbols and additional tooling. It is possible, but it requires some manual work; in the CI system it's actually automatic. The same applies to the test environment: the target images we install, for example, don't have anything except SSH enabled; all the other stuff comes from the SDK side and is installed on demand on the target to enable testing. So if a test requires special tooling on the target, it's not in the images; we just install it on demand when preparing for the test. Not a great answer maybe, but I cannot answer any better.

Yes. [Question] I think the question was how we run unit tests. We mostly run GMock... sorry, GoogleTest (gtest) based unit tests; that's what we recommend. We don't actually care how the developers do it, but we expect there to be basically a "make test" in their build chain, and this can be configured. We run the tests in the SDK environment, which in our case mixes both target and native binaries, so the tests run with the real compiled target binaries; if the target's execution environment differs from the architecture the build servers run on, QEMU user mode comes into play.

We have a question here. [Question] Right, so the question was about the unreliability example. That wasn't a real example; I think we are a bit better in reality. It's just an example: if you connect unreliable things into a chain, the overall reliability of the chain is the product of the individual reliabilities, and that means every single CI execution, whether it's a topic build from a developer with a Gerrit topic or a release candidate, will see failures at that rate, for example the 23% in the example. So in many cases you need to re-trigger things for various reasons, and the more stuff you have in the chain, if each piece is not 99.99% reliable, you are in a bit of trouble.

Okay, yes. [Question] Fundamentally it's broken, let's put it this way, because it doesn't guarantee reproducibility; it pretty much depends on the parallelization options and how things are parallelized. If you build a piece of software that depends on another one, and that one happened to be compiled before, and you don't have a proper specification of the dependencies, this can lead to unreproducible builds. One comment on that: the most visible way unreproducible builds show up in our setup is that the CI system is fine, everything builds, but a developer who tries to build, maybe without the same download cache or sstate cache, sees a failure; a clear bug somewhere, and that's really annoying. Unfortunately not yet, yeah. Okay, so there was a suggestion to try Krogoth and hope that things are better there. Okay.
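The on-demand provisioning mentioned in the answer above boils down to something like this, with invented host and package names:

    # The image ships little more than SSH; test prerequisites come from
    # the ipk feed just before the test run.
    ssh root@target-headunit 'opkg update && opkg install gdbserver strace'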
One of the approaches we are trying to implement is the per-recipe sstate cache; there's actually a bug filed about this in the Yocto Project bug tracker. Yep, exactly. Exactly, that's true, but it's somehow easy for developers to introduce these kinds of issues and ignore them. For sure, for sure.

Okay, if there are no questions... oh, there are still some questions. Okay, go ahead. [Question] Ah, how old is our monster CI setup? I would say it's around three years old, and we've been jumping through different Yocto Poky versions, for example Jethro, and different prototype environments, so it has been ramped up over the years. We know quite well how it works by now, and we would certainly like to upstream many of the solutions we have developed. We have also had it proven on prototypes: for example, when selecting the right hardware for our product, we had the same setup already running with prototype boards from various vendors.

Yes. [Question] I mean, we are active; so the question is how we share things, or how you can contact us. There's an email address over there. We are also active on the Yocto and OpenEmbedded-core mailing lists, and some of our guys send patches whenever we can. Unfortunately we do have some time pressure to actually finish the product, so we can't always work on the open source side and push things upstream, but we try. We're also on the IRC channels, asking questions, and we luckily get really good answers there, so we're present in the community. As for where we come from: the whole project is distributed across Germany and other countries. Okay, I think time is running out. So, thanks a lot.