So, how do we build software? We start with source code. Depending on the source code — maybe it's the 1950s and it's written in Fortran — we want to produce binaries that we can run on a computer, so we have to run a compiler. So far so easy, but nothing stays simple in software for long, and so now we have a whole load of source code, we need to run different compilers, and we need to link it all together before we can run our program. This process becomes fragile if you have to do it manually: maybe I make a change here, but then I forget to run this stage, so now I've changed this but this is the wrong version, and I'm testing for a bug that doesn't exist. So fast forward to the 1970s. Disco music is big in the 1970s, it's the golden age of Unix, let's say, and people are sick of having the problem of "I forgot to compile this and I'm testing the wrong thing". So a tool named make was developed. You could say it's the first popular build tool, and make solved this problem by describing "this file depends on that file, this file depends on that file" in a way that a machine could understand. So you outsource the problem of building software to a machine. Great, so make solves this problem of tracking dependencies between files, but it doesn't solve every problem related to integrating software. Maybe I built something on one computer and it worked fine, but then I built it on a different computer where the compiler version is different, and now it doesn't work. So, fast forward to the 1990s.
Friends is massive, the open-source community really gets going, and so does the free software movement. Now software is coming from lots of different places — lots of different projects and repos — and Linux distributions appear whose whole job is to integrate all of that software together. The tools that grew up around this job are the packaging tools — RPM, for example — and I think most people here will be familiar with them. Each distribution has its own packaging format and its own tooling; they all solve roughly the same integration problem, but they don't really share the work between them. So where does that leave you when you're working on integration? Fast forward to today. There's been loads more innovation in the world of integration tools. Now we have new tools which are similar but different to packaging tools, we have containers and container build tools, and we have continuous integration, which I'm not even going to begin on in this talk. So everything's solved, right? We have all of the tools we need to integrate software; we just need to find the best tool to use. Well, unfortunately, there isn't one best tool: each of these tools focuses on a different part of the problem. It goes back to the 1970s and the Unix philosophy — one program focused on doing one thing well, not everything in one.
So that brings us to the topic of this talk: the Remote Execution API. Has everyone heard of the Remote Execution API? It was introduced by the team working on the Bazel build tool. They wanted a way to send build results to a cache and send builds to a build farm, and they thought, well, why don't we make this a standard, so that other tools can collaborate on the same infrastructure? That's a really key insight, because up until now, build tools and integration tools haven't been able to share infrastructure in this way. The key idea is content-addressed storage — this is really crucial. We've had content-addressed storage for a long time; git is a great example, in that each commit is addressed by the contents of the commit. So you know if one commit is the same as another, because it has the same hash. We can apply that to builds as well. If you hash a binary, you know whether two binaries are identical, because identical binaries have identical hashes. And we can go further than content-addressing the outputs of a build: we can content-address the inputs too.
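To make the content-addressing idea concrete, here's a minimal sketch in Python — my own illustration, not the actual REAPI scheme (the real protocol uses a digest of the bytes plus their size), but a bare SHA-256 shows the principle:

```python
import hashlib

def content_address(data: bytes) -> str:
    """Address a blob by the hash of its contents, git-style."""
    return hashlib.sha256(data).hexdigest()

# Two byte-identical binaries get the same address, so a cache can tell
# they are the same artifact without comparing them byte by byte.
a = content_address(b"\x7fELF...binary one")
b = content_address(b"\x7fELF...binary one")
c = content_address(b"\x7fELF...binary two")
assert a == b
assert a != c
```

The same trick works for any blob: source tarballs, object files, whole filesystem trees.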
Let's start with the traditional model of a build. We have some inputs — the source code, the dependencies, and the build configuration — we run the compiler, and we get binaries out the other side. So, when we look at the remote execution API, this model changes slightly. We have the same inputs: source code, dependencies and the build configuration. But we first hash all of that to come up with what's called a cache key, and then we can look in the cache and say: do we have any binaries matching this cache key? If we do, then we can fetch them from the cache — we don't need to build again. And if we don't, we run a local build, we can push the result to the cache, and then we're done. There are a couple of important things here. One is that the cache and the build don't need to happen locally now: the cache can be on a remote server, and the build could run on a build farm. The other very important thing is that this captures all the inputs, because otherwise we might not notice when something important has changed. And to ensure that things are fully reproducible, we must run the build in a controlled sandbox environment. When I say a sandbox, think something like a container: we control what devices are available, we limit network access, and, for example, we might set the time to a fixed value to avoid the time affecting the build output. That allows us to trust the cache and know that we really don't need to build again as long as the inputs haven't changed. That, I think, is the key insight of the remote execution API. Now, this thing came out of Google, right? So there wasn't a huge fanfare when it was announced. It was just a message on Google Groups, and initially the standard was kept in a Google document. Nowadays, the standard lives on GitHub — there's the repo — and the standard itself is defined as a set of protobufs.
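That fetch-or-build-and-push flow can be sketched as a toy model in Python — the names here and the plain dict standing in for the remote cache are my own illustration, not the actual protocol:

```python
import hashlib
import json

def cache_key(sources: dict, deps: dict, config: dict) -> str:
    """Hash *all* the inputs -- sources, dependencies and build
    configuration -- so any change to any of them changes the key."""
    payload = json.dumps(
        {"sources": sources, "deps": deps, "config": config},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(payload).hexdigest()

def build_with_cache(cache, sources, deps, config, run_build):
    key = cache_key(sources, deps, config)
    if key in cache:
        return cache[key]          # cache hit: no rebuild needed
    artifact = run_build()         # cache miss: build (in a sandbox)
    cache[key] = artifact          # push the result for next time
    return artifact

builds_run = []
def compile_step():
    builds_run.append("built")
    return b"hello-binary"

cache = {}
first = build_with_cache(cache, {"main.c": "..."}, {"libc": "2.38"}, {"CFLAGS": "-O2"}, compile_step)
second = build_with_cache(cache, {"main.c": "..."}, {"libc": "2.38"}, {"CFLAGS": "-O2"}, compile_step)
assert first == second == b"hello-binary"
assert builds_run == ["built"]     # the second request hit the cache
```

The sandbox is what makes the cache trustworthy: because no uncaptured input can leak into the build, a key match really does mean the artifact would be the same.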
So if you want to see what exactly the remote execution API is, the answer is in this file, which is a protobuf definition, and that can be compiled to a load of different languages. A build tool that supports the remote execution API is called a client. This is a list from the readme, and there are six tools that we know of right now that support the RE API. They fall into some different categories. The simplest, in a way, are the top two, Goma server and recc. They replace a compiler, so they're a drop-in replacement for your C/C++ compiler, and they're very easy to integrate because, just as with the older tools ccache and distcc, all you need to do is make them available on the system running the build, call it cc, and make will run recc instead. And like ccache and distcc, they will look at the source files, and if a build of that C++ source file already exists in the cache, they will download the object file instead of building it locally. Alternatively, they can distribute the compile to a build farm rather than running the compiler locally. So these are the simplest to get started with — they're drop-in replacements for compilers — but they don't solve any of the problems we have around software integration. They don't deal with dependency tracking, or making sure builds are repeatable, or integrating different components together. So let's look now at what I would call the build tools. You can compare them to make or anything else from that sort of 1970s build tool category: they track dependencies at the level of individual files. Bazel, Pants and Please are all variations on the design of Bazel. The story of Bazel is that there's a build tool internal to Google called Blaze, and various people over the last ten years left Google and thought, oh, we really miss having Blaze available, and so they implemented it from scratch various times. And in parallel, Google worked to open source Bazel.
So now we have three or four different tools built around the same model, with different strengths and weaknesses to each one. I personally think Pants is the most interesting, but I haven't used that many of them. In general, they solve the problem of making builds repeatable. I put "partly" for some of them because they don't sandbox things. Certainly Bazel and Pants don't do what I would call strong sandboxing: they will copy a source file to a new directory, and that's it. They won't isolate it from device nodes or network access or anything else, so there are more ways you can introduce nondeterminism into the build with Bazel and Pants. These tools are also all designed around the idea of a monorepo. The monorepo is this idea that all the source code in a whole company is kept in one big version control repo. If that's how your company works, then perfect, you can adopt one of these tools. But in the open source world, we generally don't work like that, and integrating a project with lots of dependencies using Bazel can be a headache, because you need to wrap each third-party dependency: each one needs to be declared for Bazel to understand it, and that gives you a lot of extra maintenance work. So these tools can be useful, and they're definitely worth investigating, but they have a cost of getting started. Of course, if you already have a build system, you also need to rewrite it so that Bazel, Pants or Please can understand the dependencies, and that can be a lot of work. You want to build GCC with Bazel? Good luck: you're going to first have to rewrite 20,000 lines of automake and autoconf. So the final tool category is that of integration tools. You could also think of them as packaging tools, although they work on more than just packages. These work on the idea that we build a whole component using its existing build system — so we could build GCC, for example, using its existing build system.
BuildStream is the only tool in this category right now that supports the remote execution API. There are plenty of other tools that don't — BitBake and Buildroot are the most common examples, and you could possibly put Docker build in this category as well, although that's another discussion. BuildStream solves the problem of making builds repeatable pretty well, because it has quite strong sandboxing: a build happens in a container with quite limited access to the outside world. It solves the problem of integrating from different places and different repos. But the catch is, because it doesn't have this knowledge of dependencies at the file-by-file level, if you make a change in one element, you have to rebuild the whole thing. So if you're building WebKit and you change the readme file, BuildStream is going to go: okay, the source input — your git commit — has changed, so I'm going to rebuild from source. And you have to wait for a whole WebKit rebuild, even though you only changed the readme file. So that's the trade-off. Neither of these kinds of tools is perfect, but that's the trade-off. Some things to be aware of. When I asked my colleagues that are Bazel experts how they feel about Bazel, I got some mixed reactions, let's say. I already talked about how you have to rewrite the project build system to make use of Bazel, and that it can be a pain to integrate third-party dependencies. It also doesn't have a plug-in mechanism, so if you want to build a language which isn't already supported, that can be tricky. And because of its nature — there's a closed source version of it inside Google and an open source version of it — it's tricky to upstream stuff, because the team maintaining it have to keep both versions in sync. So it can be very, very difficult to land stuff upstream compared to a regular open source project. By far the biggest complaint, though, is that the command line interface has 100 options: you run bazel help and it gives you screenfuls of text.
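The readme example can be made concrete with a toy element-level cache key — this is only an illustration of the trade-off, not BuildStream's actual key computation:

```python
import hashlib

def element_cache_key(files: dict) -> str:
    """Coarse, element-level key: one hash over the whole source tree.
    Any file change, however trivial, changes the key."""
    h = hashlib.sha256()
    for name in sorted(files):          # stable ordering for reproducibility
        h.update(name.encode())
        h.update(files[name])
    return h.hexdigest()

tree = {"Source/main.cpp": b"int main() { return 0; }", "README": b"v1"}
before = element_cache_key(tree)
tree["README"] = b"v2"                  # touch only the readme...
after = element_cache_key(tree)
assert before != after                  # ...and the element's key changes,
                                        # so the whole element rebuilds
```

A file-level tool like Bazel keys each compile step on just the files that step reads, which is why it would skip the rebuild here — at the cost of having to be taught every project's dependency graph.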
So check it out, but check out the other tools as well, because maybe they'll be more suited. Of course, BuildStream isn't magic either. I already mentioned the main downsides, but as well as those, the sandboxing can never be perfect, right? The aim of the sandbox is to guide your builds towards being reproducible — ideally bit-for-bit reproducible — but nothing is foolproof. If you want to implement a random number generator in your build based on the time of the git commit, for example, then you can, and there's nothing we can do to stop that. So it's a guide to making reproducible builds, but it can never be perfect. So that's the client side of the Remote Execution API. Let's talk a little bit about infrastructure, because the whole point here is that we have a separation between the build tools on one side and the build farms and the caching services on the other side. You can plug and play, as it were, so you're not restricted to using one specific cache, for example. There are three big projects that are developing infrastructure: Buildfarm being one, plus Buildbarn and BuildGrid. All of them are easy to spot because their names all start with "build", and they have fairly similar capabilities. All of them support caching to different back ends, whether a local disk, a sharded Kubernetes setup or the Amazon S3 API. All of them support running commands on a machine, although BuildGrid has support for some extra features, such as running inside a specific container, so that can be worth looking at. They have different implementation languages, so if you prefer Python, obviously look at BuildGrid. Two more, the Please servers and Scoot, are included for completeness, but they don't provide the full capability compared to the others: the Please servers are designed just for use with Please, and Scoot is just a build farm without a cache, so they provide less overall. These are all worth checking out.
The downside is, firstly, they can all be quite difficult to deploy. In an enterprise where you have Kubernetes experts to hand, no problem, they'll be able to set it up for you. In an open source project, if you want to set up a build farm, you're going to burn through a lot of volunteer time — unless you have someone on hand who enjoys spending their weekends coming up with Kubernetes pod deployments. So I would like to see better documentation and some easier ways to deploy this infrastructure. Protobufs work fine, but they have some issues, let's say. Sometimes the protobuf package in common distros has a bug that may cause a crash, so in the BuildStream project we ended up bundling a specific version of protobufs and statically linking against that, so we can be sure that it works — because we had a lot of problems of "oh, it crashes on Fedora 36", and it turns out the version of protobufs in Fedora 36 is broken and no one's fixed the bug yet. It's kind of tricky to fix bugs as well, because it's another project where the development isn't completely in the open: you look at the commit history and there's one commit which has 200 things lumped together without much detail of what went on. So it's not super easy to contribute to protobufs either. That is what it is — mostly it works fine, and any standard is better than no standard. My other tip for working with remote API caches is: treat it as a cache. If the cache disappears tomorrow, your build should still work. Don't think, "as long as we rely on this existing in the cache forever, we can use it for releases" or whatever. Don't do that, because cache expiry then becomes way more difficult: you don't know what you can expire and what you can't. So if you're setting up a cache, make sure that you can delete the whole content of the cache and your builds are slower but unaffected. Okay.
I'm going to finish talking about the remote execution API now; my last slide is about what I would like to see. I'd like to see more build tools supporting it, because I think many could. Imagine a world where distributions could share some common infrastructure — Debian and Red Hat-based distributions could share tools. This is an optimistic idea, maybe, but I don't see in principle why it couldn't happen. I'd like to see BuildStream get faster to work with when you're a developer. At the moment it's great for integrators, but it's not optimized for people who are making changes every day to a specific element, because in most cases you have to rebuild from source every time you make a change. On the infrastructure side, again I'd like to see wider support. I think Artifactory in particular would be fantastic if it supported REAPI caching, because a lot of organizations already have an Artifactory, so you could get this for free, effectively. And like I said, I'd like to see easier deployment. This talk is divided in two, and I think we're just at the halfway mark, so I'm timed perfectly. In the second half I'm going to focus specifically on BuildStream, which, as you've already seen, is the first tool of its category to support the remote execution API. If you've not heard of it, it's an integration tool comparable to Buildroot or BitBake in some ways. It's open source, it's recently become an Apache Foundation project, and the 2.0 release is due imminently: the 2.0 release is ready and is waiting on some final approval within Apache, and it's going to be out in a couple of weeks. The latest unstable tag is actually going to be the 2.0 release — it's just waiting on some Apache Foundation paperwork. BuildStream itself dates from around 2016, so the design predates the remote execution API being public, but it already had a similar design of strong caching, where we hash the inputs and avoid rebuilding if the inputs haven't changed.
The motivation for the 2.0 release, and the internal changes that led to going from one to two, was redesigning it around the existing REAPI standards — re-implementing the core to support the standard rather than coming up with its own protocol. The 1.0 series had its own custom cache server and things like that, which aren't needed anymore. The mascot of BuildStream is the BuildStream beaver, because beavers obviously build things in streams, and also the original developer is Canadian, so it fits perfectly as a mascot. I'm not going to go too much into the details of using it, but I'm going to show a little bit. I've just realised the text here is probably too small and too black to read, so rather than show you this slide, let me go into a terminal and show you here. This is an example project which is building GNU Hello. It has three elements, and the final element is this one called hello.bst — look at that, it fits on the screen perfectly. So you've spotted that elements are defined as YAML files. An element is a unit that corresponds more or less to a package, although it doesn't have to be a package — it can be an image or an app or anything. We specify firstly that we use the autotools element plug-in, which means we don't have to spell out running configure and make, because this element plug-in already brings in defaults that do that. We modify the configuration here so that configure and make run inside this subdirectory which is specific to the GNU Hello package, and then we specify the source: it's a tar file, it comes from this alias URL, and this is the content hash of the tar file, so that we know we're getting what we want. Of course, the build happens in a container — in the sandbox — so by default there's nothing in there: no compilers, nothing.
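From memory, such an element looks roughly like this — a sketch only, so the exact keys, the version number and the checksum below are placeholders; check the BuildStream documentation for the real schema:

```yaml
kind: autotools          # element plug-in with default configure/make commands

build-depends:
- base/alpine.bst        # the base element that provides the compilers

variables:
  command-subdir: hello  # run configure and make inside this subdirectory

sources:
- kind: tar
  url: gnu:hello/hello-x.y.tar.gz  # 'gnu:' is a project-defined URL alias (placeholder)
  ref: 0000000000000000000000000000000000000000000000000000000000000000  # sha256 of the tarball
```

The `ref` is what ties the element to exact source content: change the tarball and the hash no longer matches, so nothing stale can sneak into the build.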
So we need some sort of base to build on top of, and for this simple example the base is an Alpine Linux container. This element is an import element, and it's importing another tar file, which is some prebuilt binaries from Alpine Linux. The final part of the project is the project configuration. Here we import some plug-ins, such as a git source plug-in and the autotools plug-in, we define URL aliases — which is just good practice, so that if you want to use a different mirror you don't have to rebuild, because the alias hasn't changed — and we specify that we need BuildStream 2.0. And since I'm here, I guess I can show you a build. I'm running BuildStream from a venv, because that's how you install one and two in parallel; you don't normally have to install it in a venv if you don't want to. There's a build — very fast, right? What's happened is I already built this, so what it's done is work out the cache key based on all the inputs and the configuration, check the local on-disk cache, and say: this is already present, so I don't need to build it. To make this a slightly more interesting demo, what I can do is delete the existing artefacts from the cache — now they're gone — and now when I run the build, it's going to build from source. So this is actually running the GNU Hello build commands... and there we go, that was still pretty fast — quite a small package, but well done. And I can now check out my hello element. One thing that might be a little surprising is that the checkout is not just a binary: the checkout is the whole sandbox, it's the Alpine base container. So to demo it, what I actually have to do — I did this before — is chroot in there, and just show you that here's a hello world program. And if we look in /etc/os-release, it just confirms that I'm not lying when I say it's an Alpine Linux base. So of course, if you're building an application, for example, you don't want to
check out a whole operating system with it. You would do that by creating a separate element that doesn't include the base: there's a type of element which can deploy your application in a certain way, for example building an RPM package or a Flatpak app, and then you would check out that, and it would be separate from the base. So that's the gist of BuildStream — these are the illegible slides — and the last thing I want to do is give a couple of case studies of how BuildStream is being used in real-world open source projects. How many people use the Flatpak... app... I don't know how to describe it... Flatpak app runtime? Everyone here, I'm hoping, has a Linux desktop, and the best way to get apps on a Linux desktop these days is with Flatpak, and most of the apps you'll encounter in the real world are using a base which is the freedesktop SDK runtime. So this is a big project: it's deployed on the millions of Linux desktops in the world that are running Flatpak apps, and it's built with BuildStream. A little bit about the freedesktop SDK. Like I said, it's a runtime and an SDK for building Linux apps. It has about 600 components for 4 architectures. It's mostly powered by volunteers — it does have some sponsors for infrastructure and some development, but it's mostly volunteer powered — which means the development process has to be super efficient. To achieve that, they use a combination of GitLab CI, to automate as much testing as possible, and BuildStream. So here's an example pipeline for a random commit. The commit is updating this Python package, and — again this is slightly illegible — it's built here for 4 architectures: ARM64, Intel 32-bit and 64-bit, and RISC-V 64-bit. The results of these builds are pushed to the remote REAPI cache that the freedesktop SDK project hosts. So this is gigabytes of stuff that gets built here, and then in the next step you can build some virtual machines, just for testing. Moving 5 gigabytes of binaries from one GitLab pipeline
to another could be slow, but using the REAPI cache it's much faster, because even if these builds happen on separate runners, the second build is pulling from the cache, and the cache is local to the GitLab runners. So it all works pretty quickly, and there's not really a need to build from scratch, because of the strong caching. Updating this Python package will only rebuild the things that depend on it, and the updates themselves are actually automated: this is an automatically generated merge request which updated the version tag in the YAML file, using another feature of BuildStream which allows tracking upstream tags and branches. So the processes they have set up are pretty efficient, and can be looked after even when someone's only got a weekend to look at it. Also, because there's a shared cache, some downstream projects that make use of the SDK — one example being GNOME — can use the same cache. They can junction their BuildStream project to the freedesktop SDK, and they don't have to build everything from source again, because they can pull the stuff that was built in this CI — toolchains and compilers included — from the REAPI cache into the downstream project, as long as they're happy using the same versions. So that's my first case study. The second one is a little bit different: this is some work on safety that Codethink has been looking at for the last few years. Linux is everywhere these days — it's in quite dangerous devices such as cars, which is scary, right? You write a bug in the kernel, and who knows what could happen. So a lot of people are asking: how can we use the existing open source tools that we have, but in a way that can be proved to be safe? And DCS, the Deterministic Construction Service, is a stepping stone towards that. What the DCS achieved is a method for certifying the build process of some software. Now, none of the ideas are new — in fact, a lot of them are the same ideas I talked about at the beginning with the RE API: strong caching, and
knowing that if you hash all of the inputs to a build and sandbox the build, then you can be sure you're getting the same thing out the other side. What the DCS project was about is wording that in such a way that it can actually be certified. That's important because, if you're going to certify the result — if you're going to test software — you need to know that the thing you're testing is going to be the same when it's deployed. So this is the first part of making Linux in cars fully testable and safe, but it's an important part. DCS is a design pattern, so in principle it doesn't require any specific build tool, but it does require a build tool or an integration tool that can do repeatable builds. The repeatable builds basically allow us to certify what we care about, which is that we can update or modify the build tools being used and check that this hasn't affected the build output. So in the case of BuildStream, we can use a new version of BuildStream and certify that it produced the same binaries as the previous version. Then okay, we know that it hasn't introduced any new issues — or, if something has changed, we can analyse why it changed. The implementation of DCS uses BuildStream. There's nothing specific to BuildStream in the design; it's just that at the moment it's the easiest path to achieve what's needed. So other tools can implement this pattern, and I would like to see that, but I suspect it'll be more work than what was done for the reference implementation. The reference implementation of DCS was certified back in June against the automotive safety standard with the very catchy number 26262. So that's kind of proved that the design pattern is sound. And if you want to hear more about safety certifications — specifically about the next steps, like, now we've built this software, how do we actually test that the software is safe — my colleague Paul Albertella is doing a talk on Friday on that topic, so I recommend you have a look at that. And if you're not aware,
this foundation has lots of other projects going on around this area as well, under the umbrella of ELISA — I've forgotten what it stands for; the S stands for safety, I can tell you that much, and the L for Linux. So this is my last slide, and I think we've got a little bit of time left for questions. What I want to say is, obviously, enjoy integrating. Please invest in your build and integration pipeline, because your engineers will be sad if you don't. Take a look at the remote execution API, and take a look at BuildStream — the 2.0 release is coming, and hopefully it's an interesting build tool. I want to say that the ideas in BuildStream are almost more important to me than the actual implementation. The current implementation is nice and it's fun to use, but really the ideas are what's interesting: the idea of having strong caching, and being able to trust that your build is fully repeatable — that's what's interesting. So I hope that next time I give this talk, I can go back to my slide comparing RE API clients and list some more tools under there. That's my dream. Okay, thanks a lot for coming, and if you've got any questions, we have a few minutes left — five minutes indeed. I see a question here. Do we have a microphone for taking questions? Okay, so the question was: have we tried building Chromium with BuildStream? Chromium, I think, we haven't, but WebKit we certainly have — the freedesktop SDK includes WebKit, and the results on x86 are great; on ARM, well, how long does it take on ARM... So the experience is good. I wouldn't recommend it for developing WebKit at this stage, because you don't want to have to rebuild everything every time you make a change, but for integrating something like WebKit or Chromium it works well. The key, I think, is to have a fast enough build machine, and of course that's somewhere both GitLab CI and the remote execution API can help, because you can kind of have one big build machine and then say: okay, this element needs to go on the big one. Another
question here. Okay, yeah, that's a really interesting question — to paraphrase, whether you can import Docker images — and the answer is you can. There is a plugin that can import Docker or OCI images, and I think there's another plugin that can produce images; certainly it can produce a rootfs that you can use with docker import. The difference from Docker build, maybe, is that because there's no network access in the sandbox, you have to declare in advance all the things that are going to be needed. So, for example, if your build process requires a C++ compiler, you can't install your C++ compiler and then remove it again — you have to make it available at the start. Which is fine, of course: the way to solve that is to either install it in the image you import, or you can also import tools from the freedesktop SDK — you can junction BuildStream projects together, and that's another way to get hold of developer tools and toolchains. But yeah, I'm interested to chat about this after, if you want more information. A question here. Okay, so the question is: what's to stop BuildStream handling both element-level dependencies and file-level dependencies? Which is something we've thought about quite a bit. The main obstacle is that if you want to accurately know all of the file-level dependencies, you have to have some knowledge of the existing build system, which can be tricky. If it's a Makefile, for example, it's not easy to just interrogate make and say "give me the dependency graph". So I'd say the main thing stopping BuildStream from doing that is the variety of different build systems in use: there's no one way to get the file-level dependencies, unless you force projects to rewrite their build systems, which obviously we don't want to do. What I do think is an interesting idea, though, is taking tools like recc, which can distribute and cache individual compiles, and integrating that into BuildStream. Currently recc wouldn't work, because in the
sandbox it can't speak to the remote execution API — there's no network access, and we chose that. But BuildStream itself could open up a hole in the sandbox purely for talking to the remote execution API, and you could then have these kind of two layers, where BuildStream runs the configure script or whatever sets up the build, and then from inside that sandbox the individual compiles are distributed and cached. So I think there is some interesting work to be done there. Yeah, so there is a feature called workspaces which goes some way to solving that: if you're working on a large integration project and you want to focus on a specific component, you can use the workspace open command, and that will check out the source repo of the element and give you a shell inside the sandbox where you can run make, run compilers, modify the code — and it even has some support for doing incremental rebuilds in there. I've been told to stop, and that's made me lose my train of thought, but yeah, workspaces is the closest we have at the moment. It is kind of expected as well that if you're working on a specific component, you maybe would manually set up a developer environment for that — you'd run make and the tests separately, and then you'd go back to BuildStream in the CI or the integration stage. Okay, well, thanks a lot for coming everyone, I really appreciate it, and that's the end of the talk.