Alright, folks, we're going to get started. I'm going to introduce Tristan, talking about BuildStream, a distribution-agnostic open integration tool. So let's give him a hand and say hi.

Hi. So I've come to talk again about BuildStream; it's been a year, and now it's time for the second talk. Last time I didn't go through this in the right order, so: what is BuildStream? Before I start talking and you're wondering "what is this guy talking about" — what is it? It's an integration tool. We're trying to call it an integration tool because we've been calling it a meta-build tool and nobody really knows what that means. It's a tool which builds, but it's not make and it's not CMake. It delegates builds to other systems and it puts it all together, and you have an integration — it's an integration tool. BuildStream is a pipeline of file system data permutations. It's completely abstract, and it lets you run operations inside an isolated environment, in a pipeline where elements have source inputs and dependency inputs and they create outputs. The basics of it: you have file system data in, file system data out, and things that happen in between. Sandboxed execution environment: we guarantee that there's no host tool contamination or anything, and everything happens inside a container. We do caching and sharing of build results, so if a lot of people are building together and somebody already built something, you don't have to build it — we'll download it if it exists, and we reduce the amount of compiles on the developer laptop, at least. And multi-purpose build instructions and metadata means you can have a project that outputs various things from the same stack.
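As a rough illustration of an element with source inputs and dependency inputs, here is a minimal BuildStream element definition — a sketch with hypothetical element and repository names, but the `kind`/`sources`/`depends` shape is the real format:

```yaml
# hello.bst — a hypothetical element; the build runs in a sandbox,
# with only the declared dependencies staged as filesystem input.
kind: autotools

# Source inputs: where the filesystem data comes from.
sources:
- kind: git
  url: https://example.com/hello.git   # placeholder URL
  track: master                        # branch followed by `bst track`
  ref: 0a1b2c3d                        # placeholder for the exact pinned commit

# Dependency inputs: other elements whose output is staged
# into the sandbox before this element builds.
depends:
- base/sysroot.bst
```

The output of building this element is itself filesystem data, which the next element in the pipeline can consume.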
So generally in BuildStream you're going to be working with a series of components, not just one component — though we have an example at the end with just one component and a way to distribute it — but you can basically have projects that have different outputs using the same stack, so you don't have to have various collections of build stories for your software. And it has a developer story, which, late in the discovery process, we found out is actually the coolest thing that we do. Probably.

So, on to the show. I'm going to introduce our BuildStream beaver. It was our youngest team member — who we affectionately call Tristan, because he's so bright, and it's also his name — who said it should definitely be a beaver, because a beaver builds things in a stream. So here I've launched an Epiphany build, but I can see that we really can't see this very well, which is unfortunate because I have a lot of these screenshots. You can't see that, huh? Okay. Well, I'll have to explain as we go along.

So what are our motivations for doing BuildStream? I'm going to try to jump through this segment because it can be a lot of talking. One of the things we wanted to do, of course, is save time. We're in automation; we're automating stuff; we want to reduce the amount of work that people are doing. We want to kill cross-compilation, because from what I've seen there are a lot of developer hours going into this in projects — and these are really good developers that I've worked with in other projects, and I'm like, what are you doing? — spending their days writing one-line patches and upstreaming those one-line patches to projects, to make sure they cross-compile upstream. And we think you only really need to cross-compile the toolchain: once you have a toolchain and you have a kernel, you can boot hardware, and from there you can just native-compile.
So if you can do that, and if we can provide a system that lets you do that, then we can save all of those developer hours spent on menial tasks, and I'm sure they'd all love to be actually creating something fun instead, right? Smoke-testing builds on a new build host platform: tooling which uses host tools a lot needs to be vetted for each new distro. The new version of Ubuntu or the new version of Debian comes out, it's not supported by, say, Yocto or Buildroot or something, and you have to spend a month or so fixing the bugs — we don't want that either. Complicated setup: this pertains more to historical build tools which were invented more than 20 years ago, and we have to keep using them because everything depends on them, but we end up setting up OBS and these huge setups that are not easily repeatable. They work, they work well, right? But you can't just set it up on your laptop in five minutes and build something repeatable with it. Monolithic repositories of build metadata: this pertains to projects like Buildroot — this probably being everybody's distributions devroom, you all know about build tools, I'm sure — Buildroot, Yocto, these kinds of projects which have metadata in the same repo for the whole stack, from the runtime to your graphics stack and everything. What I've noticed is that you have a lot of friction in integrating patches, especially in the lower levels of the stack, or people generate a lot of breakage in the upper parts of the stack by prematurely merging stuff in the lower levels, because upgrading GCC or upgrading glibc has side effects which are forced upon the consumers, right?
So we wanted to reduce friction, and for that we have a feature which lets one BuildStream project depend on another BuildStream project. You can have completely separate projects maintained by separate groups and separate teams, and higher-level depending projects can pull in the changes when they're ready, test against them, report bugs back to the lower-stack maintainers, and so on. We hope it's going to be a better workflow, but we're putting it in action now, and we're going to see how that works out for us this year. And we don't want tight coupling of build systems and distributions — by "distributions" what I mean here is the payload, right? In a lot of cases, cross-build systems especially have a tendency of writing tools for the specific software that they're going to build, and that makes it difficult to take a tool that has been made to make this distribution and use it to make a completely different distribution. So you're kind of stuck with certain versions and certain setups depending on the tools that you use, and we didn't want that to creep in.

Yeah, so what about the developers? In the integration crowd, we generally think of developers as those people who just hack stuff and run it on their laptop, right? And then they send us a patch, or they send us a new source RPM, and we're supposed to integrate it, and they said it's fixed, right? But they never tested it on the integrated system, and we blamed them for it — but it would have taken them all day to get a rig, find out the process for building it on OBS, find the process for flashing it to a rig, and do it. We didn't really give them tools to make it easy for them, so I think we're blaming the wrong people here. Okay. So, a little segment on what we're doing about these problems, how we're tackling them. Kill cross-compilation, right?
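The project-depending-on-project feature mentioned above is declared as an element of its own. A rough sketch of what such an element could look like (the project name is real, but the URL and ref here are placeholders, and the exact cross-project dependency syntax may differ by BuildStream version):

```yaml
# freedesktop-sdk.bst — a junction-style element: it pulls in another
# BuildStream project so that elements in this project can depend on
# elements defined over there.
kind: junction

sources:
- kind: git
  url: https://example.com/freedesktop-sdk.git   # placeholder URL
  track: master
  ref: 0a1b2c3d                                  # placeholder pinned commit
```

The depending project pins a specific ref of the lower-level project, and pulls in newer changes deliberately — by updating the ref — rather than having them forced on it.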
So for this we're looking into features which let the sandbox execute under a given machine architecture, and we have an abstraction layer for that. It's probably coming in the next six months; we might see something materialize here. It's designed for this; right now we only support native builds, but we're going to do it, I promise. That will just be a way that your project, or a configuration option, can say: try to build this on that hardware. It may require that you run a virtual machine to provide an emulated environment on your own laptop, or that you're connected to a build farm where you have real hardware or development boards to actually run the builds, and they just act as slaves to the build process.

No host tools. This is thanks to Jürg Billeter, who said we shouldn't have host tools, and I said "you're crazy" — but it makes a lot of sense. With no host tools we nip a lot of problems in the bud: if you can have a host tool on your computer, there's no reason you cannot have the same tool in an SDK or in a sysroot, so everything is controlled this way. We track the hash of every binary input into the build, and by saying "no host tools" there's nothing left to chance.

Make it easy to run the production environment — I kind of already covered this. If the tool can run, it's running something that should produce bit-for-bit reproducible builds — provided the source is ready to do bit-for-bit reproducible builds — on any machine, so the time it takes you to set it up is the time it takes to set up a production environment.
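As a toy illustration of the "track the hash of every input" idea — this is not BuildStream's actual implementation, just the principle — a cache key can be derived deterministically from an element's own configuration plus the keys of its dependencies, so any change anywhere below propagates upward:

```python
import hashlib
import json

def cache_key(element_config: dict, dependency_keys: list) -> str:
    """Derive a deterministic key from an element's inputs.

    Toy sketch: real keys cover more (sources, environment, plugin
    versions), but the principle is the same — if any input changes,
    the key changes, and so do the keys of all reverse dependencies.
    """
    payload = json.dumps(
        {"config": element_config, "deps": sorted(dependency_keys)},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# A tiny two-element pipeline: the app's key incorporates the base's key.
base_key = cache_key({"kind": "import", "ref": "abc"}, [])
app_key = cache_key({"kind": "autotools", "ref": "def"}, [base_key])
```

Because the key is a pure function of the inputs, two machines that agree on the inputs agree on the key — which is what lets built artifacts be shared safely from a remote cache.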
Reproducible and repeatable — I've been making a distinction here; some people tell me there's no distinction, but: reproducible builds is getting your builds to be bit-for-bit repeatable, so that every time you build you get exactly the same output, and you can raise your level of trust a lot in the things being built, because you know they're things that are already tested. The other side of the story is when you want to repeat this process ten years from now. You have these definitions of everything that you want to build and how to deploy your appliance, or whatever it is you're doing, and it's important to be able to actually repeat the setup of creating a build machine from scratch. We want to keep our eyes on how repeatable the process is, because we can't say right now what the world is going to be like in ten years; of course we'll want to watch this along the way. So we have a technique in place that lets us cross-compile a base runtime and kernel, and boot that — just to run a shell script, without an init system or anything — which consumes some build instructions that we've generated into a script. That lets you bootstrap a foreign-architecture machine at least to the point where it can run BuildStream, and then you've completed the process and you can always do it again. I'm going to run through this quickly, because I have a walkthrough for you instead.

Multi-purpose build metadata: I create an app. If I'm going to build a Flatpak, I have to build it myself; if I want to build a Snap, I have to build it myself; if I want to build a Debian package or an RPM package, usually somebody is going to do it for me. But in any case, it would be nice if I didn't have to maintain, like, three different subdirectories of my module to say "well, this is how my OS X bundle works and this is how my Flatpak bundle works". I just want one set of build instructions. We want that. So, basically, tools
to debug inside a target environment — you have to put tools into your runtime if you want to use them in your runtime, but we have some interesting tools that I'm going to show off in this talk that cover that. Artifact sharing: I want to test GTK+ against the latest Epiphany and WebKit and see what side effects happen there. I should be able to do that without rebuilding the whole world — somebody's probably built it — just testing the changes. Okay, fun. Now I'm going to have to look at the screen here. So when you run `bst show`... oh my god, okay, let's just try to hold this up. Can you see the — you can't even really see the colors here, right? We can see that there are colors, but this is not helping. Basically, it's saying that all of these elements are cached. In purple are the cache keys: they're basically a stamp, abbreviated SHAs which represent the inputs of a build — our prediction of whether it should be bit-for-bit exactly the same. So this one, gst-plugins-bad, has that SHA; everything is built. So I have an Epiphany that's built, and I want to hack on GTK+, so I'm going to open a GTK+ workspace here. I'm saying `bst workspace open`, then the element, gtk3.bst, and then a directory. The gtk3.bst file defines how to build GTK+ and stuff about GTK+, and here I've basically checked it out at exactly the version I was going to build, according to the project data, and now I have all the source code inside. Just keep in mind that after doing this, I've opened up Emacs and I've reversed the GTK+ label angle, so that by default it's at 180 degrees — labels should be upside down. I won't go through that. So I'm going to show it again and see what the result is, and hope that you can — okay, well, this one is green; it says "buildable" — that's GTK+ — and here: waiting, waiting, waiting. Here we have WebKit, waiting. Everything that depends
on GTK+ basically says: well, it's ready to build. And this is my pipeline — this is what's going to have to happen if I run a build. If I want to see Epiphany with a new GTK+, I've got to build all that. But I don't want to build WebKit, alright, so let's try without strict mode. Strict mode is the default: everything has to be built, and every reverse dependency has to be rebuilt when something changes, which is what you want in production builds. Without strict mode, we say: I'm going to test something with the exact version of everything, but I'm not necessarily going to rebuild against everything. I'm going to lose out in places where I need to do static linking; for that I'd need to add extra sugar to my project to say, no, I need to be rebuilt strictly every time because I consume static libraries. But mostly this works in a Linux environment where everything is dynamic; you can test. Here's what I wanted to show you — and now that the lights are off, you can see the cache keys. It says "cached" for everything except GTK+, which is buildable, but the cache keys have changed color. The dim ones are the weak cache keys; they're calculated differently, and we're falling back to them and going to use them. Those won't be pushed upstream: things that get rebuilt this way — in this case GTK+ — won't get pushed to a shared artifact cache, because it's just a local workspace. And this lets us just whip things up together. So — oh yeah, I was going to build this, but a buddy told me he had a fix in GStreamer, so I want to check that out and add it to my build first. For that I'm going to track. I'm going to track GStreamer, and — we can't really see it well, but there's an info message saying "found a new revision". Basically, inside the file which defines how to build GStreamer, you have the upstream URL, you have a tracking branch, and you have the commit SHA that you want to build. `bst track` will take the branch information and use it to derive a new
commit SHA, and say: let's update your project such that you're building the latest of this component — and you can do this recursively or not. Here I've done `git diff`, and I can see — well, there's this red mumbo-jumbo which has gone away, and some green fuzzy stuff: those are commit SHAs. So, can we build it? Now we're trying to build it. Before we go, we have this "buildable", which is also GStreamer, but we're also in non-strict mode, so we're going to build just two things. And right now we have our cute UI clicking away here — Sam really loves the UI and the colors, so I'm going to try to sell it. Yeah, we've got something pretty interesting. It's just a little hack with terminal ANSI escape sequences which lets us reserve some lines, and we have our rolling log which just keeps going — you can redirect that to a log file or something — but while you're watching the build, you have this display which sticks at the bottom, and you have your different queues here. I've pulled no elements from upstream, and there are three — actually there's zero, two, and zero; red is failures, right? So you have your three queues, you can see things moving through the queues, and we have counters to say: okay, what are we doing right now? We're basically scheduling everything that's going on here, and we have a full timer — we've been going for one minute and twenty seconds on this slide. Alright, not bad, not bad. I ought not to wait for it to build; when it finishes building it will tell you "yay, we built", and, you know, you don't need to know more. But what is interesting is that GStreamer finished compiling before GTK+ 3 even got to make: GTK+ 3 was running configure, and GStreamer, which switched to Meson, just speeded right through configure and killed it in this race. So it looks like Meson is a good idea after all. Now I've built everything, I've got my upside-down labels, I think, and now we can run a shell, right? Oh,
this is even worse, oh my god. Well, that was `bst shell epiphany.bst`. Basically, what we're doing here is taking all the different little builds that we have — every artifact is basically the output of `make DESTDIR=foo install` — and we have a deterministic staging order dictated by the dependencies of everything, and we just whip them up into the same directory with hardlinks, and we run some integration commands: we update GSettings schemas, font caches, and things like that after staging everything. We also do this before every build, so every build has exactly what you expected. So here I launched Epiphany from the shell, and you have your upside-down loading — we don't render this bit with labels, but we're online, and we have this upside-down text here — so we know that we did run Epiphany with the GTK+ that we hacked to turn the labels upside down. At least — there are some problems which have been reported about Epiphany specifically: it needs to access the web, and in the regular testing shell environment we don't unshare the namespaces or unshare the network or anything like that — we don't use a secure container environment — but we still have to echo "nameserver 8.8.8.8" into resolv.conf in order for it to work. So there's some little tweaking to do; we're thinking maybe some client-side configuration or project-level configuration, because this is how you can create a shell environment where my application should run — but that could use work. It's better to go with a VM; it's better to actually distribute what you've done, right? You never know what's on the host anyway.

So, pipelines. BuildStream is elements and pipelines; let's take a look at what some plausible pipelines could look like, to give some idea of what's going on. This one generates an image. We're not going to do this here, but we thought of doing this. Right now in GNOME, what we have for building GNOME components uses a debootstrap base, which lets
us just say: okay, I don't care about my system dependencies, I just want them to be there, and I want to try to build and run stuff. So at least we have a specific version — we're using Debian testing — and this is orchestrated outside of a build pipeline, because you cannot really run debootstrap or multistrap twice and expect the same binary result. So we run it on a server continuously, and when we add dependencies it gets re-executed, and the result gets committed into an OSTree repo, which is controllable. We have a couple of hacks like this to make sure we get predictable data into the pipeline. So here we import from an OSTree repo and then we build stuff: we can import the base system and build on it, and then we can use a compose element. A compose element takes what you want out of what you built and makes one output, which is usually a lot smaller than what you had when you just used everything that was in `make install`. You want to decide: do I want my debugging symbols? Do I want all the documentation? And you can tweak your elements to say, well, this part should be in this domain — you can handhold all of this with glob patterns and such. Once you have your compose element, we send it to this x86image element, which is basically a script that we have in the bst-external repository. It basically does what wic does in Yocto — it was generally based on what wic was doing. There are some new — well, new-ish; maybe five or ten years old, recent enough — options to filesystem tools like mkfs that allow you to generate images from the data you have in a directory without ever becoming root. That, in conjunction with a DOS partition table, another user-space utility, and syslinux, lets you splice partitions into an image into something you can boot, without needing a loopback mount or anything else you're not allowed to do as an unprivileged user. So that's one configuration. This is another example of how we do Flatpaks, and that's
coming this week, I think — next week; we were looking at a release last week — the freedesktop-sdk project. Basically, this is how we would build the GNOME stack. First we import the freedesktop runtime, which is already built — it was previously built with Yocto plus flatpak-builder for the 1.6 SDKs and runtimes; we're building the 1.8 ones with BuildStream, from the bootstrap up, with the freedesktop-sdk project. Here we just take the same build metadata that developers use on a day-to-day basis to build and test, but we build it on top of this SDK, and then we use similar compose things. Compose lets you — we're going to split out, for example, the locale extension of a Flatpak. I'm not sure how; I think maybe a Flatpak person knows what I'm talking about, but SDKs have mounts of sorts: you have locale, you have debug, you have different things — split them out. And here you just add some fairy dust, which is the Flatpak metadata file, which informs the Flatpak runtime what to do about portals and permissions and such — metadata that Flatpak would understand, but I have no idea. And then you get a checkout that you can make an OSTree repo of and deploy as a Flatpak.

For today we have a demo. It's not what I wanted — at least it's half of what I wanted. In this demo — we can see it a little bit better here — I'm going to build Glade on top of a very specific Debian base, and then I'm going to generate a Glade package in that environment, targeting the Debian version which I staged, using Debian tooling which was staged as part of that import process from OSTree. Here we have a bit of a sample of what .bst files look like. This is basically an import element which says: let's bring stuff into the pipeline, using an OSTree source at a given URL, with a GPG key that I'm revisioning locally to check that my download from that OSTree repo is signed by the people I expect; we're using amd64, and we have a ref. Alright, then we have to
do something that's actually a bit more freaky. We've imported that multistrap thing, but the multistrap output isn't configured — it's produced for a bunch of different architectures, and the package configuration can't be run at creation time. So when you get onto a host that you know is going to build, we stage it all and we run dpkg to configure the packages, and we add a little bit to that, to remove stuff we don't want and to tiptoe around it possibly failing without really being root. And since we have the install root set to / here, that indicates that this script element — it's just a script element, which runs commands — stages all this, runs some commands, and because the install root is /, the entire output is the entire sysroot. It's basically just done a transformation on it, and the output is what we want, because that's an environment we know is safe to execute in. Then, when we build Glade: this is an autotools element, because it's nice and slow, unlike Meson, so it adds padding to your build time in the demo when you have minutes to burn at the conference. Basically we just say: I depend on this sysroot which I just previously configured, and stage me these sources — these are the sources I want to build, and I'm using an alias instead of the full URL, something like `https://…/glade.git`. And here we have some interesting stuff: the public data. Basically, everything we've seen so far is either core BuildStream format or configuration declared by the plugins which implement it, and that's all validated — if you ever write a typo or an invalid value or something, you get an early abort. But the public data is completely free-form, and what's special about it is that in a pipeline such as this, every element can see the public data on all of the elements it depends on. So a given element can say: this is some metadata that a later element can consume, and it doesn't have to be bound to any API constraints; it's rather free-form. So this is actually
informing this element, the dpkg-deploy element, of the things it's going to need to know to be able to make a package out of me. Then we move on — most of this happens in Python, in a custom element, so we don't see a lot of script here — but basically this dpkg-deploy element is going to take the input artifact. Part of the dpkg-deploy custom element's configuration says: tell me what you want me to package, and tell me what part of my dependencies is the base — because it has to decide: I'm going to stage all of these dependencies and try to run on them, and this other dependency I'm not putting there, I'm just going to package it. So it puts them in different parts of the sandbox: one of them is the execution environment, and the other one it's going to do something with — it's going to package it. Yes. So, I have a feeling we might be able to see a bit better here, right? Better? Okay, what do we have here? This is the project, right? We have that key which verifies the input, and here we have our elements, and we have our cached elements, nice and big, and they're already built — so I can't really build them for you right now. Maybe I could; we'll try that after, if we have time. So, look: I'm going to check out that last element from the slides; we're going to check it out into a location, and there it is. What the package element did is just go through the different domains that we use — there are different categories for stuff — and say: well, I'm going to take everything in the runtime domain and put it in a runtime package. It's really stupid and straightforward, but it can be tweaked, and those categories of files — those split domains, as we call them — can also be configured, so you can do fancier stuff. And it's a work in progress, which is why it doesn't install cleanly on my OS — so I don't have a Glade... ah, and I'm doing --force-all. Let me see — I think I know what happened when I did it
without --force-all: it said "you don't have" — yeah, the architecture; we didn't sort that out. And there's also a missing description — it needs some description in the metadata. Huh, that's just a warning. Okay, "any" was a bad mistake, you know... well, that's really bad; don't do this, right? But it works — except that the display is very, very small. Basically you can use it, and it's running on my host, against my host libraries. And — yeah, I'm running Debian stretch, so I should have built it against Debian stretch, but I only had a testing runtime available at the moment. But what's interesting is that you can target specific versions: you can develop your package for Fedora, or for Debian, or for whichever, on your Gentoo machine, and the host should never be relevant to what you're building, because your host is always just a moving target. I think that's pretty cool. And right now we're getting into — soon we're going to have Q&A, but let's see. We said architecture amd64, right? Yes? Huh, okay. I'm curious to see if I kill the warning with this, and — oh my, okay, we've failed our 80-character terminal. Sam, I'm sorry, this is horrible, outrageous... no, no, this is horrible, I know, I know. There we go — but then nobody can see anymore... okay, and that should be the same. Yeah, okay.

So, it's time for the Spanish Inquisition.

Q: When you're building binary packages for Debian, Fedora, or whatever, am I right in saying that you're skipping straight past the source package stage?

A: Yes, I'm going directly to, essentially, a glorified tarball.

Q: So you have a Debian package, a .deb, that has never had a .dsc — or, in Fedora land, you'd have RPMs that just never had a corresponding SRPM?

A: Yes, basically. We want to use the same build instructions, and we want to treat the packaging systems as only a packaging system, and not a build system,
right.

Q: So these packages can only ever be, like, released or uploaded to a distribution that has made a decision that this is an okay way of working?

A: Yes, yes.

Q: I have a question — I realize this is all generic, but in practice, the compose stage and the image stage: once you have caching of the artifacts, those stages tend to be quite long and take a lot of time, especially if you're going to build a source package, unpack it, and have all of this packaged. Could you comment on how we could improve those stages, so you actually get the minimal amount of work for tiny changes? Or just explain what happens in practice in those stages?

A: Well, unpack all the — at the beginning, to create a system to build on top of?

Q: No, no — in the compose and image-building stages, the last two stages, where you had the bootable system.

A: So there, in that example, we don't use packages at all. But there is another project that we have — it's not using BuildStream yet — where we have something similar which creates RPM packages
using `rpmbuild -bb` with the build steps skipped, and it will use rpm with a `--root` to install into a place — it will do that. But that's especially because it was for an organization which was using RPM, and using the MeeGo image creator, for that one project.

Q: I guess what I'm looking for is: can you start with all the stuff that hasn't changed? Would you integrate something to get the minimal amount of work to recreate a root filesystem based on what changed — those kinds of changes at the file level?

A: That's very difficult to do. At the file level we don't have that, really, no — it's a one-shot compose. For compose it's very doable, but constructing the image is very tricky, because you have partitions and you need to splice them. Yeah, it's difficult. Anybody else?

Q: How do you get your initial bootstrap environment — enough compiler to compile the rest of your system, that kind of thing?

A: So, Javier and Adam are working on the freedesktop-sdk project, and they are doing a project which does the bootstrap. I'm not sure where they are with it yet — I'm just not sure what's the input going into the cycle, but it should be circular at this point. You basically have glibc, GCC, BusyBox — and I think you need GNU sed — to get through the build of GCC.

Q: So that's completely outside of BuildStream, is it?

A: Well, BuildStream has been doing these bootstraps since the beginning, because it's very important to us to know that we can — but yes, that's a separate project. The way it works is: the first time we built it, it was from a freedesktop SDK 1.6 that we were importing, I believe, and now that we've built it once, we commit it. And the first stage is cross: in the first stage you generate an output for a target, and you never execute on the build host any of the binaries that you generate; but then you can import that on a foreign architecture and continue the build from there.

Q: How many projects have
been, like, imported — to have the metadata and be rebuildable? Any of the GTK stack?

A: So, since GUADEC we've had a system going where we were building from conversions of JHBuild. As far as GNOME goes, there's everything that was in JHBuild except the apps, because the release team wants to make the distinction that apps go into Flatpaks and only the core gets built separately. So those are currently being built with BuildStream, and Javier should get CI online next week so that we have auto-builds. Aside from that, there are other projects which cannot be spoken of. But yeah, I'm hoping to get to a point where I can make Flatpaks of Glade and have it myself, and we're going to talk with Alexander Larsson about having support in Flathub as well.

Q: What about BSD?

A: BSD — you mean, not other architectures, but other platforms? So, we have a Unix backend, because right now, by default on Linux, we use Bubblewrap for the containers and we use OSTree for the artifact cache, because those are optimal solutions to the problem — and there are a lot of Unices out there which just don't have that. It doesn't really make sense to make OSTree supported everywhere; I don't think that's going to fly for every different Unix-like platform. So we have a Unix platform backend which requires root, where you do a chroot and you use tarballs instead. Somebody — I forget his name — came in last week and started testing building GNOME on BSD, and he's got some hack patches that we can inspect and use to fix things for BSD. But we want to make better platform backends for specific targets, using the technologies which exist on those targets. Windows is a long time coming, I think — it's not next week.

Thank you very much. That was our last question.