Okay, hi everyone. My name's Chris and today I'd like to talk to you about creating Debian-based embedded systems in the cloud using debos. I'm an engineer at Collabora and I work on custom Debian distros for cloud, embedded and PC applications. I work on continuous integration of these distributions, packaging for these distributions, over-the-air upgrades, tooling, and I'm also learning some Rust as well. So today we're going to talk about why you'd want to use Debian as a base for your distribution over alternatives like Buildroot or Yocto, some of the internal design decisions of debos, and how to use debos. debos relies heavily on YAML configuration files; I've left a link here to a tutorial if you're not familiar, but it's very simple really. I'm also going to go through some future plans for debos, and hopefully at the end, if we've got some time, I'll take some questions and give some answers.

So first: what is a GNU/Linux distribution? Basically it's a collection of software packages created for that distribution. It's also a collection of like-minded developers who work together towards the same goal, and each distribution has a different goal in mind; some of them might be financial, others might be social. Debian and Ubuntu use dpkg and apt, and Debian's goals are mainly social. Red Hat and Fedora use RPM and YUM to install packages, and their goals are mainly financial: they're looking for a stable distribution for business, for enterprise customers. Everyone's got their own preferences; you pick and choose what you like. In the end the software you end up with is pretty much the same, and you may say it doesn't really matter that much. It just happens that my personal preference, and the personal preference of a lot of Collabora employees, is Debian.

So why would you want to create your own distro? Most hardware development kits these days are supplied with a general-purpose distribution for evaluation. These can have old kernels, old packages, outdated configuration: insecure stuff, really. Normally you use the reference distribution to evaluate the hardware, and then most of the time people are stuck with what to do after that, with how to port their software to these platforms. You may also want to create cloud images, for instance for AWS, Google Cloud or DigitalOcean, and you want a nice safe base for all of these things. So you want to create your own distro, rather than use something off the shelf plus a custom script that installs everything you're going to use. Normally the distributions supplied for evaluation, and the stock cloud images, carry a lot of bloat that isn't actually related to your final application, and, as I've already said, outdated and insecure packages and lots of incompatibilities. Obviously there are applications where this isn't a problem, but on the whole, if you buy one of these cheap dev boards, you're going to get something that is outdated and insecure, so it would be nice to create your own distro to put onto a board like that. But your own distro, or your own flavour of an existing distro, is a lot of work to maintain: there are a lot of packages you've got to create, and a lot of work to pull in security updates and new packages. So my suggestion is: don't reinvent the wheel. You want to base on proven technology.
So that brings me on to Yocto and Buildroot, which are proven technologies used for embedded platforms. Now, I would say that using Yocto or Buildroot is good, but only really for specific hardware: these cheap embedded boards from China, or your own custom hardware. Using Yocto or Buildroot is quite difficult when you want to do something generic, like make a cloud image for AWS or an image for laptops. Also, with Yocto or Buildroot, creating a custom distribution (or root filesystem, in the case of Buildroot) can become a bit of a maintenance nightmare, because you've got all of these packages, and if you deviate from upstream at all you've lost your security update path. All of the packages are compiled on your machine: basically Yocto and Buildroot download the compiler source, compile a compiler, and then compile each of the packages. This can take many hours, and you really need some heavy hardware to do a good job. There's a high learning curve to these tools; it's not something you can just type make into and end up with an image that works. You've really got to invest a lot of time and effort in creating the image and customising it for your purpose and your target device. And really my problem with Yocto and Buildroot is: why make your life hard? Things are out there ready to make your life easy, and we all like an easy life, don't we?

So, Debian is traditionally seen as a desktop operating system, but in recent years a lot of effort has gone into enabling embedded targets, like the Raspberry Pi and other cheap embedded boards. There's a lot of work that's gone into enabling Debian on RISC-V as well as ARM, and this work continues. Debian was first released in 1993 and it's quite widely used; it's in the DistroWatch top 10 list. I think it was number three a couple of years ago. It's dropped down now because some Debian-based derivatives have been pushed up higher, like Ubuntu and elementary OS, but Debian and its derivatives still dominate the DistroWatch list. Thousands of volunteers have shaped Debian into what it is today, and they follow the Debian Free Software Guidelines and the Debian Social Contract, which have also been shaped to make sure that Debian only contains things that are worthy of being included in a decent operating system. There are well over 50,000 official packages, covering most popular software along with quite a few libraries, and all of these are really easy to install: you can just do apt install plus the name of the package, like Firefox or LibreOffice or whatever, and everything ships with a configuration that makes sense for most uses. There are different ideas of what kernel to use (there's even a Hurd kernel for Debian), there are different packages for things like desktop environments, and there are packages for different web browsers; you can pick and choose what you like to make your system the way you want it. There's a really great community around Debian: lots of tutorials, forums, and mailing lists that are fairly friendly. Some of the processes, like filing bug reports by email, are fairly archaic, but because they've been around for so long they're processes that everyone understands. I would say it's fairly easy to get started with Debian.
There are at least three branches of Debian: stable, testing and unstable. Stable at the moment is known as Buster, testing is known as Bullseye, and unstable is always known as sid. The codenames that stable and testing point to change with each release; stable, testing and unstable are effectively symbolic links to these codenames. If you stay on the stable branch you'll always update to the latest stable release.

There are timely security updates in Debian: as things are fixed upstream, the fixes generally trickle down into Debian packages. There was a Bluetooth bug found fairly recently, and I think it took less than a week or so for the fix to trickle down into Debian stable as a security update. The community is fairly good on that, and there are also paid developers who work on security and vulnerability updates, so you can be assured these will trickle down really quickly as and when they're needed. Most importantly, using Debian as a base will allow you to work on the most important part of the project, which is your application. You haven't got to waste time worrying about all the underlying packages; you just pull from someone else who's done all the QA testing. It's quite a nice experience in terms of developer time. I've heard of developers spending half of their week worrying about security updates and the other half working on their application; with a distro such as Debian as your base, you can reduce that to working on your application for probably 90-plus percent of the time.

So, as we've already said, there are stable, testing and unstable branches of Debian, and each has a codename: Buster, Bullseye and sid, with stable, testing and unstable acting as symbolic links to them. All of the bleeding-edge software is packaged into unstable: as soon as a package has been built for unstable it's released into the wild. Developers usually run the unstable release, and this makes unstable a QA staging area for the testing release, because packages trickle down from unstable into testing around two weeks after upload, and only if no major bugs are reported. So the developers who run unstable essentially do all the QA testing on the packages for about two weeks before they're pulled into testing. Testing is therefore about two or so weeks, and no release-critical bugs, behind unstable, which makes for quite a nice, fairly stable release.

A stable release is then made roughly every two years. After the release is frozen, only security updates and minor releases of packages are included in the release, so you don't get the latest and greatest software. You can enable the backports repository (a line like "deb http://deb.debian.org/debian buster-backports main" in your apt sources), which brings down some of the latest and greatest packages that certain developers are interested in, such as the Linux kernel; in Debian stable the Linux kernel is quite old, so if you're going to run stable I'd suggest using backports to pull down the latest versions of packages. You wouldn't want to mix packages from different releases, because this can cause a dependency mess with things like libraries; it's just generally not a very nice way to do things. I recommend using testing unless you're very brave. I use sid, and a lot of Debian developers use sid and don't really see many problems, but when you do get problems it's kind of a big problem to sort out.
So I would recommend testing, for that extra bit of QA. Some people say that stable is quite old. I would disagree with these people and suggest that if you use stable along with backports you can have a fairly up-to-date, modern system which is very, very stable. Also, these days a lot of software is packaged using Docker or Flatpak, so if you really want the latest and greatest applications I'd suggest using Flatpak or Docker to install those.

Now some disadvantages of using Debian. At the moment Debian only really cares about systemd. My view here is that systemd is the way to go; systemd is maturing really nicely these days. But Debian is changing now to let people choose their own way of starting the system, which is quite nice. The thing with Debian is that it's seen as quite conservative: changes aren't necessarily implemented very quickly, and they're very conservative about new technologies. Packages are built with GNU libc, so you can't really run Debian on a microcontroller, but I think systemd plus glibc is quite a nice combination for medium-sized boards. Originally Debian was designed with desktop and server use in mind, but these days there's been a lot of work on embedded platforms, and you can quite happily run Debian on a Raspberry Pi or almost any kind of ARM or RISC-V board; it just depends on the kernels and boot loaders that are packaged for Debian. Documentation in this area could be improved vastly, but I think what is out there now is very good. As I've said already, there is paid security support, but it's quite limited; it's really for the select few who actually want to pay for security support, so it's not something that can really be relied on. Debian also has a quite slow release cycle: releases are basically done when they're ready, which averages out to about every two years, but this isn't necessarily a bad thing, since sid contains lots of new releases, so things are nice there.

So now you know the background behind why you'd want to create a Debian image over something like Yocto or Buildroot, and now I'm actually going to go through how you would create your own custom Debian image, or Debian root filesystem, whatever you might call it. The first step is to create an image: you'd use something like dd to allocate a file of, say, four gig (or however big the image you want to create is) of all zeros. You'd then use a tool like fdisk or parted to create a partition table on that image. That could be GPT or MS-DOS, whatever your hardware requires; for x86 platforms using EFI boot you'd want a GPT partition table, but some embedded boards are quite fussy about what they'll boot; the first-stage boot loaders are quite fussy. Then you format the partitions using the mkfs tools. In most cases you want a root filesystem of around a gigabyte or maybe more, plus any other partitions required by the embedded platform you're running on: for instance, EFI platforms need an ESP, and platforms like Rockchip need a GPT partition for U-Boot and a separate partition for the kernel to live in. All of that is really an implementation detail of your particular system.
After that you'd mount the partitions through loop devices, using something like kpartx, which is always a problem: sometimes loop devices just fail for no reason on certain machines, and it's not really reproducible. Then you can chroot into that mounted image and use debootstrap to create a basic Debian filesystem. debootstrap basically downloads the Debian packages required for a very, very minimal system and installs them, and eventually you end up with a system that's got some basic tools in it: a shell, systemd, and apt as well. You can then expand your system using tools like apt to install other packages, and install custom packages using dpkg. After that you're going to want to set the hostname of the system, add user accounts, do other configuration, things like that. Then comes the cleanup stage, where you unmount the image and clean up all the loop devices, and then you'll want to do something like compress the image and create a bmap file for it, if you're going to be flashing it to an eMMC for instance, and save the build log so that people can see what happened in a reproducible way.

And this is nice: it works. There are tools out there that do these steps, like spindle, which is for the Raspberry Pi, and it works until it breaks. That really is the problem, and we've had this with loop devices: they're so fragile that in one kernel version it will work and in the next it won't. There's also an issue with loop devices where, if you're building lots of images in succession, eventually the whole kernel will lock up. This is just a problem with using loop devices; there's no real fix for it. So there are lots of tools out there that do this, as I've already mentioned with spindle. I've linked here a presentation called "The many methods to build a Debian image", which covers lots of tools and how to use them to create Debian images. Interestingly, debos isn't on the list yet; I'll have to get in touch with the author to introduce him to debos, or maybe he's watching this presentation now, which would also be quite nice. But basically the conclusion of that presentation is that these other tools serve very specific purposes; they're not quite generic enough.

So debos was designed to be inherently more flexible than these tools, and also robust against random failures like the issues with kpartx and loop devices. debos generates a complete distro from one configuration file, and that configuration file can be stored in version control, which again is quite a nice thing. Changes between versions of debos don't really matter so much, because debos only handles generic things, as we'll come on to in a minute; with other tools, the version of the tool you use to create the distro can have an effect, because some of these tools have certain features which are only enabled later down the line. debos is constantly being improved upon by Collabora, and the main reason debos was created was for Apertis, the automotive Debian derivative, where the developers were having such problems creating images using the steps I've just described, with loop devices, that they just thought: right, we need to create a completely custom bit of software to do this. I also think that getting started with debos is actually very quick compared to some of these other tools, which have got their own quirks, intricacies and things to learn.
Obviously debos has those too, but it's a lot nicer learning debos than the other tools. So, debos runs a VM on your machine using a library called fakemachine. Currently we use KVM, the kernel virtual machine, to create this virtual machine, but as we'll come on to later, we are improving this. The disks are attached to the VM itself, so there are no loop devices involved, and attaching disks to a VM is very, very commonplace (every VM has got a disk attached to it), so this works very nicely.

Under debos, the steps to create your image are contained in a recipe file, and this recipe is translated into commands which are run inside the virtual machine. The steps in these recipes are known as actions, which abstract the changes to the files and the commands that are run in quite a nice way. If there's no ready-made action (I'm going to go into which actions are available in a second), you can basically just run a shell command or a script inside the VM; it's quite easy to do that. We also welcome action ideas as well as action patches upstream. And when things go wrong it's really easy to clean things up: all you do is kill the VM and everything's gone away; you can restart from where you were, and it works very nicely. The images are reproducible on your PC as well as in the cloud, so basically wherever you run the tool you'll end up with the same kind of output.

So there are loads of people using debos. Apertis, as we mentioned, is a Debian-based platform for automotive and consumer use, and it uses debos to generate reference images for lots of different platforms: they do cloud images, they do images for the Raspberry Pi 3 and 4, they do lots of other reference platforms like i.MX6; the list really is endless. KernelCI, which is a Linux Foundation project, uses debos to generate the root filesystems for continuous integration of the kernel, and it also uses debos to create root filesystems for LAVA health checks, which are used under the hood of KernelCI. Radxa (I hope I'm pronouncing that right) use debos to generate reference images for one of their PX30-based boards; hopefully we're going to see them use debos more. We've been working quite closely with them to introduce their developers to debos, since they were using one of those build scripts I was talking about before, where things just go wrong quite a lot of the time. The Mobian project, which is Debian for mobiles, is a project by one of my colleagues, Arnaud Ferraris, that uses debos to generate images for the PinePhone, PineTab and Librem 5 phones, and I've heard there's quite a lot of interest in Mobian recently, so that's definitely a project to check out. Plasma Mobile, the KDE project, use debos to generate reference images for their Neon platform. Gemian (I hope I'm pronouncing that right too) is Debian for the Cosmo Communicator PDA, and they use debos to generate images. And Reproducible Builds use debos to make sure Debian packages can be independently verified, by creating base images for the reference build system. So there are plenty of people using debos; if I've not included you on the list and you're upset about that, please email me and I'll add you for the next time I give this presentation.

So that brings me on now to what debos actually is. The core of debos is written in Go, and there's no need to know Go unless you want to write patches upstream for us, which again would be very much appreciated.
The reason Go was chosen is because it's similar enough to C and there's a low barrier to entry: you can learn enough Go to start writing patches probably in an evening. My biggest problem was working out how to compile everything and where the source was kept, but we've got documentation on how to do that now. There's a separate library called fakemachine which handles the virtual machine; it abstracts all the virtual machine stuff into a separate place, so debos runs on top of that. debos and fakemachine are in Debian stable, testing and sid, so you can download those for your amd64 host and run them on your Debian system. There's a Docker container for debos which you can run on any kind of Linux-based system, as long as you've got KVM available and you're in the right group. Or finally, you can install debos from source on other distributions, like Arch. There are certain intricacies and things to be careful of: for instance, on Ubuntu the kernel image is readable only by root, so you can't read the kernel as a regular user, which basically means you can't bring the kernel up inside the VM, which is basically a non-starter. So for these platforms I would recommend using the Docker image, because the Docker image contains everything needed: it contains the kernel, the binaries, all of the libraries, and they're all tested together. So you can run whatever kernel version you like on your system; as long as you've got KVM available, you can run debos through the Docker container.

So I've alluded to this already, but a debos recipe is a YAML file which basically defines the steps the software takes to create the image. YAML is fairly simple and can be version controlled in the same way as any other script, so a git repository is what I would recommend, and as we'll see later, that can also help with things like continuous integration. The recipe consists of a header, which holds metadata (currently this only consists of the architecture you want to build for), and then after that there are multiple actions which are chained together, each of these actions with their own properties; again, I'm going to come onto what these actions are and how they all work in a minute. Comments can be put in the file; they're prefixed with the pound sign if you're American, or hash if you're English. The YAML file is pre-processed through a templating engine, so variables can be passed in from the command line, and there's also some basic scripting in there, so if statements can be used, and together with variables this is quite powerful: you can basically choose different packages to install based on, for instance, which architecture you're building for, or you could have different variants of your image with different packages; the list really is endless. Also, the other nice thing is that recipes can include other recipes, so you can abstract things off into their own separate files.
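To make the structure concrete before we walk through it, here is a reconstruction of the simple-ospack.yaml example described next. It's pieced together from the narration, so treat details like the mirror URL and the exact package installed as illustrative rather than a verbatim copy of the slide:

    # simple-ospack.yaml: creates a tarball of a basic Debian system
    architecture: amd64

    actions:
      - action: debootstrap
        suite: buster
        components:
          - main
        mirror: https://deb.debian.org/debian
        variant: minbase

      - action: apt
        description: Install an extra package
        packages:
          - openssh-server

      - action: run
        description: Set the hostname
        chroot: true
        command: echo simple > /etc/hostname

      - action: pack
        file: simple-ospack.tar.gz
        compression: gz

Note how the four actions simply run in order, top to bottom, inside the VM; that's really all there is to the shape of a recipe.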
So here I've got that example, simple-ospack.yaml, and if you want to follow along you can copy it down. You can see the description here says that this recipe will create a tarball of a basic Debian system. The architecture is the first thing defined, which is amd64; this could easily be armhf or arm64, basically any architecture Debian supports. And then after this we've got a list of actions: you can see the actions section, and the actions are sorted in an array, each prefixed with a little dash. There are four actions here, run one after the other: we run debootstrap to set up the basic system, we then run apt to install a package, then we run a command to set the hostname, and then we pack everything up into a tarball. If you're following along at home, you can install Docker on your system, then run the debos Docker image against the YAML file I've just spoken about, and at the end you'll see this output and get a file called simple-ospack.tar.gz. I've had to remove some of the output here because otherwise it wouldn't fit on the screen, but basically you get all of the standard output and standard error of any command that's run, which makes it quite nice because if there are any issues you can just see on the screen what's happened, exactly what steps were taken. So basically here again I've shown the recipe on the left-hand side and the output on the right, and how each action is shown on the right-hand side. The image takes about four minutes to create, which is not that long. It gets a little bit longer when more things are done, but I think you can have a fully featured image in probably about 20 minutes, which is quite nice.

You can also run debos under GitLab: we use GitLab CI at Collabora to create images for clients, and you can set up GitLab CI using YAML again. Here is a very simple GitLab CI file which runs the same example as before, and I think here we've got less than 20 lines; every push to that repository is built using this continuous integration pipeline. You can set up schedules and all sorts; it's quite a nice system really. I've also included a screenshot here of basically what you see: there's a little tick at the bottom for each of the stages that ran, you can see it ran for 10 minutes and which commit it relates to, and also, if you get failures, GitLab will very kindly email you, which is quite nice. You can also see the output of the commands which were run; this is stored so everyone who's been given permission can see it.
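The CI file itself isn't reproduced in this transcript, but a minimal .gitlab-ci.yml for this flow could look something like the sketch below. The godebos/debos image is the project's published Docker image; the runner tag is an assumption, standing in for however your setup selects a runner that exposes /dev/kvm:

    # .gitlab-ci.yml: build the image on every push
    build-image:
      image: godebos/debos        # the debos Docker image
      tags:
        - kvm                     # hypothetical tag for a runner exposing /dev/kvm
      script:
        - debos simple-ospack.yaml
      artifacts:
        paths:
          - simple-ospack.tar.gz

The artifacts section is what lets you download the finished tarball from the pipeline page afterwards.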
Next I'm going to come on to the actions that are available to you. The first and most important action really is debootstrap, which sets up a basic Debian system in the target. You can choose where the packages come from: Debian, Ubuntu, any Debian derivative really. You choose the suite, which we've already spoken about earlier, so that could be stable, unstable, sid, or any of the Ubuntu suites, whatever. You can choose the components which go into that, so main, contrib and non-free for Debian, but you could choose whatever components you like. You also have the variant parameter: if you don't set it you'll basically get a full Debian system with lots of small packages, but we tend to use minbase, because that really is the minimum system you could have, with just the essential packages and apt, so you can install more packages afterwards.

Then you have the apt action, where you can install packages and their dependencies; it just calls out to apt and handles dependencies the same way as calling apt on your own system. Then you have the pack and unpack actions: pack compresses the complete target filesystem to a tarball, and unpack uncompresses a filesystem tarball into your target. We use these actions so you can create an ospack which is used in multiple recipes; you can have one ospack which is included in three different image recipes, for instance. At the moment, gzip-compressed tarballs are the only supported compression type.

Which brings me on to the image-partition action, which allows you to actually create an image. There are lots of parameters here, as you can see, but basically it creates an image and the partition table inside that image, and formats all the filesystems. There are lots of different filesystem types to choose from: the standard ext2/3/4 and FAT32, and we've also got some interesting ones like Btrfs and F2FS. We welcome patches or ideas here if you want to add more filesystems. We also support the 'none' filesystem type, so you could basically just write your own stuff to that partition, with no filesystem formatted there. This image is then attached to the VM, and if you set up mount points, the mount points are then mounted inside the VM; later on, an fstab can be generated from this list of mount points, which is quite nice. You can only use the image-partition action once per recipe, and it just calls standard tools like parted, mkfs and fdisk under the hood to do the creation of the image.

Then we've got the filesystem-deploy action, which is usually used after the image-partition action, and which basically copies the whole filesystem onto the image. By default the root filesystem isn't stored on an image; it's stored in a tmpfs, 2 GB by default, and we use a tmpfs because it's a lot quicker than writing to disk. So usually you'd want to do all of your customisation to the image or filesystem in the tmpfs, and then right at the end deploy everything to the image, just because writing to disk is usually a lot slower than writing to memory. After you've run the filesystem-deploy action, everything is executed on the image. There are also some helper options here for setting up the kernel command line, appending to the kernel command line, and setting up the fstab, which creates the fstab file from the image-partition action's mount points. It uses the partition UUID for this, which is a bit primitive; in some cases you'd want to use the label rather than the UUID, but you can always overwrite the fstab yourself, and in these cases we let you set your own fstab. The kernel command line file contains the parameters which are passed to the boot loader; setting that up in scripts is basically an implementation detail.
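Pulling those two actions together, a recipe fragment that creates an image and deploys the filesystem onto it might look like this sketch (the sizes, names and single-partition layout are illustrative, not from the slides):

    - action: image-partition
      imagename: debian.img
      imagesize: 2GB
      partitiontype: gpt
      mountpoints:
        - mountpoint: /
          partition: root
      partitions:
        - name: root
          fs: ext4
          start: 0%
          end: 100%

    # copy the rootfs built in the tmpfs onto the image, generating
    # /etc/fstab and the kernel command line from the mountpoints above
    - action: filesystem-deploy
      setup-fstab: true
      setup-kernel-cmdline: true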
Next, a couple of actions for implementation details. If you've ever used Docker you'll know what an overlay does: the overlay action basically copies files recursively from your host system into the target filesystem. The source directory is relative to the recipe file, so you can include files from the same directory as your recipe file in the image. This is useful for including things like configuration files or little bits of scripts that you want in the image, and the permissions are preserved as well, so this is a really handy action for just copying stuff into the image.

We've got the raw action next, and the raw action basically writes an image to a partition, or to the image itself. This is used for installing boot loaders to the image, or copying pre-prepared images to a partition, and under the hood it just does something like dd to write the data into the partition.

We've got the run action: the run action basically allows scripts or commands to run inside the virtual machine. You can run these on the fake machine itself, or inside the target root filesystem, or they can be run after the virtual machine has been shut down. Your scripts must be executable, and they're resolved relative to the recipe, which again is quite nice because you can store these scripts in version control alongside the YAML recipe, so everything's all in one place; you haven't got to dig around in lots of different places to create your OS image. If the command fails, then debos will fail as well, with the standard output and standard error of the command that ran. This is quite nice because any failures that do occur get trapped, and if you pair this with something like GitLab CI you can easily get a ping over email when something fails, and then you can just go in and fix it at your leisure. Commands and scripts are mutually exclusive, so a run action runs either a command or a script.

So this brings me on to the more complicated side of things now: variables and scripting. In this example here we have three variables which are passed in from the command line, the architecture, the suite and the image name, and we also have some defaults, so this is basically just a quick way of showing you how the variables are used. In our third variable, the image name, we use the suite and the architecture to build up the file name of the image using printf, which is quite nice. These defaults can be overridden from the command line using the -t (template variable) option, and you can see at the bottom there are a couple of examples of how to change the architecture and suite from the command line.

Okay, so then we've also got if-else statements. You can see at the top we've defined architecture as a variable with a default of arm64, and we could pass that in through a template variable, and the example uses if-else to check whether the architecture is equal to a certain architecture and installs certain packages depending on which architecture you've chosen. This is the sort of simple scripting you can do; it may seem simple, but it's very, very effective. You could have variants of different image types, for instance a minimal image or a maximal image with lots of extra packages in, say for development; for your debug release you would install an SSH server, and for your production release you wouldn't. Things like that are very easy to set up using these if-else statements.

This also brings me on to the recipe action, which allows you to include recipes inside other recipes. This is quite powerful because it allows you to abstract things elsewhere, and you can also pass variables in here. So, for instance, you could use this for components: you could have a recipe component that installs LibreOffice as well as setting up all its configuration files, or you could have a recipe that installs the debug version of your application, and hide all the implementation detail of that in a separate file, which is quite nice.
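Since the slides aren't reproduced in this transcript, here's a small sketch of that templating style; the variable names, defaults and package choice are my own illustration:

    {{- $architecture := or .architecture "arm64" }}
    {{- $variant := or .variant "production" }}

    architecture: {{ $architecture }}

    actions:
      - action: debootstrap
        suite: buster
        mirror: https://deb.debian.org/debian
        variant: minbase

    {{ if eq $variant "debug" }}
      # debug images get an SSH server; production images don't
      - action: apt
        packages:
          - openssh-server
    {{ end }}

You'd then override the defaults from the command line with something like: debos -t architecture:amd64 -t variant:debug recipe.yaml.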
For some more fully featured examples: there's a basic Raspberry Pi 3 and 4 image which is a good starting point; it's basically one file and some scripts which create an image for your Raspberry Pi. Most people have got a Raspberry Pi, so if you're interested in making a completely custom Raspberry Pi image, I would look at this example. The Apertis examples are not for the faint of heart; they're quite in-depth. They create lots of different images for Raspberry Pis, and for other platforms as well, all from a common base, and there's a lot of scripting and if-else statements in there, so I'd recommend checking that out if you're interested in some quite heavy-going examples. Other than that, there are the projects I mentioned earlier in the presentation that are using debos, so check those out too.

The future plans we've got for debos include improving the documentation and adding some example recipe files, to hopefully get more people using the tool. At the moment it's really seen a lot of internal use, and not many people outside the company have really heard of debos, so really we want to create some more recipes and get more people using it; that's the biggest goal we've got by the end of this year. We want to add automated testing, to make sure that each push to the repository that we make, or any new patches that we get, don't break any old functionality. We're going to do this by adding some self-testing recipes that are run on a schedule as well as on every push to the repository.

We're adding user-mode Linux support, and this goes back to the original point I made about debos running with KVM. Our GitLab runners have been designed so that KVM is installed on them, but normally, if you use GitHub Actions or GitLab runners, they don't have access to KVM, so we want to add UML support here. User-mode Linux is a bit slower than KVM, but it works quite nicely for GitHub Actions and GitLab self-hosted runners. This is a bit of a game changer, because we can then build images on GitHub: the example recipes that we've created can be built and tested by anyone just by clicking the download button, and then people can build their own images on GitHub as well. So that would be a game changer, in my opinion.

We want to add some more useful actions: at the moment there's no real official way to install a Debian package from a file, so we want to add that, and some other useful actions. Again, the actions that we've got are fairly generic and should work in a lot of cases, but if you spot an action that you think would be quite useful, please feel free to open an issue on GitHub and we'll talk about it there. Next year we'd like to add support for Arch Linux and other distributions: that's basically running debos on Arch, as well as potentially creating Arch Linux images. There's a lot of interest inside Collabora in creating images for other operating systems in the same way that we do for Debian. And again, more examples and documentation. I think after that's all in place, we're ready to release version 1.1, and after that we want to fix all the bugs, because I'm sure there are some. We run debos at least a few times a night for Apertis, and we're picking up bugs as they come, but everyone always has their own way of doing things, so we're always open to fixing bug reports and discussing things on our issue list. So please open an issue on GitHub if you do find anything.
So then, with that, I'd like to say thank you for attending, and I'd like to take any questions. Collabora are also hiring, and there's a link on the screen now. Oh, and it's a shame that we didn't get to meet in Dublin this year; Dublin is one of my favourite cities, and this t-shirt I'm wearing is actually from Dublin. So really, with that, I hope you and all of your family are staying healthy and safe, and thank you again for listening.