We are just in time to start the workshop, so I think you can start the transmission right now. Okay. Welcome to the workshop, Debian in the modern pipeline. For this workshop we did a special setup on a LAN, because of some technical restrictions we ran into while preparing it, so we decided to do it on a LAN to be sure that everything, or at least most of the workshop, will work. For the people who will really do the workshop and not just follow me on the screen: you got a small ticket, and on that ticket you will find an IP address, a username, and a password. For the whole workshop we will use that username and password, so keep it with you. When the documentation asks you for one, please use that one; do not use another one, and do not create a new account, because if you do, it will not work and we will lose all the time we spent doing the setup. So please go there and follow the instructions on the small ticket. I think it would be a good idea to turn off the Wi-Fi, to avoid a network conflict; if things are working anyway, it is not mandatory. During this time people will start connecting, but using the LAN. For the people following over the Internet, I want to say that this is not a real domain; it only exists on our LAN, so if you go there you will get nothing. After the workshop we will see if we can find a way to make the whole repository public in another form. For now we have just the LAN setup, and basically the idea is that the people doing the workshop come to this website and click on "check in" before doing any other step. So do not create an account, just click on "check in", and we are here. There is a setup we require for this workshop. What you will read as "ticket" is the small paper ticket you got with the username and password.
So if everything went right, everyone in the workshop is on this page right now. Please follow the instructions; if you have any questions, just ask me or ask Jorge, we will be happy to help you. While people do that, I will explain. Basically we have a local GitLab installation, and people will do a small setup to get access to that GitLab, and also a basic setup: installing a runner, cloning some repositories, installing some software, and adding their SSH key to GitLab. That is, in general terms, what they will do. So first I will explain who we are. This workshop is something I did together with Jorge: Jorge Ernesto Guevara Cuenca, and me — my name is Freddy Pulido. We are both volunteers at Fundación Karisma; if you want more information, the URL is karisma.org.co. We volunteer as sysadmins for that organization, which is an EFF-like organization in our country — we are from Colombia — so it is like a smaller Electronic Frontier Foundation. We both also work as sysadmins and DevOps outside Karisma. We started using Debian in 2002, we are both members of HackBo, the hackerspace in Bogotá, and we are friends, but I live in Montreal and Jorge lives in Bogotá, Colombia. We want to say thanks to some people and organizations. First, Julian, who is there: at the last moment he came here and helped us. That was awesome, because we were having a lot of problems and he just came to help us at the last moment, so thank you very much for your time. We want to thank Karisma, because they gave us the goal of moving their infrastructure to DevOps. I will give more details in the next slides, but basically, when you have an idea and want to try a new technology, it is not the same if you only use your laptop: there is no real problem to solve, and you will not have a real product at the end.
So Karisma was a great place, and we wanted to change how things were working there, because we were having lots of problems when we started there as sysadmins. HackBo — I wrote that in Spanish, because that is how we say it in the hackerspace: it is "el espacio donde ocurre la magia", the place where the magic happens. Why HackBo? Because most of what we know about DevOps and all these new technologies is because we took time, on our holidays, to stay in the hackerspace doing workshops and learning. Basically, most of what we know, we learned in that space. We want to thank DebConf, because they chose our workshop, and that pushed us to finish the project we started at Karisma — or at least to move it forward, because it is not done yet. And we are so happy to share what we learned: we know that open source and free software are about sharing, so we are happy to share the knowledge we gained in all that time and with this experience. Personally I want to thank Savoir-faire Linux, the company I work for, because they gave me permission to attend the event. Sometimes there is a nice event but your company does not give you permission to go, so I want to thank my company, because they also gave me time to prepare the workshop. Finally, this is an overview of the technologies we are using in this project. Docker was there, and is not there anymore, because we did not finish the Docker pipeline for the workshop. We had built that pipeline before, but to use it in the workshop we would have had to integrate it with the new work and adapt everything we did before, so I want to say sorry for that. But I will explain: in a general way, we have Debian at the center of the equation, because we love Debian, so we try to use Debian as much as we can. I think the next one is Proxmox.
That is the machine we will use to host all the VMs. We are using different technologies: KVM, which is the virtualization technology Proxmox uses for virtual machines, and also Vagrant with VirtualBox, to do tests on our laptops. We are using Packer, the tool that helps us build images — I will give you more details later. Basically, we will build Debian images for KVM and for VirtualBox in one shot using Packer. We are using Ansible for provisioning and config file management, and for automated tests we are using ServerSpec. And to put everything in just one place, to have a pipeline and a repository, we are using GitLab, with GitLab CI. For people who don't know it: GitLab CI does the same thing Jenkins does, but has everything inside GitLab, so it is easier to get started. If you want to try it, you can do that from the gitlab.com website. So, has everyone finished the workstation setup? Nope. Then we will just wait a moment. Meanwhile — do the people here have any questions, or any particular expectations for this session? I think it would be nice to know: can you tell me who is a sysadmin, who is a developer? Okay. No? It is just nice to know. Yeah, just follow the instructions one by one. For the people doing the workshop, just continue following the instructions. In the meantime I will talk about the situation at Fundación Karisma; you just keep following the instructions. The idea is that the people listening and watching us from the Internet, and the people here, have something to listen to while you are doing the setup. So I think the first question we are supposed to answer is: what, and why. Basically, as I told you before, we volunteer with a small organization in our country that is a kind of Electronic Frontier Foundation. Our idea is to take control over the whole life cycle of our web apps.
In this case, the web apps are campaigns these people run, for example related to privacy. What we want is infrastructure as code, continuous integration, and continuous delivery, because we want self-documented, automated infrastructure, and we want the apps' source code under our control. I will explain. The first problem we want to solve is that the infrastructure we currently have has poor documentation. We inherited everything working from the previous admins, but we have no idea what they did before, and every time we have a problem or a new project, we have to go read all the config files and try to figure out what is happening. For example, when I started as a sysadmin, we got a problem with the email, and it took me some time to realize it was not just one server: it was a Proxmox server with many virtual machines inside, and one of the virtual machines was Zimbra, acting as the mail server. We also had a reverse proxy and another machine in there, but basically we had no documentation about this setup, just a general network map. The other problem we want to solve is that we had trouble with the people doing web development for Karisma. What was happening was that they were paying people to build websites for campaigns, in most cases with WordPress, and the provider gave the organization the final WordPress site, ready to go. We had problems during the process because of compatibility, and because it was not possible to reproduce what the developers did — it was basically chaos. So we are changing that: starting now, we will have apps built not on WordPress but on frameworks, and the output will be a website that is deployed in an automated way onto the infrastructure. And we will build trusted infrastructure. I'm sorry — English is my third language — I hope that is clear enough.
Most people will ask why we decided to build our own Debian images, when you can get a Debian image that other people built from the Internet. Basically, it is about trust. We need to be coherent with our values and with what we advocate for. We advocate for privacy; we tell the government that what they are doing, or the way they are doing it, is not good. We have to be capable of doing it right ourselves, because if we don't, we are not coherent. And also because we could be a target of attacks, since this is an organization that advocates for freedom and privacy. So it is very important to have infrastructure you can trust. There is a well-known slogan on the Internet about that: there is no cloud, it's just someone else's computer. I know there is a lot of discussion about whether you should use and trust the cloud, but in our case, because of the kind of organization we are, we decided not to trust the cloud, to have our own dedicated server, to build our own Debian images, and to integrate everything using a pipeline — and that is what the people in the workshop will do. Many people may also ask why we use just one dedicated server: it is the only dedicated server we have at Karisma. That is why we do not have a separate server for staging. Now, this is the overview, to show everyone what the workshop participants will do. The first technology is called Packer. What Packer does is build virtual machine images and deploy them to providers. The idea is that you can centralize your base images in one place, and from that place build your base images and deploy your VMs wherever you prefer: Amazon if you use Amazon, Google if you use Google — but in our case, Proxmox on a dedicated server.
Basically, the input Packer takes is a Debian JSON template for Packer, plus the preseed config file from Debian. Does everyone here know preseed? Is there someone who does not know preseed? Okay. Preseed is the system Debian has to automate installations. If you want a setup that starts and installs Debian, and you want to supply all the parameters you usually type into the Debian installer, preseed is the Debian technology that helps you do that. Basically it is a config file where you write the answers to the questions the Debian installer asks during installation. That way, the installation is automated and does not require human intervention. The other input is the Debian ISO image, and you have to provide its checksum to be sure you are using the right one. In our case, the output will be what we call bkos — base Karisma operating system: a box for Vagrant, and also a qcow2 KVM virtual machine image. You are supposed to be able to build all the virtual machines in parallel with Packer, but in our case it is not possible, because we are using just our laptops, and the virtualization technology is tied to the processor: if one technology is using the processor's virtualization extensions, the other one cannot work at the same time. That is why we are not using parallel builds in Packer; we build serially, so it takes some time to get a working, functional image. Okay, we chose Ansible for provisioning. The input for Ansible depends on the context: locally, for development, we will use the Vagrant box, but the right one — the one we will use on Proxmox — is the qcow2 KVM image.
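To make the preseed input concrete, here is a small illustrative fragment — not the workshop's actual file, just typical directives with placeholder values — answering some of the installer's questions:

```
# Illustrative preseed fragment (placeholder values, not the workshop's file)
d-i debian-installer/locale string en_US.UTF-8
d-i netcfg/choose_interface select auto
d-i mirror/http/hostname string deb.debian.org
d-i mirror/http/directory string /debian
d-i passwd/username string robot
d-i passwd/user-password password changeme
d-i passwd/user-password-again password changeme
d-i time/zone string America/Bogota
d-i partman-auto/method string lvm
d-i pkgsel/include string openssh-server sudo ntp
```

During a Packer build, a file like this is typically served over HTTP and the installer is pointed at it from the boot command, so no question ever reaches a human.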
So basically, depending on what you are doing: if you are doing provisioning tests, you are supposed to do that directly on your laptop using Vagrant. You have one box image that is the same as the qcow2 image you will have in production; you run Ansible there, and afterwards, when it works, you can move it into production. To be honest, in our case, to finish this, we are working directly on "production" — which is not really production, because this is just a workshop. The other input for Ansible is the playbook, and what Ansible gives us as output is a virtual machine provisioned on Proxmox using the Proxmox API. We are using ServerSpec for automated tests. Most people doing DevOps today are not doing automated tests, but it is something Jorge in particular has worked on a lot, because we think it is really important to have automated tests before deploying something. The technology we are using for that is ServerSpec: the input is a test file, and the output is the test report. If you do it the right way, you are supposed to start by writing the test file, because from the beginning you are supposed to know what you will have at the end. So, done properly, you start by taking the requirements and writing your test file; then you do your work, and when you build, the test file verifies whether everything you wanted to have at the end is there or not. And finally, we are using GitLab CI with its runner for our pipelines. The input can be a push or a merge in the Git repository, provided you previously registered a runner — it can be on your own local machine, or on the server if you want to push to production.
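Registering a runner is a one-line command. Here is a sketch — the URL, token, description, and tag are placeholders, not the workshop's real values; the tag is what matters, because jobs in the CI file use it to pick this machine:

```shell
# Register a GitLab runner on your laptop (placeholder URL/token/tag).
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.lan/" \
  --registration-token "PLACEHOLDER_TOKEN" \
  --executor shell \
  --description "workshop laptop" \
  --tag-list "user01"
```

With the shell executor, jobs run directly on the registered machine, which is why each participant builds on their own laptop.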
That triggers the pipeline, and the pipeline is basically the way you take every step we mentioned before and put it all together, to get things deployed where you want to deploy them: we build, and we deploy. So at the end, what you have is: you push, you go into the GitLab interface, and you watch the build there; if you did it right and the automated tests pass, the build is deployed to the staging or production environment, depending on how you set it up. If the automated tests fail, there is no deploy. But you have to make that explicit: if you do not write the small YAML script that gives GitLab the instruction to go or not go to deployment, it will not figure that out by itself, because it cannot know in an automated way which technology you are using for tests. So, we will check how the workshop is going for the people actually doing it here. Has everyone finished the check-in? I don't know if we have a way to take questions, by Twitter or something like that, or by IRC. You are actually on IRC — so if someone is watching this workshop and has questions, they can ask in the DebConf channel. We have a specific room? Okay, I'm sorry, thank you very much. So we have a specific channel on IRC, named after the room we are in right now: debconf17, dash, and the room name. [Jorge types the channel name into Firefox. "One or two?" "One, I think it's one."] We'll just take a tour of the tables to check how it is going. [Helping participants: "Jorge, can you help him change the YAML?" "Does anyone still need the key?" "No, no, I've added the key." Okay.]
Okay. So basically, at this point in the workshop, with the whole setup done, they are ready to start doing the build. Just give me a second. Jorge will take over now. [Jorge:] We are all on a new branch, with the main user. The idea is that GitLab CI takes its instructions from this file, .gitlab-ci.yml, and those instructions are executed by the CI. I'll explain that, so you understand the overview of the CI and how the whole project is integrated. [Freddy:] What Jorge is talking about is what I described when I talked about all the technologies and their inputs and outputs. As I said before, GitLab is the one that integrates everything and does it step by step — that is the pipeline. It uses this config file: the runner, the agent that runs on the server for production, or on your laptop when you are doing tests, takes this file and executes everything we have in it. For our workshop it is very important to change the tag, because when you use GitLab you can have many runners in a project. The idea is that each person uses their own laptop for the build, so you have to change the tag to your username: the one you got on the ticket, the one you used to register the GitLab runner, and the one you used to log in to GitLab. Okay? That way, when the pipeline starts and has to send the job to a runner on some laptop, GitLab first decides where to send it: you put that tag, and if the only registered runner with that tag is your laptop, it runs on your laptop.
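The tag lives on each job in .gitlab-ci.yml. As a hedged sketch of how the three jobs reference it — job names, scripts, paths, and the tag itself are placeholders, not the workshop's exact file:

```yaml
# Sketch of a three-stage pipeline (placeholder names, paths, and tag)
stages: [build, test, deploy]

packer:
  stage: build
  tags: [user01]        # route the job to your own registered runner
  script:
    - packer build debian.json
  cache:
    paths:
      - box/            # share the built box with the later jobs

serverspec:
  stage: test
  tags: [user01]
  script:
    - bundle install
    - bundle exec rake spec
  cache:
    paths:
      - box/

deploy_artifacts:
  stage: deploy
  tags: [user01]
  script:
    - ansible-playbook deploy.yml
```

Changing `user01` to the username on your ticket, in all three jobs, is exactly the tag change being described.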
If you use a tag that does not match any runner — any computer running the runner agent — you get no build, because there is no place to do the build, no computer. And if you use another person's tag, your build runs on the workstation of the person beside you, or you get no build at all because nobody is using that tag. Okay? [A participant used the tag freddie.] The freddie tag is something different. The idea was to use sed — yeah, thanks for pointing that out. Please don't use the sed command; the idea was to make it easier, but just change the tag directly; there are three places where you have to change it. freddie is the runner I am running on my own laptop, so do not use it, because otherwise everyone will push their builds to my laptop and you will kill it. — Your laptop looks powerful. — Yeah, it is powerful, but if you send one, two, three, four, five pipelines, it will not kill the computer, it will just be really slow: you have to wait for each run to finish before the next one starts. Yeah. And basically the idea is that you test that it works on your own laptop.
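The sed shortcut Freddy mentions would have looked something like this — shown only as an illustration, with an invented file and placeholder tags, of why editing the three tag occurrences by hand was the safer choice in the room:

```shell
# Create a toy CI file and replace every occurrence of the placeholder
# tag with your own username, as the sed shortcut would have done.
printf 'build:\n  tags: [CHANGEME]\ntest:\n  tags: [CHANGEME]\ndeploy:\n  tags: [CHANGEME]\n' > ci.yml
sed -i 's/CHANGEME/user01/g' ci.yml
grep -c 'user01' ci.yml   # prints 3: all three occurrences changed
```

The risk in a workshop is that a global substitution silently changes the wrong thing (or someone pastes the wrong tag), which is harder to spot than three manual edits.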
So, for the people listening, while the pipeline runs I want to say: everything we are doing here is a setup for a workshop; it is not ready for production. The idea is that the people doing the workshop are learning how to do this, but when you go to production you have to change a lot of things, because of security restrictions, and because you also have to fit this into the infrastructure you already have: in most cases you will not have the opportunity to start from scratch. Usually, when you are a sysadmin, there is already something there, and you go in to modify and change what is in place. So go ahead, Jorge, I will hand it over. [Jorge:] Well, basically we have a way to order the execution of the pipeline: we have three stages — build, test, and deploy. In this file we define jobs, and jobs in the same stage run in parallel. What does that mean? We have three jobs in this file: packer, serverspec, and deploy-artifacts. packer is in the build stage, and at this point we build the images — the qcow2 among them. So there you have the name of the job, the stage that job belongs to, the instructions — which can be a script — and a cache; I'll explain cache in a moment. We can have more jobs per stage: if we have two jobs, or any number of jobs, in the same stage, they are executed in parallel. But in this case we need the jobs to run one after another, so we have only one job per stage. This first job, as Freddy was saying, builds the images one by one; the first one is the QEMU image. If you check the README of this repository, the same instructions are there; in the README you can find a single instruction to build both images, but here we split the execution. This build produces images; the Vagrant box built by the line I am showing here ends up in the box directory, and then we cache that directory. Cache is the method for sharing files from job to job: if we don't cache, then the next job cannot find the image.
The next job uses that box to run the tests. So the next one, serverspec, runs the tests, and it is the same idea — ServerSpec is written in Ruby, so we use gems, we use Bundler, and the tests run against the VirtualBox machine we just created. Finally, we have the deploy-artifacts job; this is the third stage, deploy. And that is an important thing: from job to job, across stages, the next job will not run if the previous job did not finish well. That is the idea in this example, and in general: if the tests do not pass, you cannot deploy the build. The build may have been successful, but if the tests do not run well, you don't deploy — it is not enough to build well, you need to pass the tests. Okay — finally, this one would fail: we need to create a directory on the Proxmox server to put the qcow2 image created with QEMU, and then copy the image; but without the cache here, that file would not exist. We'll do the failure example later. After that, we create a virtual machine template on the Proxmox hypervisor, and finally we create a virtual machine from that template. Okay, that is the overview; now we can check the details in the files in the repository. Let's go: the first thing is the Packer build. We have a JSON file where we define the images to build, debian.json. [That does not run the build yet — did you push the repo? — I pushed the repo. — Okay, where did I go too fast? — No problem, I want to explain a little more of what we are doing before we build. — Okay, no problem, thank you.] Well, we have basically three sections here: the first is builders, the second is provisioners, and the third is post-processors. [Question: user, disk_size — where do those point to?] Those are variables: we define variables just for easy maintenance of the file. We define the disk size, the name, and the ISO checksum and checksum type — we need this checksum for the ISO image.
user-fullname is for a user that will be created in the image, along with a password; there is a script for the virtual machine, and a version. Then we have, first, the type of the builder, with the parameters we defined in the variables. And, for me, the most important part is this boot_command: it is how Packer runs the installer for the installation. That syntax — those are just debian-installer commands: install, auto, vga, priority, interface, url, passwd, user-fullname — all of those are parameters for the debian-installer. And this last one, vm_name, is the name of the image Packer will create on the hard disk. After that we have a VirtualBox builder, and it is the same idea: we have qemu and virtualbox — qemu for the virtual machine on the Proxmox server, and virtualbox for the box we do the tests with. The boot command is the same, just in another builder. After that, the second section, provisioners: here we have a script to set up a network interface card — scripts/interfaces, it is a network interface — and finally another script to provision the Vagrant machine so we can use it in the usual Vagrant way; a lot of commands in there. Finally, we have a post-processor, vagrant, and this builds the Vagrant box. The idea is that each section uses the previous section's artifact: since we create, here in the virtualbox-iso builder, a virtual machine for VirtualBox, this post-processor uses it to create the Vagrant box. I think that is the main idea. Any questions — are we clear up to here? Okay, let's continue. [Question: which part of GitLab is not open?] I don't know about that; as far as I know it is all open. — I don't know, but the Enterprise Edition is the Community Edition with support, that is the Enterprise Edition. — Are you sure? My company is thinking about getting the Enterprise Edition because it has additional things; I don't know, that's why there was the earlier discussion. Do you know which part?
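For readers following along, the shape of such a Packer template — builders, provisioners, post-processors — can be sketched roughly like this, with invented names and values; the real template has more parameters than shown here:

```json
{
  "variables": {
    "iso_url": "debian-netinst.iso",
    "iso_checksum": "PLACEHOLDER_SHA256",
    "disk_size": "5000"
  },
  "builders": [
    {
      "type": "qemu",
      "iso_url": "{{user `iso_url`}}",
      "iso_checksum": "{{user `iso_checksum`}}",
      "iso_checksum_type": "sha256",
      "disk_size": "{{user `disk_size`}}",
      "http_directory": "http",
      "boot_command": [
        "<esc><wait>install auto priority=critical ",
        "url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg<enter>"
      ],
      "vm_name": "bkos.qcow2"
    }
  ],
  "provisioners": [
    { "type": "shell", "script": "scripts/interfaces.sh" },
    { "type": "shell", "script": "scripts/vagrant.sh" }
  ],
  "post-processors": [
    { "type": "vagrant" }
  ]
}
```

A virtualbox-iso builder with the same boot_command would sit alongside the qemu one; the `http_directory` is what serves the preseed file to the installer during boot.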
[Follow-up question: is this GitLab CI like an add-on that you add to your standard GitLab installation, or is it all built in?] It is all built in. Okay — the next part is ready for the tests, but really, before that: if you don't have this configuration, you need to create the environment for the tests. You can do this with ServerSpec, with something like serverspec-init, and it will ask you for some information: whether your environment is UNIX or Windows, and whether you are using Vagrant or just an SSH service to connect to a server to do the tests. We can see that already done here. Inside the spec directory we have another directory: the idea is that if you have more machines to test, you will find a directory named after the domain name or IP address of each machine, and inside each directory are the tests for that machine. Vagrant creates the machine with the name "default" by default, so we have a "default" directory, and in it we have the ServerSpec tests. Do you know RSpec, in Ruby? No? It is a test suite; ServerSpec is the same idea, written in Ruby, but for servers and machines, so you can test them. In a test you need to describe the type of test — this one is for a file — then what you will test, and the test itself: the content of this file should match that string. On the same file we are doing two tests. We have another test of type command, which executes apt-get update; we capture stdout and match the output. Why are we testing for a match on deb.debian.org, and likewise on security? Those are the mirrors we are using: when we set up the preseed file we used these mirrors, so of course they should match. And the exit status should equal zero — that means the command finished okay. In that order of ideas, let me show you the preseed. Okay, this is the preseed file, and the first configuration is this one, the locale — for our country you would expect es_CO, but we are using en_US. Then we have the default configurations; I'll show you just the settings we changed.
The other ones are defaults — there is a template, which you can find in the installation manual, and the defaults don't have this configuration, so we set this mirror. It is not exactly a mirror, it is a CDN: it is the way Debian decided to make it easier to give you a mirror near your location. Basically it sits in front of the mirrors; when you go there it redirects you, and you get the mirror that is closest to you. Okay. Then: the default user we create in the JSON file for Packer will be in the sudo group; the time zone; we want to use LVM with the multi-partition option — this is not default; we are using non-free; our country; and of course security: openssh-server, sudo, ntp, fail2ban. And that is all. So if you look at the tests, these tests are for exactly those configurations: locales, mirrors, the user should be in the sudo group, the time zone, the packages. Now, this is ServerSpec, the tests for doing the testing: first we build the VM using the preseed config file, and then, before deploying, we check that everything we asked the Debian installer to do is really there. That is important because, by doing that, we can be sure, for example, that all the virtual machines we use have the same time zone — many times you deploy a virtual machine and by default it uses your local time, and when you have an incident and need to write a report, you end up using your local time, not the data center time or the US time. Same for the other things: for example, we can be sure that fail2ban will be there by default — nothing goes to production without fail2ban — and we can be sure that every virtual machine we build will have LVM. And that is really important, because we use this image as a base: if a special requirement comes later — when someone wants to deploy an application, they write to us and say how much space they need, and where.
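The kind of spec file Jorge walks through looks roughly like this — a hedged sketch in ServerSpec's Ruby DSL with illustrative expectations mirroring the preseed choices, not the repository's exact tests (the username is a placeholder):

```ruby
# Sketch of ServerSpec tests checking the preseed choices (illustrative)
require 'serverspec'
set :backend, :exec   # in the workshop the tests target the Vagrant box

describe file('/etc/apt/sources.list') do
  it { should be_file }
  its(:content) { should match /deb\.debian\.org/ }
  its(:content) { should match /security\.debian\.org/ }
end

describe command('apt-get update') do
  its(:exit_status) { should eq 0 }
end

describe user('robot') do              # placeholder username
  it { should belong_to_group 'sudo' }
end

describe package('openssh-server') do
  it { should be_installed }
end
```

Each `describe` block names a resource type (file, command, user, package), and the expectations inside assert what the preseed was supposed to have produced.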
That is what we will do later, using Ansible, in this workshop: we will create a new virtual hard drive and use LVM to give the virtual machine the space the application running there needs. For example, we will run Docker on that machine, so we will put most of the space — extend the partition — in the specific place where Docker puts its files. [Participant: it is still waiting for the machine to be done.] It's building; that's normal at this point, but 16 minutes is, I think, a long time. Which processor do you have? How much memory? It will depend on that. It's working now. Well then, we are clear on the tests, right? At the same time, the build time depends on the machine, because the runner runs on your machine. We will take just five minutes more, or less. [Question: so you trust ServerSpec? I don't know how it checks that NTP is running — does it ever happen that the spec says NTP is running, but when you connect, NTP is not running?] It is right most of the time; the result of ServerSpec is the same as checking by hand. In fact, you can add more tests to your pipeline before going to production and do that deeper type of test. By the way, if you have the box, you can also start it with Vagrant on your laptop and run the tests one by one, to check that they really work. Okay, well, the next step in the pipeline is the deploy. In this case, the deploy is: create a template from the qcow2 image, and build a virtual machine from it. That part is written with Ansible, so let's look at the Ansible. There is a specific configuration, but you need a hosts file, and in the hosts file you have the servers where the configuration will be applied, right? We have localhost, if you need to run something locally on your machine; pve, whose ansible_host is our hypervisor for the workshop; and docker-staging-v1, which is the new virtual machine that will be created. We have just those three, and this file is called the inventory.
In Ansible we have an inventory; this is our inventory, and you can set other options for each host. Then we have this directory, host_vars, and per server we have a vars file: the user Ansible will use, how Ansible will connect, and so on. It is the same configuration for localhost... no, I'm lying: ansible_connection=local, which means that Ansible will not use SSH to connect to localhost; it will just execute the commands directly. For the workshop, this is all our configuration.

Then we have the roles. Basically there are many ways to organize the configurations that we want, but we use only roles; there are more options than roles, but this works for the example. The first one: there is a module to use the Proxmox API from Ansible, but we decided to use the commands, calling the API from the command line, just as a pedagogical exercise. We want to get to know the API, so we use the API directly. The first task is: if you have no machine, create a virtual machine (that makes the code easier to understand), and then create the template from it. So basically from the virtual machine we will create a template. I think it's important to say that a template is a kind of read-only virtual machine that will never run; you use it to create new machines from there. So basically you clone it to make a new one, and that clone is the one that Ansible will configure afterwards.

OK, to continue: that is our role to create a virtual machine. Then, create a virtual machine from the template: easy, you just clone the template that is on Proxmox. The machine we created with Packer only has 5 GB of disk, and since we need more, we can create another disk, add it to LVM, and then resize. Resizing the virtual machine means the new hard drive with more space; we also define the number of cores and the amount of memory on the clone, and finally we run the virtual machine. And that is all.
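Since the workshop deliberately drives the Proxmox API from the command line instead of using the Ansible module, the deploy steps above might look roughly like this with the qm tool. VM ids, names and the image path are illustrative, and option syntax can vary between Proxmox versions.

```shell
#!/bin/sh
# Hedged sketch of the deploy steps the Ansible role runs against Proxmox.
# Ids (9000, 101), names and paths are placeholders; run on the Proxmox host.
set -e

# 1. Create an empty VM and import the QEMU image that Packer produced
qm create 9000 --name debian-base --memory 1024 --net0 virtio,bridge=vmbr0
qm importdisk 9000 debian-base.qcow2 local-lvm
qm set 9000 --scsi0 local-lvm:vm-9000-disk-0 --boot order=scsi0

# 2. Turn it into a template: a read-only VM that never runs,
#    used only as the source for clones
qm template 9000

# 3. Create the real machine by cloning the template, size it, start it
qm clone 9000 101 --name docker-staging-v1 --full
qm set 101 --cores 2 --memory 2048
qm start 101
```

Step 2 is done once; only step 3 is repeated each time a new machine is needed, which is exactly the point made later in the Q&A.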
The last role we are not using at this moment, but it is for creating users. For this we create a group called robots, which is in sudo; then we create a robot user that is in the robots group, so it has sudo. This is our configuration. It is important to say that at this point we use the password from the build to connect to that virtual machine. So basically, in the first steps there is also some security work, like disabling password login for SSH, but the first thing we need, to be able to use Ansible directly on that machine and not with a password, is a user that can use sudo. The name we decided to use at Carisma for everything related to machine automation is robot; that's why. After that we deploy the SSH key, and from then on we will not use the password to get into the virtual machine. Finally we change the password for the user created by Packer, the one that is here in the file; we use a hash to change the password. But we are not using this role right now.

[Question] What I don't understand is that in this step you create a template each time, and then you create a virtual machine from this template each time. [Answer] For the workshop we do that, but really you need to create the template once and stop at that point; if you need a new virtual machine, then you use that template for the new virtual machine.

Here is what we want; we are using that in this file. This is a playbook. It is composed of two parts: hosts and tasks. We have the tasks, and we have the hosts, that is, the servers the play will be applied to. The target uses the role to create a template, with the memory and the file name for the image. Basically we will connect to the Proxmox machine and create a template using the values we have there. At this point we are creating the template, sorry, and it is supposed to become a Docker host.
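The last part of the user role, changing the build password by passing a hash, can be sketched like this. The user name and password are placeholders, not the workshop's values.

```shell
#!/bin/sh
# Hedged sketch: generate a SHA-512 crypt hash for the new password.
# chpasswd -e expects pre-encrypted (hashed) passwords, so the clear-text
# password never has to appear on the target machine.
set -e

NEWPASS='s3cret-placeholder'               # placeholder, not a real secret
HASH="$(openssl passwd -6 "$NEWPASS")"     # SHA-512 crypt hash ($6$...)
echo "robot hash: $HASH"

# On the target VM (as root) the role would then run something like:
# echo "robot:${HASH}" | chpasswd -e
```

Keeping only the hash in the automation is what lets the password set by Packer be rotated safely once the SSH key is in place.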
Something important to say about the Docker host: when you are doing infrastructure as code and you are also doing CI/CD for your organization, you will have one pipeline for the infrastructure, and you will have another project with another pipeline for the application. So basically we did the first part, and after that comes the point when someone arrives with some requirements: if they require it, you deploy a new virtual machine, if not you don't, and then you run another pipeline, one you build just for the new application, and that pipeline will create containers on the Docker host you put in its configuration. So basically we are talking about the pipeline for the infrastructure as the first step, and after that you make another project, with the application and with the pipeline for the application. That gives you the infrastructure and the application, and that is all.

Because we needed to wait for the box to build: our original idea was to use the Vagrant provider with Packer, but in Packer it basically was not working, so we did a workaround for that. We also found a bug in Vagrant: the boxes we built have the Vagrantfile inside, but when you run vagrant up it does not take that file, so we are just putting the file from the repository in the right place. We found also that the Vagrant package for Debian is not maintained, or rather it is a very old version; if someone wants to take this package... someone already adopted it but never really worked on it, so it is a package asking for help. [Question] And who is the user? Are you a user?
[Discussion] Then you need to add the developer user. We also found that when you start the box, it proceeds and takes the hostname, and that is a very old problem, certainly an old one; I think this time we will take the time to check if there is a bug report, and file one if there is not. Because the Vagrant version packaged in Debian is from September 2016, so it is actually not that old. [Comment] I ran your tutorial with this package version, more or less. The problem was VirtualBox, which is not in the archive, so it was not installed; it was not in the instructions to install it.

[Question] What was interesting or hard about building the pipeline? [Answer] It is hard because every iteration you want to verify takes time, especially when you are building images: if you fail, you have to build a full Debian again. But once you have done it, you have a usable template in Proxmox; you just clone that and you can be sure it will work. It was also hard to decide what the key part was: we were thinking we would do it one way, then saw it was not automated enough, so we changed the approach, I don't know how many times.

[Question] Another question: why don't you use the Ansible package in Debian? Because you say to install Ansible from pip; on Debian we should use Debian packages, except if you have a very good reason for that. It looks bad. [Answer] We know that that version works, because we are not using the packaged one, we are using pip, which gives a much more recent version. At that moment we took that decision; I was using it at my job, so we could have the same version everywhere.

Certainly, talking about the whole process, the easier way is if you use the same technology in the whole process. We are using VirtualBox to have the opportunity to use this also outside Linux, because the idea is that you have, all the time, the option to use one of your
machines: you can test on your laptop, and you can move the same thing to production. And if you do not use Proxmox, use KVM; I think it will be easier, and you will always be working with the same artifact, so that what you did here is exactly the same there, even if you are using another virtualization technology. That is also why we are using Packer: certainly the easier way is to take KVM and just run Packer with KVM. Basically that's all.

Right now it is the end of the workshop. If you want to talk more in a more informal way, we are going to the Linux cocktail, so we can have some drinks if you have an informal question. Thank you very much. We know it was hard to follow over the Internet, and we know it was hard to follow even here, because it was hard. Anyway, we are happy to share this knowledge with you. Good evening, and thank you.