Okay, so my name is Randy Witt, I'm from the Intel Open Source Technology Center. The talk has a super long title: cross-platform enablement for the Yocto Project with containers. Cross-platform not just meaning Windows and Mac, but across Linux distros and everything else. So let's get started by talking a little bit about my personal problems. I have a lot of them, but with regard to this, it's what got me started working on this in the first place. It wasn't the idea of "oh, I'm gonna create this thing that people are gonna go consume." It was: these things are annoying me, what can I do to make them stop annoying me? The first problem was the multiple-distro problem. For instance, in the Yocto Project we do QA on an autobuilder, and we're building across multiple distros to make sure that when somebody else does that, the builds work. So when one of those builds fails, and it looks like it's only failing on a particular distro, my options at the time were to SSH into the autobuilder, clone the repo again, set up a build directory, and then run the build on that builder. That's a pain for me. I don't like that, because I already have a workspace set up on my workstation where I'm doing all my work. The second way would be to create a virtual machine for each of the N distros you're testing against, and either start one when you need it or leave them all running, which is almost impossible unless you have a ton of RAM, and try it there. I didn't like those options; there's a lot of overhead for me. So that was the first real problem driving this. Another one was that BitBake sometimes leaks processes. If you're doing a build and you Ctrl-C to kill it, it will try to clean everything up, but it doesn't always manage to, right?
Sometimes it looks like it doesn't respond at all, so you go kill BitBake, and then all the processes it spawned are left lying around. Then you have to manually find them and kill them, if you can even tell what they are; or you might not notice it happened at all, and that's not good either. That really bothered me. So those are the two main problems. I don't have a slide for the fact that running Toaster was a pain for me as well, with a lot of setup, but that was another one, because there's a Toaster container I'm gonna talk about. So this is about containers, and particularly Docker, as a solution. That's what I used. It's not that you have to use Docker, but that's what I did, because it made my life easy; that's what all this was about. I'm gonna try to do a quick overview of containers and also Docker, but time is limited and there are lots of demos, so it's a very quick overview and I wish I had more time. The first thing I'd say is that containers aren't magic, because I've talked to a lot of people about this. They're based on Linux kernel features that were already there; people started using them, and this container concept came out of that. It's not that the kernel features aren't awesome and amazing; it's that containers are just leveraging these great things about the Linux kernel. So what enables containers as we know them right now is the Linux kernel, and we're at the Embedded Linux Conference, so "Linux is great" is basically what this slide is saying. There are two main concepts in use. One is namespaces, which are used for isolation. I listed three here, but there are actually more namespaces than this: PID, network, and mount are listed here, and those are the common ones you'd be thinking about if you were using a container.
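As a concrete sketch of that isolation, here's roughly how you could enter a new PID namespace by hand with util-linux's unshare tool. This needs root, and the exact flags come from my reading of the unshare man page rather than from the talk itself:

```shell
# Enter a fresh PID namespace (needs root). --fork makes the child
# process PID 1 of the new namespace; --mount-proc remounts /proc
# so tools like ps only see processes inside the namespace.
sudo unshare --fork --pid --mount-proc bash -c '
  sleep 5000 &    # background a long-running process
  ps -o pid,comm  # it shows up with a small, namespace-local PID
'
```

Run the same ps from a normal shell and the sleep shows up under its real, host-wide PID, which is the exact effect demonstrated on the next slide.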
There's a command called unshare that you can install on Linux, and you can switch to a different namespace using it. In the simplest terms, running unshare and switching to a different namespace for processes or network or whatever, you're essentially getting a container at that point. But most of the time, tools like Docker also use cgroups, for process encapsulation and even dynamic resource management, like "I only wanna run on these two CPUs" or "I wanna run with X amount of RAM." That's just another kernel feature. systemd uses it too; everything that runs under systemd runs in a cgroup, and containers are just leveraging that as well. The cgroups part is what fixes the BitBake-leaking-processes problem: if you kill everything in a cgroup, everything in that cgroup goes away, so I don't have to worry about anything escaping. The last bullet is just saying that whatever tooling you use to run containers is going to be using these Linux kernel features. I put this slide in here to make containers seem less intimidating, like somebody just invented something out of the blue; you can build them up from the ground using nothing but Linux kernel features. And I threw in this quick slide as an example of a PID namespace. In the top part, where it says "inside container," that's a shell running in a container. I ran "sleep 5000", backgrounded it, and then ran ps to show the details of that process. In the top you can see that the PID is 32. Outside of the container, I ran the exact same ps command to show me that same process.
But there you see that the PID is 8257. It's the exact same process, but as far as the container knows, sleep 5000 is PID 32. That's what containing the process space means; it's just using a PID namespace from the kernel to do it. So the other thing is Docker specifically. One of the nice things about Docker is that even if I don't talk about something, there's tons of documentation on it, loads of it on the internet, so you can probably find the answer to any basic question on Stack Overflow, because somebody's answered it. The simple idea is that there's a file called a Dockerfile that contains commands saying how to build an image: install some things, run some particular commands. You write this Dockerfile, and then you build an image from it. When you run a container, it's a temporary instance of that image, basically a snapshot. Any changes you make to the filesystem in the container go away when the container exits; they're ephemeral. You can commit those changes back, but you have to take extra action to do that. So usually what you'll do to get around that is bind-mount something in where you're actually going to put output. Here's a sample Dockerfile, super simple, and you don't have to understand it completely; like I said, there's a lot of documentation about this. The first line just says I wanna base this on an image that already exists, in this case Ubuntu 16.04, which means I can use apt-get and everything else I would normally get in Ubuntu 16.04. The next command just says: go run this to install Python.
Installing Python into the image makes sure that when I run an instance of this image, it's gonna have Python in it, so I can use it. And the last line is just what command actually runs when I say: go start the container. In this particular case, even though I installed Python, which is useless for this, if I ran the container all it's going to do is say "hello from inside the container" and then exit. A very simple container. The way you run a container is with the docker run command, and this is basically the format that every single docker run in this presentation will follow. The first two arguments are --rm and -it, and for the most part you can just ignore them. What's happening is that when docker run starts a container, Docker normally leaves the container around after it exits; --rm removes it instead. There's no reason to keep it if you're running a container, doing something, leaving, and never planning to start that exact same instance of the image again. It's nothing super special, and everything would still work without it; it just helps keep your system clean, because otherwise you'll have oodles of exited containers lying around. The -it part says I can type into the terminal and it will do something, and give me a TTY. The next one, -v, is a really important one, because it says: expose /foo into the container as /bar. You don't have to call it /bar, but I put it this way to illustrate what's happening. Essentially, if you have a directory called /foo on your host system and you ran this command, then inside the container it would show up as /bar. If you know what bind mounts are, that's actually how this is achieved. And the last option is the name of the image that I want to run.
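Pieced together from the description above, the sample Dockerfile would look something like this; the exact package name and message are my reconstruction of the slide, not a verbatim copy:

```dockerfile
# Base the image on stock Ubuntu 16.04, so apt-get etc. are available
FROM ubuntu:16.04

# Bake Python into the image
RUN apt-get update && apt-get install -y python

# What runs when the container starts: print a message and exit
CMD ["echo", "hello from inside the container"]
```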
So that could be ubuntu:16.04, or in this case c1, as the example. I tried to put this slide together to visually illustrate how this all fits. Of course you have Linux here, and I have a directory called /foo on it; nothing special. This is the command I had before; I left out --rm and -it for space reasons and to focus the attention. So what happens when I run this? Docker starts, creates the container for you, and then exposes /foo into the container as /bar. That's really all that's happening. The reason I drew the box around /bar the same as /foo is that it really is the same thing, just called something different. Then if I ran another command saying, hey Docker, run another container called c2 and expose /foo as /baz, that's all that happens. c1 and c2 are completely separate and don't know anything about each other unless you explicitly tell them to; they're seeing the same data under different names, and that's all there is to it. So now I'm gonna talk about the containers, or more correctly the images, that I actually pushed out to Docker Hub, and go through some demos to show you what's available in case you're interested. The first one, although potentially a terrible name (but like I said, I never expected anybody to be using this), is the poky container. It does work with OE-Core and BitBake too: if you clone BitBake and use OE-Core instead, that works as well, at least according to a coworker of mine; I haven't done it myself, but he does it all the time. So this is what the command would look like if you were going to run the poky container. The first part should look very familiar; it's the --rm and -it boilerplate.
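Spelled out as commands, the two-container picture just described might look like this; c1 and c2 stand in for whatever images you've built, and /foo is just the example host directory from the diagram:

```shell
# Both containers see the host's /foo, mounted under different names
# inside. --rm cleans up each exited container; -it gives a TTY.
docker run --rm -it -v /foo:/bar c1
docker run --rm -it -v /foo:/baz c2
```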
In this particular case I'm saying: bind-mount this mystuff directory in and expose it as /workdir; and then there's this other argument at the end, --workdir, which I'll talk about in a couple of slides. The default right now is based on Ubuntu 14.04. Richard just told me that it's been deprecated out of the release cycle for the Yocto Project, so that will soon say "default based on Ubuntu 16.04," because that's the new LTS. That'll be really easy to change: if you remember the FROM line in that Dockerfile, that's all I have to change. You can use a different distro for this; I built a lot of them, because that was one of the problems I was trying to solve. On that line, all I changed to say I wanna run on Fedora 24 rather than the default of Ubuntu 14.04 is this one argument, and then a completely different thing runs. So here's the --workdir argument. It has more functionality than this, but I'm trying not to overwhelm you while still explaining things: it's essentially "where is my shell going to be when I actually get dropped to a shell inside this container?" I'm going to get dropped to a shell and run git clone or whatever else I wanna run; what is the CWD? And all of these containers I've created, you can pass --help to them and they'll tell you what arguments you can pass; I'll show that in my demo.
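The commands just described can be sketched like this. The image and argument names are the published crops ones, but the tag spelling (like fedora-24) reflects what was on Docker Hub at the time and may have changed:

```shell
# Drop into a build shell in the default crops/poky image, with the
# host's ~/mystuff exposed as /workdir inside the container.
docker run --rm -it -v ~/mystuff:/workdir crops/poky --workdir=/workdir

# Same workflow on another distro: only the image tag changes.
docker run --rm -it -v ~/mystuff:/workdir crops/poky:fedora-24 --workdir=/workdir

# Each crops image documents its own arguments:
docker run --rm crops/poky --help
```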
Here are all the distros that we currently have on Docker Hub, and I'm more than willing to add new ones if you do pull requests; if you look at the format, it's relatively straightforward. Very quickly, the way this is all set up is that when you do a pull request, it automatically runs some builds on Travis to make sure it works, does some smoke tests, and then I can say, okay, I'm gonna commit that and push it. All of this happens automatically, and all of the images are rebuilt once a week as well, because I'm trying to pick up newer versions of packages in case there are CVE fixes or anything like that. I could do a whole talk about that. So now we're doing a demo. I screencast these, because I actually run full builds and such, so I can speed up time in certain places but keep everything connected for you; it never seems like "now let me skip to the part where all this stuff magically ran that you didn't get to see." There's my mouse. Okay, so here we're creating a work directory, which is where all my output's going to go, and... ah, it skipped like 20 slides on me, I'm sorry. Okay, so we're gonna go run the... oh, it skipped four. This is media for me, right? Does anybody have any idea why it's doing that? What it's doing is not actually playing where I'm at... okay, there it goes, sorry. So first we run the command with --help, like I said you could, and then I create a work directory, which is where all my output's going to go. This is completely the wrong one; just one more time. I hate PowerPoint, by the way. Yes, so let me pause it now. Thank you for your patience. So this is the first part: you can see that I ran --help with the crops/poky image, like I said you could.
The first thing I'm doing is creating a workdir, which is where all my output's going to go; that's the directory I'm going to bind-mount into the container. Now I actually go run the container. It should look really familiar from the previous slides. The thing I'm highlighting in red, an indication that I'm running in the container in this particular case, is that my shell prompt has changed: it now says I'm this user called pokyuser, and there's this crazy hash, the container ID. That's Docker saying: my hostname is this magic ID that got created. You can change that, but it doesn't really matter. Now that I know I'm in the container, I can do whatever I want to do, and in this case I'm going to act like I don't have anything and do what I'd normally do if I were going to use the Yocto Project, which is clone poky. So I do a clone of poky, which happens really fast because of magic, and you can see the poky directory there. Then of course the next thing you would do is source oe-init-build-env, which all happens in the container just like you would normally do, and then you go run whatever commands you want to run. But before I do that: I've used tmux here, and the top half is the container, that shell is running in the container, and all tmux has done is open a shell for me in the bottom that is not running in the container. What I'm going to do is edit the configuration file outside of the container, just to point out that you can use all the tools on your host to do your editing and everything like that. You don't have to do that in the container; you can use the container just for building, which is what it's for. So I go and add rm_work to the config file outside of the container.
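For reference, enabling rm_work is a one-line change to the build's conf/local.conf; this is the standard Yocto mechanism, though the demo doesn't show the exact line on screen:

```
# conf/local.conf: delete each recipe's work files once it has built,
# which keeps the build directory much smaller.
INHERIT += "rm_work"
```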
I change back up to the shell in the container and say: show me what's at the end of my configuration. And you can see the rm_work is there, because my workdir on the outside is the same thing as /workdir inside the container. So now we're actually going to do real work, which is bitbake, and this is going to be the fastest BitBake build you've ever seen from scratch. Magic, more magic happening. The build finishes, so I should have core-image-minimal now. I ls the directory where the image output would normally be, to show you there's actually an image there. And there they are: core-image-minimal, lots of them. Now I exit out, so my container is no longer running. Let's go see what's in my workdir, under build/tmp/deploy/images; I should see the same thing. And there they all are, right? The container is no longer running, but since I bind-mounted that directory, it worked. Now I go run the exact same command again, except, like I said, all I have to do is change one little thing: I change it to say debian-8 after it. Run the same command, and now I'm running in a container that's based on Debian 8 rather than Ubuntu 14.04. All I do is source oe-init-build-env again like I did before, and say bitbake core-image-minimal. Normally this would be almost a no-op, but because I added rm_work it actually has to do a few other things; it's really using the same state, though. You can see that it didn't download anything new or run a bunch of extra tasks it wouldn't normally do. The only reason it even ran the rootfs again is that I turned on rm_work, and that's just something that happens. So that's the poky container. Now there's another one, called the extensible SDK container.
If you've heard talk about the extensible SDK, if you were at the BoF or something like that, Tim may even have alluded to it in his talk, and Henry is doing a talk about it this afternoon. This is a container whose purpose is essentially downloading an extensible SDK and dropping you to a shell ready to run any of the commands you would normally run in the SDK, without having to source the environment scripts or anything like that. It works with the regular, non-extensible SDK as well; once again, the name is just what I called it when I originally created it, and it hasn't been changed yet. I've highlighted in red the part that's different from the other Docker commands I was running, the poky container I just demonstrated. You can see I now have crops/extsdk-container, which is the image name, and an argument you'd see if you ran --help: the URL, which is where the installer for the SDK lives. If it's already been set up, the container looks at the directory and says: there's an SDK here, I'm not gonna go download that for you, because it would overwrite this. So if you were going to run it and you've already set it up, you just leave the URL off, and I'll show an example of that. So, more demo. Let's see if I can get it not to magically jump forward. I'm running this again with --help; you can see the --url argument, it tells you what it is, and also a --workdir argument which I'm not using, but it's all in the help. Now I create a workdir again, just to start from scratch, and clear the screen. Then I run Docker to start the extensible SDK container, and I pass the URL for one of the SDKs that is published as part of the releases for the Yocto Project.
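The two modes just described can be sketched like this. The URL below is a placeholder, not the real release URL from the demo, and the argument names are taken from the crops --help output described above:

```shell
# First run: point --url at an eSDK installer and let the container
# download and set it up, then drop you at a ready-to-use shell.
docker run --rm -it -v ~/mystuff:/workdir crops/extsdk-container \
    --url https://example.com/path/to/some-esdk-installer.sh

# Later runs against the same workdir: drop --url, and the
# already-installed SDK is reused with no setup step.
docker run --rm -it -v ~/mystuff:/workdir crops/extsdk-container
```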
So here it is downloading, all sped up again because it takes time, and it's setting up the extensible SDK; and now you're at a shell, ready to run any SDK commands. It says I can run devtool help, so let me run devtool help and see what happens. Ah, it worked. Magic. Now if I run devtool build-image, which would be a common thing for you to do, it actually goes and builds the image, and this is all running in the container, right? You can see the output that's created, and I can exit out of the container, and, since I'm trying to establish a pattern of thinking here, I look in the workdir and see that the same output is in the workdir I bind-mounted in, even though the container is no longer running. Now, if I try to run it again with the URL, like I said, it's gonna yell at you: it says it's a big coward and it's not gonna overwrite anything. So I run it again with the URL part deleted. Now it didn't do any setup, because it says, hey, there's already an SDK here, I don't have to do that; I can go immediately back to running SDK commands without sourcing any scripts or doing anything else. And you can see it didn't do anything, because it didn't need to: I had already run devtool build-image. Now I delete that work directory and start from scratch again to show you the other mode, because it's common that you might download an SDK manually and already have it on your disk. Before, I said "go download this for me"; here I'm just gonna use wget to download the exact same thing I downloaded before, except doing it myself instead of having the container do it. So now I've downloaded it, and you can look in the workdir and see that it's called my SDK.sh, right?
So I run the container again, and I give it the --url option, but you can give it an absolute path instead. What's happening is I put the installer in the directory that's being exposed to the container, so now the container can see it, and you use the absolute path as it exists inside the container, because otherwise it doesn't know about it. Now I can run this, and it doesn't download anything; it just extracts and sets up the SDK for you. So if you've got multiple SDKs that you've downloaded manually, you can run it that way instead. And even if you just don't like the tedium of running the installer and then running the setup scripts and doing all of those things, you can use this as a "I'm super lazy, do this for me." Brian, if he's in here, does that all the time, because he's really, really lazy; no, I like Brian a lot. So the last one I'm gonna demo is the Toaster container. It's very simple: it runs Toaster. And if anybody's ever tried to run Toaster, you wanna hit things sometimes when you try to set it up and get it running.
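The run command described next can be sketched like this. The port mapping follows the talk (Toaster listens on 8000 inside the container, exposed as 18000 on localhost only); the image name is the crops Toaster image on Docker Hub, and the exact arguments may differ from the slide:

```shell
# Publish Toaster on 127.0.0.1 only, so nothing else on the network
# can reach it; host port 18000 maps to Toaster's port 8000 inside.
docker run --rm -it -v ~/mystuff:/workdir \
    -p 127.0.0.1:18000:8000 crops/toaster
```

Binding to 127.0.0.1 matters because, as the talk notes, Toaster has no authentication mechanism.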
I've highlighted in red again the things you should pay attention to. There's this -p argument, and to start with the last part of it, the 8000: by default, Toaster runs on port 8000, so what I've done in this command is say, when you run this, expose it as 18000. You could use 8000 again; I just made it different so you can see that you get to pick. So that says run on port 18000, and I say 127.0.0.1 because I only wanna expose it on localhost; I don't want everybody on my network to be able to see this. You don't have to do that, but I yell at people who don't, because there's no authentication mechanism for Toaster, and if somebody finds the page they can just start running things on your machine; so localhost is good. So here we go, running this stuff. Same pattern, hopefully: I create a work directory, you see I'm adding this new argument that says run on port 18000 instead, bind-mounting the directory in, and saying: go do Toaster. It goes and starts up Toaster, and it's actually faster than your own first run of Toaster would be, because normally Toaster downloads a bunch of stuff from the layer index and all these other things on first run, and the Toaster container has already been primed with all that information; it did it for you. You can see it created some files here. So let's switch over to a web browser, because that's where you use Toaster, and I say: show me what's running on localhost:18000, and I get the Toaster splash page. I'm actually gonna go through the steps of building something: I've created a project, I go over to the build page, and I say let's build core-image-minimal, because that's what I've been doing in everything else, and I hit build. Once again, by magic time travel, I make this happen super fast, and it finishes, and this is what you would normally see on a Toaster output screen, right? But this ran in a container on my Linux workstation. You can see I have some output on the shell that Toaster spit out as well, where it was doing some work. Let's go down and see what's in build-toaster-2, where it would have put the output. Are my images actually there? They should be, and there they are, just like in the other examples, except now Toaster ran this for me. So I exit out of the container, and then I go run Docker again, the exact same command. The reason I'm doing this is to show: I just ran this one run of Toaster and created this output; let's use the same work directory and see what happens when I go to the Toaster web page now. Oh, it remembered, right? Because the instance of the container was just writing data out into my work directory; none of that is preserved in the container. The container is just running Toaster, using a directory that I told it to use. So: ran the container, generated some output, exited the container, and I can come back any other time and get back to where I was in my Toaster state. So that's all of them; actually there's another one, but I don't have time to talk about it. These are the important ones. So: other platforms. That's really compelling and interesting to a lot of people. Why? Because BitBake does not run natively on Windows or Mac, and there are technical limitations; I won't go into a lot of detail. It's not just running BitBake: the problem is you have all this metadata that's going to go build all this other stuff, and all the tools it uses, all the native tools or whatever, you have to make all the metadata and everything it does also
work natively on Windows and Mac, and that's very difficult. I say "running containers on macOS" because that's what I'm going to demo today. The differences are basically in setup, and there are instructions at this link; I think they're being moved to the Yocto Project wiki instead. If you go try this and the instructions are lacking, please tell us and we'll update them, because the idea is that if you want to use this, it should be easy, not "go collect tribal knowledge from everywhere." I'm not going to walk through the setup, because it doesn't really follow the trend of what I've been doing here. The other difference is that it actually runs in a hypervisor: we're running Linux, that's what's happening, because Linux is great and is what lets us do containers. And it uses a Docker volume rather than a bind mount; how to create one is in the instructions. Basically, Docker owns this data, rather than you bind-mounting something in directly from your host. I try to go through this in the diagram again. Here's where we left off on Linux; so what's going to change? Where it had /foo before, I've changed it to a volume name, because that's the name of the volume I'm telling Docker to use; it's not an absolute path anymore. So I've changed my command; let's go through what's different. The first thing is, okay, I'm running on macOS now. The second is that I'm running Linux in a hypervisor. The next is that /foo goes from being a directory in Linux to being a volume that Docker owns; but it still looks like /bar and /baz to c1 and c2. So the question at that point is: I'm running this on macOS, and Docker owns this volume; how do I see the data on my Mac? Well, what I did, and this is in the instructions as well, is I made a container that you can use for Samba. If you followed all the setup instructions, you would run this command, which says: go start this Samba container for me so I can see my data. It creates a Samba container that has the workdir in it and exposes it back to macOS, essentially over IP. What I wanted to show with this diagram is that from the hypervisor up, it looks exactly the same as it did before; it's the same Linux. So here it is running on the Mac. The first thing I do is run Docker, because it's just an application on macOS, and you see it starting up in the top right; very timely, there's a little whale with containers on it, animating. I run a terminal and a browser, and because I'm going to run the Toaster container, the first thing I do is go to localhost:18000, and there's nothing there yet. Now I go back to my terminal, it's just a term, and run the exact same command as before, except right here you see I have my volume instead of /foo, because this is a volume that I've created; other than that, it's the exact same Toaster command as when I was running it on Linux. I run this (very slow typing; I should have sped that up too), and this is all very familiar: it's starting Toaster. Let's go back to the web browser, hit enter, and see what happens: now there's something there, the Toaster web page. So this is all running on the Mac, in the hypervisor. Now I go open a new terminal tab where I start that Samba container I mentioned; that's all you do after you've done the setup. Then you can open Finder, because that's what you do on a Mac to explore, and, this is really tiny and I apologize, what I'm typing in is essentially the URL of where my Samba server is, and it's the workdir share. Now, if you have super ocular capabilities, you can see that this is 127.0.0.2 instead of 127.0.0.1. That's a quirk of the Mac; I'm more than willing to talk to you about it after, and we detail it in the instructions and how to get it to work; it's just something you have to do. So I connect to that server, I say: hey, I'm a guest, and oh, there's my output, and god, it's so tiny; I promise you it says "build." If I go up into my container and ls, I see the toaster sqlite file and the toaster web files, and down here in my nice Mac file browser I can see the same files. So if somebody uses a Mac and that's what they're used to, this would be familiar to them. I go over into Toaster and build core-image-minimal exactly the same way as I would have before. Now, I only gave my hypervisor two cores and eight gigs of RAM, so this takes much, much longer; I still sped it up for you, but if you pay attention, I think it says something like two hours and twenty-four minutes. Yeah, it's a really long time, but I'm not using a lot of resources. So I can go over here, clicking on the output directory, going to the deploy directory, and I can see my images, completely in Mac land. The point is to convey to the user that they're doing the normal Mac things they would usually do; that's what I'm shooting for. And I will say that when I started working on this, I honestly had no intention of doing Mac or Windows; that was kind of a gift from the Docker people, with the fact that they've made their hypervisor so transparent. You can do this with VirtualBox and in all these other ways, right? That's nothing new. But the transparency of "oh, I just open a terminal and run the exact same commands that I showed you in all the other demos"
and workflows, that idea of: I've set up a workflow around Docker that works, so why not do it on the Mac, why not do it on Windows, if Docker helps me out with this nice hypervisor? And I guess the question would be, what about Windows? Well, there are instructions for Windows, for Windows 8.1, and it uses the Docker Toolbox, which I'm not particularly fond of, because it tries to be helpful and installs VirtualBox. It still tries to be transparent, but it's not quite as nice as on the Mac. On Windows 10, where you have Hyper-V, it follows much more this mode of, oh wow, it's really transparent, because it's using Hyper-V as the hypervisor and doesn't have to install anything else. I don't yet have instructions for Windows 10 on the wiki; we intend to do that, or if you're particularly motivated and really want that, you can help out and write some. It's really straightforward. All right, do you have any questions? Yes? Oh yes, thank you, Tim. How about now? Yes. So the question is, how do you version your code with your build environment. Let me make sure I'm understanding what you're asking: are you talking about the changes to the metadata, or... Well, you might want to upgrade Yocto at some point, right? So the trick is that I'm putting all of the stuff that is persistent in this work directory, which has nothing to do with the container. So you just do that however you want, because it's not attached to the container by any mechanism whatsoever. You go clone Poky and you put it in a directory, and the container is just letting you run the command. It's just a fancy wrapper: if you go to the Yocto Project reference manual, it says you've got to install git, you've got to install these other things, right? This is a quick way to say I don't have to do that. But you don't have to clone inside of the container. Actually, in my own workflow I bind mount, which is an advanced thing, and I'm more than happy to talk about it later, but I bind mount multiple directories in there: one of them is my source directory, one of them is my build directory, and so on. So it doesn't all have to be in the workdir. The reason I set it up this way is that it's an easy introduction, a quick way to get into it, and then you can do it as flexibly as you want. Except in the case of toaster: the toaster container actually does clone Poky, and we keep multiple versions of the images out there, so you can use one based on morty, one based on master, and I guess one based on krogoth is out there. So in that particular case, where we have actually cloned a version of something, we have multiple versions. But for the poky container and the extsdk container it doesn't matter, because all the code is coming from the user. Does that answer your question? Yeah, but I might follow up with you. Okay, absolutely. Yes? Oh, Mike, okay. So I actually have a couple of questions, and you can stop me. Okay. I've been using a Docker container for like three years now to build, and your design approach is a little different than mine, so that's where my questions stem from. You kind of hand-waved over user IDs. By default the containers will run as root, so obviously you're creating a user, because Poky does not like to be built as root. How are you handling user IDs? I could probably do a whole talk on that. The quick answer is, that's one of the things that's magic about the wrapper: unless you pass in arguments, it looks at the uid and the gid of the workdir. I've set up a sudoers file that allows me to run useradd commands, and it dynamically creates a user when it starts the container that matches that. So every single time
you run the container, it dynamically creates a user inside that container that lets you run what you want. So then you match, but you're not going to be matched to your outside user? Yes, you are; we'll talk about that more offline. So the uid and gids are matched, there's just no name attached to them. If you're fetching the outside uid and gid, then the Mac should just work fine without having to do Samba. Yes, it does; you don't need to do Samba. The volume was because I was originally doing the cross-platform case, and that would not work on Windows. Right, correct. So yes, you do not have to use a volume on the Mac; it would work, and he actually does bind mount directories straight from the host. But there's a bit of work that I did: I wrote a little application called usersetup, and I did the sudoers thing because I won't let you actually create a user that has uid 0. That's not for security; that's to try to prevent users from shooting themselves in the foot, the "I accidentally ran as root and I didn't mean to" case. And I guess my last question would be, why all the distros? Why not take the approach of just dumbing it down and presenting the container? For example, I have all the people I work with use a script, like "containerize": just install that, and alias bitbake and devtool to containerized bitbake and containerized devtool. That way you would abstract away the fact that there's a container involved at all, and you could dumb it down for people even more. And you can write whatever wrappers you want around this. I'm advocating it so that there is one blessed build environment, you see? Does that make sense? Well, no, because I'm not trying to push a particular build environment; I'm trying to give people flexibility. Or actually, I was trying to give me flexibility. And in my scenario, the multiple distros are very important. I mean, there are numerous times where I'm chatting with the Yocto development team in IRC, and they're like, oh, this doesn't look like it works on this distro, and I'm like, oh, let me try it real quick, because I can just do it in the workspace I'm already in. That's really important to me. And so one thing I want to say is that I'm not trying to push this as the solution. This is just something that I hope, if it's helpful to other people, they can either use as an example or build upon or whatever they want to do. Things like being one solution that's very good for individual teams, I highly suggest that; script the heck out of it. But what I tried to do, if you notice, is I don't say download the script, or run curl and pipe it, or run wget and pipe it to something else, which is a common way people do things now. I was trying to be sort of transparent, because I also want to teach people what's happening. Maybe that's the wrong thing for me to do, but that's how I approached it. Like I said, if you have great ways of doing things, I'm interested in hearing them, because the more I learn, the better it's going to get. I think that's why I put the problem slide in there, to try to show, here's why there are like 13 different containers or whatever. Yes? You showed like five or six different distros that you're running, but are they all just using the same kernel that you're running on the host? Yeah. Because these different distros are all based on different versions of the kernel; that's why I'm asking this question. How does that keep your interoperability tests valid? Well, the first thing is, I'm not using the containers to do interoperability tests; I'm using them to quickly test things that probably aren't kernel related. So yeah, if you
want an isolated build environment, the only way I think you can really do it right now is a VM, and not even using hardware extensions. I mean, if you really want to be completely strict about it, you can't even let it use hardware extensions, because then your hardware is impacting what you're running as well. That's probably never going to matter, but you can go completely in that direction. I did this more as a flexibility thing. For instance, another great example: I upgrade to Fedora 25, and the compiler now no longer builds OE-Core or Poky because it's too new. Well, now I just go use my container and I can still build it, while still keeping my workstation distro as new as I want, or using Arch, for instance. So there might be issues with the kernel, but except for major ABI changes across things like system calls, you usually wouldn't run into a problem, and that's what most people who run containers across the enterprise are assuming as well. Just resources: you mentioned your wiki and the deck; do you have URLs? Yeah, so the original was that wiki URL, and then this is all of the individual containers, and I think the wiki actually links to these too; if it doesn't, we need to make it do that. What's that? Yeah, it was our graphic designer that made me pick that color, so no, I can't easily change it, honestly; they made mine orange. What about the wiki? Yeah, you can go to the ELC website, actually click the slides link, and download all of these. Yep, that's it. Just another example of how to use this: I used his same infrastructure to test, because I was supporting the Eclipse plugins for the Yocto Project and I needed to find out, for the new distros, what the minimum install was in order to allow the Eclipse plugin to be built. I did it with these containers; all I did was change a little bit about what was installed and what it was running. So this framework that he's done is really extensible. And I was going to do my devtool demo talk with the containers on the Mac, but I had one package that I needed to demo that didn't build cleanly, and we haven't figured out why yet, so that's why I didn't do that. No more questions? I'm going to keep this really quick: on Windows 10, has it been tried but not documented yet, or is it in process? Yeah, so I'll just tell an anecdote about that. I have a Windows 10 machine, and I should have clarified that Windows 10 Pro is the only one that has Hyper-V in it. I don't know why Microsoft decided to do that, but that's the way it is right now; they may change it, but it's Pro and essentially up. So I have a laptop, a Xeon laptop; I got it from somebody else, I didn't even know they put Xeons in laptops. It has four cores, and my workstation is a Xeon that has like 20 cores or something like that. So what I did, just as a test since I'm doing this on Windows 10, is a build on my Linux workstation where I limited the CPU set; I said, no, you only get four cores, to try to mimic the Windows laptop that I have. And my Linux workstation is actually an older rev of Xeon than the laptop. So I did that, then I went and did the build on my laptop, and it was actually faster on my laptop than on my Linux workstation, because of the improvements they've made to the Xeon. So I basically used containers to fake out the build to say, you only get four cores, and it still won. So yeah, it works on Windows, and if you have the hardware, the performance isn't going to be that much worse either; it shouldn't be, at least from what I've seen. But yeah, I've run it on Windows 10 Pro. Okay, thank you. Yeah, the Mac instructions would for the most part be the same as the Windows 10 instructions: you would go download Docker and install it, and then do those other commands. I just haven't expanded that on the wiki yet.
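
[Editor's note: the demos in this talk never show the commands on a slide, so here is a hedged sketch of the workflow described above. It assumes the CROPS images (crops/poky, crops/toaster) and the flags from the project's setup instructions; treat the image names, the --workdir option, and the port numbers as assumptions to check against the wiki, not exact commands from the talk.]

```shell
# Sketch, not the speaker's exact commands. Everything that touches Docker
# is left as comments or as wrapper functions, so nothing runs by accident.

# Run any build command inside a throwaway crops/poky container, bind
# mounting the current directory as the workdir. The container is assumed
# to create a user matching the workdir's uid/gid, as described in the Q&A.
containerize() {
    docker run --rm -it \
        -v "$PWD":/workdir \
        crops/poky --workdir=/workdir "$@"
}

# The "one blessed build environment" idea from the audience: wrappers so
# users never see that a container is involved at all.
bitbake() { containerize bitbake "$@"; }
devtool() { containerize devtool "$@"; }

# The toaster demo: a named Docker volume for persistent state, with the
# web UI exposed on localhost:18000 (the URL opened in the browser demo).
#   docker volume create myvolume
#   docker run -it --rm -p 127.0.0.1:18000:18000 -v myvolume:/workdir crops/toaster

# The Windows 10 anecdote: pinning the Linux workstation build to four
# cores, to mimic the four-core laptop, can be done with a cpuset limit.
#   docker run --rm -it --cpuset-cpus=0-3 -v "$PWD":/workdir crops/poky ...
```

With wrappers like these on the path, `bitbake core-image-minimal` runs inside the container but reads and writes the host checkout, which is why upgrading Poky is just a matter of updating the cloned directory, independent of the container.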
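
[Editor's note: the dynamic user creation described in the user-ID question can be sketched roughly as below. The real helper is the usersetup application mentioned in the talk; this function is an illustrative guess at the mechanism (match the uid/gid of the bind-mounted workdir, refuse uid 0, create the user via sudoers-permitted commands), not its actual code.]

```shell
# Illustrative sketch of the uid/gid matching; the function is defined but
# not called here. A real entrypoint would run it before dropping privileges.
setup_matching_user() {
    workdir=${1:-/workdir}

    # Who owns the bind-mounted workdir on the host side?
    uid=$(stat -c '%u' "$workdir")
    gid=$(stat -c '%g' "$workdir")

    # Refuse uid 0: not for security, but to keep users from accidentally
    # building Poky as root.
    if [ "$uid" -eq 0 ]; then
        echo "refusing to create a uid 0 user; run as a normal user" >&2
        return 1
    fi

    # Permitted through a narrow sudoers file baked into the image, so the
    # unprivileged wrapper can only ever run these specific commands.
    sudo groupadd -g "$gid" builder 2>/dev/null || true
    sudo useradd -u "$uid" -g "$gid" -m builder
}
```

Because the user is created at container start, files written into the workdir come out owned by the host user, which is what makes the bind-mount workflow (and skipping Samba entirely on Linux) painless.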