Okay, I'm going to talk about the Fedora layered image build service, something I've been working on for quite a long time. [Sorting out the recording and slides.] Funny thing is, we actually finished this once before, and then the image format changed and invalidated a lot of the work, so we had to start over. But now we've pretty much done all the things we set out to do for this first pass.

All right, so first I'm going to go over what we're going to talk about today, and if these topics don't interest you, please go find a session that does, because there's a lot of great stuff going on. We're going to start with containers and Linux in general, just to level-set what this is, where the motivation came from, and why we're doing this kind of thing. I'll talk directly about Docker, because it has certain implications on the design and on why we need a new system like this. We'll talk about Docker build and the Dockerfile, and again why that leads into the system we've created. We'll talk about release engineering, for context on why the build system needs to constrain the build environment, which can otherwise become a Wild West. We'll then talk about the Fedora layered image build service itself, which is comprised of OpenShift; the OpenShift Build Service, known as OSBS; koji-containerbuild, the plugin that ties it into Koji and into all the tooling Fedora contributors already use; and Atomic Reactor. Then we'll get to the Fedora-specific implementation and how we do certain things. I almost forgot: there's also going to be a demo.

So, if containers are actually new to you: this is kind of an old concept, basically operating-system-level virtualization. It allows you to do multi-tenant execution of applications without modifying the applications, by confining them within namespaces. They don't, by definition, inherently provide security; a lot of security wrapping has been added around them as time has gone on. But the effect is that they let you confine different processes in different namespaces.

I like to argue this started with chroot. It was very unsophisticated: it allowed you to take an application and execute it in a context outside of what it originally should have been in. You lied to it about what its root filesystem was, and the application thought that was the whole world, and it was content, and life went on. Time went on, and more sophisticated solutions appeared that weren't necessarily widely used, or weren't available on Linux: BSD jails, Linux-VServer, OpenVZ. Those were things worth noticing.
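Just to make the namespace idea concrete, here's a minimal sketch (not from the slides) using util-linux's unshare: a shell dropped into its own PID and mount namespaces can only see itself.

    # Spawn a shell in new PID and mount namespaces (requires root):
    sudo unshare --fork --pid --mount-proc bash
    # Inside the new namespace, the process table starts over:
    ps aux    # shows only this bash (as PID 1) and ps itself

That's the trick containers build on: the application is lied to about what the system looks like.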
So IBM wrote LXC, which used the facilities the kernel exposes to user space, namespaces and cgroups, to actually set up runtimes and confine them. A little bit later, systemd-nspawn came along and expanded on top of that, using similar kernel capabilities, similar APIs, similar components, to further confine things. Interesting fact about how that came to be: the systemd developers wanted to test systemd rapidly and iteratively without having to reboot machines, so they wanted to spawn a new systemd inside a container.

Then in 2013, dotCloud releases Docker, and it originally used LXC, which is why I put LXC in the historic lineage. It's interesting, because the original incarnation of Docker used LXC on the back end, and that has since been replaced by their own library, libcontainer. Why Docker got popular, and why it's so interesting, is that it defined a lot of standard workflow components. You have this concept of an image, and a container is an instance of that image. The image format itself can be easily distributed, and the distribution mechanism is easily hosted in your own environment, easily replicated, those kinds of things. A lot of the things that used to be roll-your-own, they provided tooling for.

Then in 2014, CoreOS made rkt. rkt is a different solution to a similar problem space: they created an application container specification and image format specification, and they created rkt as the reference implementation of it. Those two efforts, Docker and CoreOS, led to the OCI, the Open Container Initiative; I don't remember exactly when they officially renamed it. Basically, a bunch of companies got together and said, hey, we need standards for all this stuff so things are portable. So now we can have OCI application images run on different execution engines, where an execution engine in this context would be Docker or rkt.

The reason that's interesting, in my opinion, is that it leads to this thing called runc, which lets you execute container runtimes directly from images that you wouldn't otherwise be able to run in Docker or rkt. And as of Docker Engine 1.11, this is actually how Docker itself runs applications: they took those components and reworked their engine around them. Then in 2016, containerd happened, which is a daemon that can orchestrate local runc execution environments, which again is effectively how Docker 1.11 handles this. It lets you run anything that complies with the OCI specification.
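To illustrate the runc point, here's a hedged sketch (not from the talk; the container ID and the use of a Fedora image are arbitrary) of running a container directly from an OCI bundle, with no Docker or rkt engine in the loop:

    # Build an OCI bundle out of an exported Docker image:
    mkdir -p bundle/rootfs
    docker export $(docker create fedora) | tar -C bundle/rootfs -xf -
    cd bundle
    runc spec                  # writes a default config.json for the bundle
    sudo runc run mycontainer  # execute it directly with runc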
The big history lesson there is that this didn't just naturally happen overnight. And Docker is not the only execution environment, but it is, without a doubt, the most popular one. Docker itself has a handful of things that need to be taken into account, both for the execution runtime and for image creation. The Docker engine, once upon a time known as the Docker daemon, and still called that in a lot of places, especially man pages and documentation, is your single point of entry. It's an API that can be accessed over a UNIX socket or remotely over HTTPS; well, hopefully only HTTPS.

Containers are instances of images; you run a container from an image. And images are built in a standard way using a Dockerfile. That, I think, is what led to the ubiquity of people thinking Docker is containers and containers are Docker: it provided standard tooling, so you have this recipe, and the thing you end up with is, in effect, a tarball of tarballs plus metadata. The combination of the tarballs and the metadata tells the engine what it's operating on and what to do, and you get these running instances of the thing. SELinux support is in Docker, thanks to Mr. Dan Walsh, so people are actually setting enforcing on it. And there are pluggable back ends for isolation, for storage, and everything else. So Docker adds a lot of flexibility around the original idea, and obviously a lot of standardization around this concept of containers.

The main thing we're going to focus on is the images themselves, because we want to create them. However, we need to understand the difference between Docker images that are considered base images, or platform images, and layered application images. The base image is generally something your distribution produces through its standard build mechanisms; there's Debian, Arch, SUSE, Alpine. Everybody who creates a base image creates it from nothing: you lay down a root filesystem and ship it out. Then everybody else creates their application layered images on top of that. And that layered application image space is what the Fedora service is for: it's for Fedora contributors to build layered images that we can then ship and distribute from Fedora, with Fedora content sources. I say content because it can be a combination of things, if you want to pack content up into a series of Docker images.

So here's a basic Dockerfile. The FROM at the top names your base image, and the base image for everybody in the room is hopefully Fedora; if not, we're always looking to improve, so let us know how we can get you to use Fedora. There's a MAINTAINER entry; RUN commands; EXPOSE for ports; ADD for files; CMD and ENTRYPOINT. There's a whole lot of other things you can put in there; the documentation is good, go check it out.

The one thing I need to point out is that the flexibility of this is effectively the same as a shell script. Which is great: it's very powerful, it's very convenient, it allows you to get up and running very quickly, it gives you flexibility in many ways. But it also makes it very hard to replicate a build. If you think about building images in such a way that you can reproduce and replicate them, that becomes very difficult when, in one of those RUN commands, there's something like a curl command pulling from bobsawesomeproject.io, and a binary gets fetched and run as part of the build. Then bobsawesomeproject.io disappears, and you can't do rebuilds anymore because your install source has been destroyed, and you can't do updates anymore. That becomes an instant problem born from the mindset of "oh, that'll always be there."
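Here's roughly what that basic Dockerfile looks like; the package, port, and file names are illustrative, and the commented-out RUN line shows exactly the kind of unreproducible step I'm warning about. Each instruction also produces a new image layer, which matters in a moment:

    FROM fedora                            # the base/platform image
    MAINTAINER Jane Doe <jane@example.com>
    RUN dnf -y install httpd && dnf clean all
    ADD app.conf /etc/httpd/conf.d/app.conf  # file from the build context
    EXPOSE 80                              # port the application listens on
    # Builds fine today, unrebuildable the day the remote host disappears:
    # RUN curl -O https://bobsawesomeproject.io/app-binary
    CMD ["/usr/sbin/httpd", "-DFOREGROUND"]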
Here's the output of the build, and this goes to show that each step spawns a new layer; each instruction adds a new layer going into the final layered image. We actually do a squash: we squash out a lot of the intermediate layers, like the ones that do DNF installs, because nobody actually needs the layer that's just the DNF transaction. Very nice.

So this is the piece I want to call out: it's entirely possible to have no sanitization of your sources, no recipe by which you can go back, and no audit trail. Which brings us to release engineering. Release engineering is a software production pipeline that is reproducible, auditable, definable, and deliverable. That's a definition I borrowed, because we couldn't really find anybody who defines it better. The idea is that release engineering is the difference between manufacturing software in small teams or startups and manufacturing software in an industrial way that is repeatable, gives predictable results, and scales well. Those are things we want to have for Docker images, just like what we have today for RPMs. We're going to revisit the release engineering point; I just wanted to touch on it, because as we go through why we've made some of these design choices and why things are put together this way, it relates back to wanting those properties.

So, OpenShift. OpenShift is a container runtime platform. It's open source and it's developed primarily by Red Hat. It's based on a technology called Kubernetes, which originally came out of Google, with some of Red Hat's own additions on top. There are two editions: OpenShift Origin, and OpenShift Enterprise. OpenShift Enterprise is the productized version; OpenShift Origin is what we use for the build service. The reason we use it is that OpenShift provides a lot of really nice things, and the main thing we care about in this particular problem space is the build automation.

Really quick on the architecture; this diagram is huge, apologies for the small print down below. You have masters and nodes, and each node runs a series of pods, and each pod can have multiple containers in it. The reason we talk about that is that it shows OpenShift itself can scale, so by using it for our build pipeline we can inherently scale the build service just by scaling OpenShift, and then let OpenShift schedule resources as necessary for the builds.

So it's a big platform, and the advanced features are the build pipelines, a registry, application lifecycle management, CI integration, binary builds, and triggers. The triggers and the build pipelines are the parts that are very interesting for our use. Because we're trying to build images, we need these build pipelines.
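As a hedged aside (not from the slides; the BuildConfig name is a placeholder), driving one of those builds from the command line looks roughly like this with the oc client:

    oc start-build my-build-config --follow   # kick off a build, stream its logs
    oc get builds                             # list builds and their status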
So, OSBS is the OpenShift Build Service. What we do is take OpenShift Origin, an upstream project called Atomic Reactor, and the osbs-client bindings, and kind of shove them together; that combined solution becomes OSBS. We create a new build primitive inside OpenShift, and what that does is define a new build configuration that suits the characteristics of our builds. The build pipeline comes with pre-defined build strategies that are going to be common for most people, but it also provides a framework to define your own. So we define our own, and as a side effect that defines where our inputs can come from. What's interesting about that, going back to input sanitization, is that we can actually sanitize our inputs at the build level: we have a definition of what's allowed to go into the build.

We have OpenShift for scheduling, and again that goes back to this being a scalable build system by virtue of being on a scalable OpenShift. I want to say OpenShift Enterprise has a reference customer running something like tens of thousands of nodes; a lot of nodes. The moral of the story is that the build system can scale beyond the hardware we actually have available, so we should be good there.

It presents a well-defined component to the developer, and what becomes interesting there is that we can tie into it using common tools we already have, which is what we've done, to let Fedora contributors build these kinds of things. So inputs come from where our rules say: your source is a Dockerfile in dist-git, git commits, and the build runs in a buildroot, a limited Docker runtime. "Buildroot" is a very overloaded term if you say it without context. What it means to us is a Docker image that is pre-populated with all the things you'd need to actually perform a Docker layered image build inside of OSBS. So it contains the Atomic Reactor command, which I'll describe in a little more detail in a bit; Atomic Reactor does a lot of the automation around the build. It handles a lot of the metadata, it handles a lot of the upload responsibilities, those kinds of things.
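Putting those pieces together, here's a hedged sketch of what such a custom-strategy BuildConfig might look like: the custom strategy is the new primitive, and the "from" image is the buildroot I just described. Every name in it is hypothetical, and the real OSBS definition carries much more configuration:

    cat <<'EOF' | oc create -f -
    apiVersion: v1
    kind: BuildConfig
    metadata:
      name: cockpit-layered-build            # hypothetical
    spec:
      source:
        type: Git
        git:
          uri: https://src.example.org/docker/cockpit.git  # dist-git stand-in
      strategy:
        type: Custom
        customStrategy:
          from:
            kind: DockerImage
            name: osbs-buildroot:latest      # pre-populated with atomic-reactor
    EOF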
The buildroot is an unprivileged container runtime, with SELinux enforcing, with inputs sanitized down to what we say should be allowed, and with no unvetted sources allowed in. We actually do filtering at the Docker bridge: any time the system spins up a Docker container instance, that container's network is joined to the Docker bridge interface, and that bridge interface has restrictions on it. We've confined it every possible way we can think of; if you can think of one we haven't, please talk to us, we'd love to know if it can be locked down further. Just like the Koji builders for RPMs, we don't want to allow the builders to randomly pull things from the internet; we need our inputs sanitized.

Within this, OpenShift ImageStreams serve as input to the builders, and the reason we do that is that ImageStreams are effectively pointers to remote Docker registries. For those who aren't familiar with Docker registries, a registry is kind of similar to an RPM repository, only it hosts Docker images and their metadata, with strategies for refreshing metadata, image imports, those kinds of things. What's great about ImageStreams is that as we build out our environment and create more registries, or add more components, we can just update the ImageStream pointer and it will aggregate the data and refresh metadata as prescribed in the ImageStream definition.

It also utilizes OpenShift triggers to spawn rebuilds based on parent image changes. Now, this comes with an asterisk: we're still working on this, it isn't fully designed out, and there's documentation out there for things being worked on in that space. But it is a goal, because right now I think there's a report out there showing that something like 73% of Docker images advertised to serve web content are still vulnerable to things like Heartbleed or Shellshock. On Docker Hub, anybody can upload anything they want, nothing forces it to ever be updated, and then you're just pulling these binary blobs and taking them at face value. That's fine if you trust where the content comes from; we would like to build trust around our registry, such that the content you pull from us comes from an auditable source, and when there are bugs, fixes flow down to you.

So, Atomic Reactor. It's the single point of entry to the build tooling inside the constrained buildroot in OSBS. Now, that's how we use it, but you can use it at face value on its own; you don't have to use it the way we use it, and it's a very cool standalone tool. We use it with great success inside OSBS. It automates tasks of all kinds, and it does so with plugins. There are plugins to push images; you can inject repositories; you can change the base image, so in your configuration you can say, okay, if the FROM image is "fedora", it actually needs to be "registry.foo.com/fedora" or whatever, and without modifying the originating Dockerfile it does that swap and inject for you. You can mark registries available inside the builder, and there are simple tasks that automatically upload to registries. There's a whole bunch of other plugins I'll skip because they're not directly related to what we do, but it's a good tool, and we use it inside OSBS.
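If you want to kick the tires standalone, usage is roughly the following; I'm writing the flags from memory, so treat this as a sketch and check `atomic-reactor build --help` for the real interface:

    atomic-reactor build git \
        --uri https://example.org/my-image-repo.git \  # hypothetical repo with a Dockerfile
        --image my-test-image                          # name to tag the result with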
Atomic Reactor actually comes from the same upstream project that created OSBS. As a side note, osbs-client and Atomic Reactor come from Project Atomic; it's all open source. Fedora did not create them; we're upstream contributors, and fans, but we are not the creators.

Quick status update: on automated testing, we're working with the Fedora QA group right now; we want to move automated testing into Taskotron itself, fired off by fedmsg messages. Then we'll have promotion of images from candidate status; I don't love the word "candidate" for the term, but for a candidate to go out we'll tag the image and move it along as an unmodified artifact, so the thing that is tested is the thing that is released.

So, the Fedora implementation. We have layered image maintainers, which is hopefully all of you. You interact with dist-git just like you do for RPMs, but we namespace dist-git now: there's a namespace for rpms/ and one for docker/. We can look at the rpms/ namespace and see the spec file, the sources, the patches and everything; and we can look at the docker/ namespace and dist-git holds just the Dockerfile and sources, and eventually we'll add tests, a tests/ directory, those kinds of things.

You then execute the container-build command, which fires off to Koji, Koji being where all Fedora builds happen: RPM builds, container builds, ISOs, everything; it's our central place to build all the things. The koji-containerbuild plugin lives there, and it goes out and requests that OSBS generate, or schedule, the build. OSBS's OpenShift Origin layer has the REST API, which is where the request originally goes in, and Atomic Reactor runs inside the buildroot and performs the build for us. All of this is handled through the osbs-client API, and the reason there are two APIs is that the OpenShift Origin REST API does a whole lot of things; osbs-client effectively allows us to use OpenShift in a pre-prescribed way for our exact build type. It presents only the components we need from greater OpenShift, so we can use it in exactly the method we want, and it also provides Python API bindings so you can drive it from Python.

Now the build happens, and then we immediately ship the build out to the candidate image registry. Right now there's actually only one registry; we have a project underway to expand on that. It goes out to the registry, then we can immediately pull it and test it, those kinds of things, and then it goes out to the users. The build data is also imported into Koji: there's a component of Koji called the content generator metadata import, where Koji takes the build and imports all the necessary information into the Koji database, for historical reasons and for enabling rebuilds and reproduction. Which brings it back to the release engineering aspect of this: we can reproducibly create these images, because we know exactly the list of components that went in, we know exactly the git commit ID in dist-git where the sources originated, and we can verify its content inputs, because we have limited the buildroots to not allow external sources to be injected. We know where everything originated and who provided it, and we can re-create it if we need to. And then the users download it.
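End to end, the maintainer side of that workflow amounts to something like this (the component name is the one from the demo later; the commit message is illustrative):

    fedpkg clone docker/cockpit     # the Dockerfile lives in the docker/ namespace
    cd cockpit
    vim Dockerfile                  # bump the Release label, etc.
    git commit -am "bump release"
    fedpkg push
    fedpkg container-build          # koji-containerbuild hands this off to OSBS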
So, dist-git: again, branches map to Fedora releases; I suspect most people know that, but just in case. fedpkg: show of hands, who's using fedpkg, who maintains stuff? For the rest of you who don't know, fedpkg is a package maintainer helper. It lets you navigate dist-git and do things in the context of your content: you can initiate builds, do local and mock builds. Koji is our canonical build system; everything is built there, one way or another, everything from install ISOs to cloud images. koji-containerbuild is the plugin that orchestrates between Koji and OSBS. And then we have our registry, which is the Docker content delivery point.

Now, before we go into questions, we're going to do some demos. [Fiddling with the display and terminal.] So, in my demo checkout I have namespaces: I have stage, which mirrors staging dist-git, and inside of stage we have the docker namespace, such that we can go in and do fedpkg clone docker/cockpit.

Real quick, for those who don't know: Cockpit is really cool; it gives you the ability to manage your servers. It's not crazy heavyweight; it runs on demand and has some really nice features with a very minimal footprint. You get a shell on your system, you can manage Docker, and it will orchestrate Kubernetes and OpenShift and all kinds of interesting things.

So we have a namespace here, docker/cockpit, which pulls the Cockpit Docker component. Now, about atomic install: for those who are familiar, Project Atomic has an atomic command, and the atomic command allows you to install containerized applications, using labels inside your Dockerfile to run certain commands at install time, at uninstall time, and those kinds of things. If we look at our Dockerfile, there are a lot of labels, and the ones we care about from the aspect of being a Docker layered image maintainer are: the BZComponent label, which maps to Bugzilla so that users know where to report issues; the Name, which is going to be docker/cockpit; the Version, which actually comes from the version label down here; the Release; and the Architecture. Eventually we want to support all the architectures; right now it's x86_64 only, for a handful of reasons, mostly the Docker registry. Docker itself will run on multiple architectures, but the Docker registry doesn't have a well-defined, standardized way of distributing multi-architecture images yet. We're trying to sort that out, but for now we're doing x86_64. And then we have the labels for INSTALL, UNINSTALL, and RUN, and these are the labels the atomic command will actually use to perform those actions.
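Pulled together, the label block we care about looks roughly like this; the values and script paths are illustrative, not copied from the real Cockpit Dockerfile:

    LABEL BZComponent="cockpit"     # where users should report issues
    LABEL Name="docker/cockpit"
    LABEL Version="0.114"           # illustrative
    LABEL Release="2"
    LABEL Architecture="x86_64"     # x86_64-only for now
    LABEL INSTALL="docker run --rm --privileged -v /:/host IMAGE /usr/local/bin/install.sh"
    LABEL RUN="docker run -d IMAGE"
    LABEL UNINSTALL="docker run --rm --privileged -v /:/host IMAGE /usr/local/bin/uninstall.sh"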
Back to the demo. The Release and Version here, these come from the labels down below, so we're going to bump the Release for the sake of the demo. I have git add aliased because I'm lazy. Commit, "bump release for demo", git push, and then fedpkg container-build.

[Audience: is that the official fedpkg package?] Yeah, 100%, this is live against Koji. The question was whether this is the official fedpkg or my own build; the pieces that are needed landed in fedpkg and they're part of F24.

So we can see here we've got our task info, with our URL into Koji, which everybody should recognize; it's going out and scheduling the createContainer task, which fires off to OSBS... [watching the task] and I think I just failed. It's okay, it's been a while since I last refreshed this demo. There's a lot of info here, and under the task results you'll see, when you do a build, where it's available in the registry, and you can just go grab that URL.

On tags: there's a unique identifier tag, which I believe is created from an epoch time value; there's the actual name-version-release tag; and then we also update "latest". For those of you unfamiliar, latest is kind of a magical tag in a namespace, such that if you don't provide a tag of any sort, it just assumes latest, and it's meant to always track the latest version of your image. So if you do "docker pull fedora", the back end actually assumes what you meant was "docker pull fedora:latest"; same if you pulled, say, projectatomic/cockpit without a tag, it would just assume latest.

We can actually drill down and see information about the container build; the task results have information about this, and then there's the build log, which is very long and verbose; for anybody familiar with Atomic Reactor this will look familiar, it's the debug output and it shows you all kinds of stuff.

I was really hoping to show the content generator import. There we go, okay. So the content generator import gives a list of all the RPMs that went into the buildroot... and it's just not showing them now; oh come on, I had this pulled up a second ago and it's gone. Anyway, the content generator metadata import gives us the full manifest of the sources that went into the buildroot, including the list of RPMs that came from the repositories, with name, version, and release, so that if we ever needed to, we could recreate a Koji buildroot repository to feed into the build. It also records the specific git URL, including the hash value, so we could go ahead and completely recreate that same image output using the same set of inputs. I won't guarantee bit-for-bit binary compatibility, because I don't think anybody does that; I don't think anyone's trying to, though there have been discussions on whether or not that's a goal worth chasing.
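As a hedged sketch of that reproducibility story (the commit hash here is made up), re-creating a build from the recorded metadata is essentially:

    fedpkg clone docker/cockpit
    cd cockpit
    git checkout 8f2a9c1        # the exact commit ID recorded in Koji
    fedpkg container-build      # rebuild from the same recorded inputs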
So that's the workflow; I'll find out after the talk what just broke in my demo. We're hoping to go live with this very soon. It's in production in terms of the infrastructure; we're waiting on some bookkeeping components to go into Bugzilla, and we have packaging guidelines that we're iterating on. If you have any input on those, please join us through the Cloud Working Group, either in our meetings or on the mailing list, because that's where we're driving this; containers tend to tie into the cloud topics, those kinds of things, so that's where we do it.

[Audience comment about maintainers being able to create these soon.] Right: we have everything we need more or less in place; we just need the remaining pieces to land, and then we release it to the world. Now, I will say, with a caveat: we will not be unleashing the floodgates initially. We're going to roll into this slowly, because we don't yet have any real knowledge of what our build capacity is. That's actually what I'm going to start working on in the following two weeks: the scaling of the plumbing of our build system, to be able to open the flood gates. So there will be a little lead time before we just open it up for all the things.

[Audience: what about mirroring?] That's not solved yet. Building out the mirrors, how we're going to manage our registries, those kinds of things, that's another initiative we're working on. Randy Barlow is probably going to take the technical lead on that, and I'm going to assist as best I can, but we're going to work that one out too, because mirroring is a hard proposition.

[Audience question about labels.] For anybody who actually wants to learn the labels and everything else we need for building, that's part of the workshop I'm doing on Thursday, so we'll follow up on that.

[Audience: what can we do to make sure people reuse layers as much as possible?] Packaging guidelines. There's currently a set of guidelines out there that I wrote, purely because they needed to be written, not because I think I have the best guidelines to give everyone; somebody had to write something down, so I did. And we want a lot of feedback, so if anybody has feedback, please provide it, because I'd love ideas better than mine. Basically we're going to go through a process similar to what you have for RPM review: your layered image will get reviewed, and we'll go in and make sure you've got the right files in place, that the sources you're pulling from are allowed, that there's nothing we can't distribute, that kind of thing.
On labels: I'd like to get those into our guidelines. There's a handful of labels already listed there, Name, Version, Release, BZComponent; we've only included the required ones, and we're not just recommending them in the build system; the build will actually be kicked out and fail if it's missing any of the required pieces.

[Audience: you mentioned packaging guidelines; how do these relate to the normal packaging guidelines?] So, it hasn't gone live yet, but there's a set of guidelines for Docker that is going to be separate from the RPM guidelines. They're similar in workflow, but it's a separate set, mostly because we understand that a packager isn't inherently going to be an expert on containers, and vice versa: someone who knows containers and Dockerfiles isn't necessarily going to know the intricacies of RPM spec files. We do want to keep them parallel, but not necessarily lumped together, because we already have a lot of workflow and process around packaging in general, and we don't want to drag all of that back in; we want to be able to keep them separate.

[Further discussion of the review requirements, partly inaudible.] When I sent the initial draft to the devel list, I think I got one person who gave me feedback. So I submitted it to the list again, and I think I got one or two more people. In other words, I have announced it from the mountain tops, and I'm happy to follow up on it; nothing is set in stone. A big thing, about not only Docker itself but also the build system, is that all of this is in flux, because Docker as a technology, and as an ecosystem and user community, is constantly changing and iterating. We will be doing the same with the build system and our guidelines and everything else, so if you have ideas, please let us know; nothing is set in stone, and we are open to change: bigger, faster, iterative, all of that goodness.

[Audience comment thanking a Cockpit developer in the room.] Yes, thank you; he was on the team that originally wrote Cockpit, which is a huge component of this demo.

[Audience: what about the existing Fedora images on Docker Hub?] So the question was: all of the things that are under the fedora namespace on Docker Hub right now, will we be doing something to update those? Yes, we will. The working group that maintains that set of things is already tracking it: once we're able to, we're going to migrate all of the Fedora Dockerfiles into this system, and we're going to tie in automation so that when an image is released, it will be mirrored out to the Hub. So people who don't want to use our registry can still use the Hub, and they will still get the updated content. That is on the roadmap; it's in the plan,
it's just now getting there. And that's our thing too: we will have our registry, and we want to do nightly updates for people who want to pull the latest and greatest image, but we are also going to do a tie-in to automatically upload into the Hub.

[Audience: when does this go live?] It's not officially out yet; very soon, hopefully, maybe next week.

[Audience: can the docker package default to the Fedora registry?] Yes, we can; that's a config change. We can set the default in our docker package to point to our registry first and then fall back, and once we have the scaled-out mirroring set up for the registry, we'd love to do that.

[Audience: what about scanning? Will you scan the images before you release them?] Scanning, we would love to do; we haven't gotten there yet. The atomic scan command, the OpenSCAP integrations, those kinds of things, we'd love to use for security scanning. Is Patrick in the room? Patrick has brought up to us that this is something we do want to do; it just hasn't been done yet. What will probably happen is that it becomes a component of the test phase: we'll run a scan before anything goes out to the public. So the question was whether we're going to do security scanning on the images, and yes, we want to.

[Audience: the current draft guidelines say anything that goes into the image has to be available in Fedora; what about things that aren't?] Not available yet, no; we can't do it yet, the build system will literally reject it. Well, you can game the system by just dropping blobs into dist-git, but we can audit that and yell at you for it. What you cannot do is add remote things, because we have the network locked down.

[Probably the last question: can you mix releases, say run an F24 container on an F22 host?] Yes, you can actually do that; you can run an F24 container on F22 if you want. That's kind of the goal of Docker, to allow that mix and match. Absolutely.

All right, that's it. Thank you all so much.