Hello, my name is Michael, and my topic for the lightning talk is RPM spec file templating with the rpkg utility, preproc, and rpkg macros. That's a pretty long title and I have only 10 minutes, so I hope I can squash it in, but it should be quite simple.

So first, RPM spec file templating, that's the main topic here. Let's look at how such a template can look. You can see this is a spec file, but it has some additional syntax in there, those brace triplets. In those triplets there are basically macros that render content, some parts of the spec file, dynamically from other data; in this case from the git metadata of the repository the spec file is placed in. So those macros will basically read git metadata and, for example, render the repository name, or, in the case of this macro, output the contents of the messages of git annotated tags. So you can store your changelog in git annotated tags, and when you create a new tag, you can populate the message that is going to be stored in the tag from the individual commit messages. This way you can reduce the manual work that is inherent to maintaining spec files: you can have some data stored in git, take that data, and put it into the spec file wherever the two are supposed to be the same. So it should basically make packaging easier.

I will try to explain how it works, and I will work my way from the bottom up. There are three components. The most basic one is preproc, which is basically the template engine implementation itself. It doesn't have anything in common with RPM or with git; it just reads text files, looks for the brace triplets, executes their contents, and replaces each triplet with the stdout of that execution. So it works like bash command substitution, basically: it executes the command, and its stdout becomes what is put into the text in the end.

This is also a template that preproc can render, and the command looks like this: I just feed this template into preproc, and stdout goes into an info text file. Done. I have that file open here; I will reload it, and you can finally see my email address there. So this is how preproc works.

Now, rpkg macros are macros for preproc, macros that preproc can execute, and they are intended to be used in spec files to render some parts of the spec files dynamically. I've implemented a set of those macros, the git rpkg macros. They basically do what I said in the beginning: they read git metadata and render, for example, the version, or a git changelog as the RPM changelog, or the name or release, into the spec file. The implementation of those macros is in the file macros.git.bash, and I can pass it to preproc as an input parameter telling it where it should look for the macro definitions. This way I can teach preproc to execute those kinds of macros and to process text files that contain them. I also pass an environment variable there with the input path; this is necessary for correct operation of the git macros I've implemented.

So I will run it, and it will print the rendered spec file to stdout in this case, and you can see this is a different spec file than the one I showed you initially; the template I rendered it from is here, and it's the spec file of the rpkg utility itself.
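As a minimal sketch of that flow (the macro names are illustrative, and the exact preproc option for loading macro definitions is an assumption here; the talk only says the file is passed as an input parameter):

    # A toy template; each brace triplet is executed like a bash command
    # substitution and replaced by the stdout of its contents:
    cat > demo.spec.tpl <<'EOF'
    Name:    {{{ git_name }}}
    Version: {{{ git_version }}}
    EOF

    # Render it with preproc, pointing it at the macro definitions
    # (option spelling assumed):
    preproc -s macros.git.bash demo.spec.tpl > demo.spec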
The real rpkg spec file is a bit more complex, but you can see I also use the git changelog macro there, and it got translated into a full, valid RPM changelog in the end. Also, the name and version have been rendered at the beginning: rpkg-util, where the name is basically the base name of the remote URL of the current branch; you take the base name of it, strip the .git suffix, and use that as the name of the package. The version is derived from tag names.

So finally, what is rpkg? rpkg is a tool, a packager's tool, that uses preproc, uses the rpkg macros, glues it all together, and provides some commands on top of that, for example a command for building an SRPM from a spec template. So you can chain the operations with rpkg: you don't need to first render the template and then call rpmbuild, you can just call rpkg srpm and it will do all of that, but if you want, you can do all the steps manually as well. rpkg nvr renders the full name of the package, with the version and the release included, and rpkg tag creates a new tag. You can see an editor opened, and I can edit the content that will be put into the resulting annotated tag. I will just use this, even though it's pretty ugly, because it has this "macros" prefix, which indicates what part of the code the message relates to; but let's skip that. It created a new tag, and it produced a new spec, rendered from the updated state of the git repository, and you can see that the changelog is updated and the version is also nice now; it's just 3.0. That's it, thank you.
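Putting the rpkg commands from the demo together, an end-to-end flow looks roughly like this (all three subcommands are named in the talk; everything else about the invocation is illustrative):

    # Create a new annotated tag; its message, pre-filled from commit
    # messages, later feeds the generated changelog:
    rpkg tag

    # Print the rendered name-version-release of the package:
    rpkg nvr

    # Render the spec template and build a source RPM in one step:
    rpkg srpm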
So, the main goals of the project are to improve household router cyber security. It's marketed as more than just a router, "the open source center of your home". Both hardware and software are open source, and it's all publicly accessible from CZ.NIC's git hosting.

To improve home cyber security, the most important of their goals, what makes them special compared to other routers, is first that they provide automatic unattended updates and security patches, and they declare they will do that forever. This means that if there is, for example, any issue with SSL or with the router's kernel (because it runs on OpenWrt), they usually provide an update within a matter of hours.

The second distinct thing is that they gather network intelligence. They detect new threats, collect that intelligence from the routers, and act upon it. It means they provide rules for the routers, which can be downloaded automatically and protect against newly discovered threats. Again, it usually works within a matter of hours from the detection to the creation of the rule and pushing it to the routers. The rule-pushing part is automated, but gathering the network intelligence requires explicit opt-in. They are very concerned about end users' privacy, so they require explicit opt-in even though the gathered information is strictly anonymized; they don't collect IP addresses.

Another marketing point is that they say it's the center of the digital home. The routers are pretty versatile and extensible; the extension kits are usually LTE, NAS, a hacker pack, and others.

Some history: the Turris project started in 2013, and there are three Turris products so far: the first Turris router (the 1.x series), Turris Omnia, and Turris MOX. Omnia is a compact device, as you see; MOX's unique point is that it's modular. The modules can be connected into a chain, and as you see, even the plastic box is modular.

The first Turris router was an invite-only project for 1,000 users. It was a pilot, a proof of concept, and it proved that the cyber security goal is achievable. The router was provided for free with a two-year lease contract for something like one Czech crown, but the manufacturing cost was pretty high; it used a PowerPC CPU. Because of the very high demand, they decided to produce Turris Omnia. People were approaching them saying, hey, we are going to pay you for Turris 1.x, we want it, but they were not able to provide it in higher quantities, so they decided to launch a crowdfunding campaign, which was very, very successful. I believe they got something like one and a quarter million US dollars on Indiegogo, while the initial goal was just one hundred thousand dollars. Omnia is currently available on the commercial market for something like just under $300 without tax. Turris MOX should be cheaper, because Turris Omnia's price is prohibitive for regular users. So it should be cheaper, modular again, again a successful crowdfunding campaign, and delivery is scheduled this month. I hope to receive mine; we'll see.

The vendor is CZ.NIC. The long-term goals of the project and the continuous updates require stable funding, and R&D funding is not achievable via the sale of hardware, because it's already very costly and the price covers just the material costs. So R&D is funded from other sources. CZ.NIC is a non-profit organization founded in '98 by academic members. It's a founding member of the internet exchange in the Czech Republic, it's the registry of the .cz top-level domain, and it's an established open source vendor: they produce BIRD and Knot DNS, and they run the national CSIRT team.

The Omnia hardware: the CPU is rather powerful, high-end in terms of SOHO routers, a dual-core ARM. It has two gigabytes of RAM, eight gigabytes of eMMC flash, and a LAN switch chip with five one-gigabit Ethernet ports plus another port which is shared with an SFP slot, so it can act either as one more one-gigabit Ethernet port or as SFP. It's also possible to connect mSATA or other mini PCIe extension cards. It has an RTC with a battery, a crypto chip, USB, GPIO pins, and SATA, and it's manufactured here in the Czech Republic. The included extension cards, if you buy the router for those three hundred dollars, are 2.4 and 5 GHz Wi-Fi cards with three detachable antennas. The only part which is not open source is a binary blob from the vendor of the 5 GHz card.

Optional extensions: any mini PCIe or mSATA card, as long as it fits in size and is supported by Linux; it runs on OpenWrt, so there is a Linux kernel underneath. What is provided by CZ.NIC and tested to work is an LTE modem and a two-port SATA controller.

Software: the default is Turris OS, a spin of OpenWrt. The OS is the same for Turris 1.x, Turris Omnia, and Turris MOX.
There are also deployable images of Debian and openSUSE, and a vanilla Linux kernel runs on it too. Not all devices are fully supported yet; for example, the switch chip can use just one of its two one-gigabit links. Otherwise the support is fully in the kernel.

What they add on top of OpenWrt is the automatic upgrades and patching, and a Btrfs file system with automatic snapshots at boot, which are used for recovery in case anything goes wrong. You can use any standard OpenWrt packages and repositories. They also add additional packages for the dynamic network protection mentioned at the beginning. As for the GUI, Foris is their own GUI, which is even easier; it's designed for standard, regular users who cannot use LuCI, because that's still too complicated.

Performance benchmarks they did: IPsec at 300 megabits, OpenVPN at almost 100 megabits. CZ.NIC claims they measured almost one gigabit of throughput with NAT turned on, and 300 to 450 thousand packets per second. The recovery capabilities are very broad: you can roll back to a previous image, you can factory reset, you can re-flash the router from USB, you can boot from the serial console. It's very hard to break that router. This is the board, and that's it. Questions? No, there is nothing like a high-capacity battery inside, but it is extensible in a lot of ways, so that's up to you. Okay, thanks.

I am Vasek Pavlin. I work for Red Hat, I studied at this university, and this is the brief history of the rubber ducks at FIT. So first, a question: who hasn't seen the glass box with the ducks over there in building E? Hasn't... hasn't... okay, so we have like 10 people. So after this talk, you need to go there, you need to look at it. It is a once-in-a-lifetime collection of "rubber" ducks, but I have the "rubber" in quotes, and that's very important, because I'm going to talk about it a bit later.

So this talk is not about technology, I'm sorry about that. I know you came to DevConf mostly for technology, but there are also non-technology talks here, so it's fine, it's fine.

So what is it about? For those who didn't see it, this is the glass box with all the rubber ducks, and there is a story they tell on this paper, and it brings some controversial conspiracy theories. So let me tell you what is there. It asks: why the ducks, right? The rubber ducks in the IT world serve as a help for a programmer: if you want to debug something and you don't have anyone to talk to, you can talk to the duck and explain the problem, and by explaining it you will realize the solution, right? It basically says that this is why the rubber ducks came into existence here at FIT, because this is a school mostly for programmers, so that's why we put the rubber ducks there in the fountain. It very well explains why the rubber ducks keep appearing in the fountain after people finish their final exams, either bachelor's or master's. I would prefer people did that just for the master's, when they are leaving the school, but that's possible as well. What I'm going to say about that is that this explanation of why the rubber ducks are there is completely wrong. This is not the reason the rubber ducks are here at FIT. The reason is that I wanted to put something in the fountain in 2012. We had the idea, when we were finishing our studies here with my friend, that we should probably do something crazy, right?
So we thought of going to the fountain, swimming there, grabbing a beer, taking a chair, doing something fun like that. And then we talked to this gentleman who is, like, maintaining FIT, keeping it running, making it as awesome as it is. You might have seen him roaming around... he is there? Okay, that's good... roaming around the hallways and fixing all the stuff that is going on. We somehow mentioned this idea to him, that we wanted to go swimming there, and he was like: are you crazy? I know what I'm putting in there, I wouldn't ever stick my feet in it. So, okay, maybe it's not a good idea to do that. So we were like, okay, let's figure out something else. And I'm not sure who it actually was, whether it was me or my friend or my ex-girlfriend, but someone came up with an idea: we wanted to do something in the fountain, so let's just put something floating into the fountain, and what floats best is a duck.

The catch there is that this is not a rubber duck. It is a very hard plastic duck, so it has nothing to do with the rubber ducks of programming and stuff like that. It's just a duck that can float in the fountain. And these are actually the first four ducks that were there in 2012; this is another proof that there was nothing else in there. So basically this disproves the story that it's because of rubber duck debugging, blah blah blah; it has nothing to do with that.

I thank Mr. Yutichek a lot, because he took care of the ducks very well: he took them out for the winter and put them back in the summer. Actually, the second year, in 2013, after I finished, I was curious whether the ducks were still there, so I came back and I saw my ducks. So I just used the marker again to make my name more visible on them. And I think in 2013, 2014, people started to put their own ducks in. He always fished them out and then put them back, and then there were too many ducks, so he doesn't really put them back anymore; they just appear organically, let's say.

So I would like to appeal to FIT, to the university: pay some tribute to the oldest duck in there, the old guy, the big one. He's buried under those other ducks, and he probably deserves more credit for leading this revolution. There is this sign saying "2014 and older"; I think there should be a "2012" one, and it should have its own shelf. I'm not pushing for anything, it's just an idea that I would like to present. So if you want to tweet at FIT VUT, you might want to do that and make them do something about it, because I think it's sad that it's just buried there. I don't know, send the picture and say: take this duck out of that mess and make it much more visible. It's @FIT_VUT, I think. Just saying; I mean, it is recorded, yeah. Okay, I'm done. Well, I'm done, thank you very much, and there are more stories with the ducks I can talk about, so find me at the party.

Yeah, I'm just gonna talk from memory, okay. So basically I wrote a project called Jenny, and what Jenny allows you to do is to run your Jenkinsfile directly on your system, and there is a reason for that. The point is, if we're actually using Jenkins... so, who of you actually uses Jenkins and tries to write stuff? Nice, okay. Good, good.
So you're in the right spot now. The thing is, there are some problems if we're actually using Jenkinsfiles. Jenkinsfiles appeared with Jenkins 2, and they allow you to write your whole build pipeline in a single file, which in my opinion is amazing, because you can actually specify how the project is being built. There is a problem with that, though: you can only run it on the Jenkins server, right? So what happens on your system is that you set up a configuration which is kind of what you would use for development; you set up, I don't know, a Python server or whatever you're actually using, a Maven build, and so forth. But this is different from the build you're actually going to use in the build system, and this made me think. Because, well, I'm actually not using everything from Jenkins. I'm not using plugins, and I strongly recommend against them; even the Jenkins people are not so happy about them, they have like over 10,000 plugins and they cannot maintain them. So I'm not using plugins, I'm not using any kind of crazy constructs. I'm actually building Docker containers to set up my tooling, so the first stage of my Jenkins pipelines would be to just build those containers, and afterwards I run the compilations or the tests, whatever I have to do, and so on and so forth, and in the end I publish them.

Since these are kind of my workflows, I realized, well, actually, I can use a thing called shared libraries. Who of you knows shared libraries from Jenkins? Wow. You should really look into shared libraries; that's super sad, super sad. The reason why you should look into them: shared libraries are projects which can live outside of your current git repository, so you can actually have a different git repository, and you can extend the vocabulary of the Jenkins pipeline. (We might get some slides soon.) So instead of having only super simple commands like sh or node or whatever, you can define commands that aggregate other Jenkins commands. What it means is you can run a few shell scripts, you can run another build inside, so on and so forth, and you can parameterize them as well. Okay, so you can extend your vocabulary, so you can have stuff like "run tests" or whatever as a specific word.

If you do that, you may also realize at some point: well, I can actually move my whole pipeline into another git repository, and then I can share it across projects. So my Jenkinsfile in every single project that I'm using is actually like 10 lines of code, and it does a crazy amount of things, right, including builds and so on. So right now I've managed to have over 20 projects, all of them having their own pipelines, full CI/CD, feature branches for everything, and everything happens with pushes.
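To make the shared-library idea concrete, here is a minimal sketch; the library name and the custom step are hypothetical, not taken from the talk:

    # A roughly 10-line Jenkinsfile that delegates the real work to a
    # shared library living in its own git repository (names made up):
    cat > Jenkinsfile <<'EOF'
    @Library('my-pipeline-lib') _
    // "pythonPipeline" stands for a custom vocabulary word defined in the
    // shared library, aggregating container builds, tests, and publishing:
    pythonPipeline(name: 'felix', publish: true)
    EOF

The same short Jenkinsfile can then be copied across projects, with only the parameters changing.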
And I would have a beautiful demo to show you; that would be nice, right? I'm using Colemak... I'm using Colemak for some reason. Yeah, sorry. Yeah, that's another fun thing. It's great, I know, the benefits are undoubtable, but I'm not skilled enough to use it. Okay. So the point is... can you hold my... okay, thank you. So the idea is: what if you can actually move the whole pipeline to another place, and what if you can actually run it locally, right? This is the whole point, because if you can do that, then you actually have the capability... hey, did it appear? Yeah, it did. Yes. Unfortunately, I'm still struggling; I just installed Ubuntu 18.04, and... a great, great experience. No, that's not gonna happen. Yeah, no. So... okay, now we also have this. Okay, so I'm gonna go straight to the demo, because I think I have like two minutes left.

The point is, you can actually have something like this: you have the stages and so on and so forth, if only Vim would refresh... yeah, there's no text in here, good job, Vim. Okay. So you can actually have the pipeline, and you can actually just run it. So you can do something like "jenny"; this one has an intentional error, it doesn't check out the code. So instead of pushing it randomly to my git server, I can actually just run it with Jenny, and this one will try to execute it locally, and it's gonna fail, because there is no Jenkinsfile. And of course we can commit it... oops, actually, let me undo, because this was the checkout; we can undo it. We can run "inspect", so we can actually see what is in that Jenkinsfile; we can do all kinds of analysis, including for shared libraries. So in my case, this is how a pipeline would look, and this is how I'm using it from the outside, right? I have the full pipeline externalized in a shared library. So my project, for example Felix, which is a Python thing that at the end builds this binary, is basically just this. And if I want to do another Python project, I have another one, but it doesn't matter: you basically just define what the name of the thing is, and so on and so forth, and the whole pipeline is actually shared across the projects, and it lives in this beautiful project called jenkins-lib or whatever. And in there I have a bazillion steps for the pipeline.

And what I can do is actually go to the project... I cannot drop the mic now, but I'm gonna try to show you how I can inspect it and see what's going on. I was calling it germanium-build-monitor until I came up with the name; I call everything germanium-something. Okay, so this one is going to run inspect, and what it's basically going to do is evaluate the Jenkinsfile and show me all the steps. I can select what steps I want to run, and so on and so forth, and you can check it out on your own. The whole point is that you can run it locally. Whenever you do a git checkout or an SCM checkout, it's not going to actually check out; it's just going to copy your working tree. So if you have other changes besides the Jenkinsfile, they will be applied, and you can iterate much, much faster. This was it, basically.
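As shown in the demo, the local loop boils down to two commands; the subcommand name is as heard in the demo, and any flags or output are elided:

    # Execute the project's Jenkinsfile locally; a git/SCM checkout step
    # only copies the current working tree, so uncommitted changes apply:
    jenny

    # Evaluate the Jenkinsfile (shared-library steps included) and list
    # the pipeline steps without running the whole thing:
    jenny inspect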
I'm Josh Kinlaw. I'm a senior software engineer for Red Hat; I work on Red Hat Insights, and I've worked on the deployment pipeline and had a good time with that. So, for the occasion of anyone unfamiliar: a hybrid cloud is a combination of a public cloud and a private cloud. The public cloud allows you to serve content to the general public, whereas a private cloud allows you to serve content to a select group of users. So you might be asking the question: why? If you can already serve to everyone, why wouldn't you serve all content to everyone?

Well, it's nice to have a separate production server where you serve all your application content to the public, and then have another server that allows you to run your CI environment and your QA environment, and to keep that separate from the public, so that if you don't have something like SSO completely implemented, you can still test your application and its different features there. So that's a good reason to have it in the private cloud.

So let's talk about how you could do that. Deploying to the public cloud: if you're hosting your repo on GitHub, the way we are, then you can use an integration service like Travis CI or Jenkins to build the application and then serve those files to the public cloud. That's pretty easy with a Jenkins server: the Jenkins server can take those files and just push them to your www directory, or however you want to serve them. For the private cloud, though, if you have your files in a GitHub repo, you can't really communicate with the private cloud as easily: you can't set up a webhook inside GitHub to communicate with that private cloud, and you also wouldn't want to push straight from Travis CI.

So the way we looked at it was... let's go to the next slide. Okay, well, if we can't get the diagram, I can still talk through it. The way we looked at it: we took the application server on our private cloud, we cloned a separate repo with the build files in it, and we watch that repo; every time the build files change, we update the private application server. If I could get the diagram, it would be a lot easier to visualize. Okay, so the way it works is: we have Travis CI build the application from GitHub every time either the production branch, the CI branch, or the QA branch changes, and it then pushes that build over to the build repo, which just houses all the build files for that application. From there, the private cloud application server watches that build repo, and any time it sees a change, it pulls the files down and copies them over to the www directory, so that you can serve those files out of the application server. At the same time, we have a webhook on that same build repo, and any time the production branch is updated, that webhook pushes the changes to the Jenkins server in the public cloud, and that Jenkins server can then rsync those files over to the production www directory. So we have the application git repo, we have the build-files repo, and then we have the public cloud and the private cloud, and they both communicate basically through that build repo. Yeah, I mean, that's pretty much it from a simplistic view.
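A minimal sketch of the watcher on the private application server; the branch name, paths, and polling interval are illustrative assumptions:

    # Poll the build repo and deploy whenever new build files are pushed.
    cd /opt/app-build || exit 1          # local clone of the build-files repo
    while true; do
        git fetch origin qa              # the branch being watched
        if [ "$(git rev-parse HEAD)" != "$(git rev-parse origin/qa)" ]; then
            git reset --hard origin/qa   # take the freshly pushed build files
            rsync -a --delete build/ /var/www/html/   # serve from the app server
        fi
        sleep 60                         # crude polling interval
    done

In practice the same loop could just as well be a cron job or a systemd timer rather than a long-running shell script.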
On top of that, a problem that we had is that we have multiple development repos contributing to the same application, and rather than changing the build process in all the different repos, we separated the Travis CI file and the Jenkins file out into a separate repo. From there, each application just curls that Travis CI file each time the build process changes, and then they can all use that same Travis CI file, which is hosted in the build-files repo. So any time we want to make a change to the build process, we don't have to go and update all the different build repos; we just update that one file inside the build-files repo. It kind of simplifies the process as far as making changes across multiple applications goes.

Are there any questions on that? I think that pretty much sums it up. I have five minutes left. Okay. Yeah, I mean, that's pretty much it. I had slides to go into it further, but going off the top of my head it got shortened quite a bit, I'm sorry. Another lightning talk? No, I'm sorry, I don't have another lightning talk.

Yeah, hi, I'm Valentin. I'm working at Red Hat in the container runtimes team; I just recently joined in December. I had some old slides, but my notebook battery ran out, so I prepared some paper ones, and it would be cool if this worked. There seems to be a tradition in our team to have live demos and show some bravery, so yes... awesome, thank you. So I'll try to do it like this first. Does it work? Yeah, cool. So this is the title: speeding up pushing and pulling of container images. It basically describes what we've been doing in the containers/image library recently, and also where we want to go, related to how we distribute, or can use the library to distribute, images.

So, the library in question... does it fit? Okay, gotta scroll a bit. Let's do it like this, let's do it with animations. Does it work? Yeah, cool. The library in question is containers/image. It's used not only in skopeo, Podman, CRI-O, and Buildah, but also in tools outside of, let's say, this community here. And it supports various kinds of transports: for instance, we can push and pull from a registry, from the container storage, from a directory (really just a filesystem directory where the images are exploded), and also OSTree... and I may have forgotten one or another transport... the Docker daemon, yeah, okay, I had to mention that at some point. So also the Docker daemon.

The idea was to speed up pushing and pulling, which in this case basically involves the registry and the storage, right? When we pull, we contact the registry, which in the end is a web server; we pull some data from the web server and explode it into the container storage. The other way around, we push it to the web server. We wanted to do this because we ran some benchmarks and profiling on our tools, and we figured out that especially pulling was a bottleneck, and I should explain why: because the code was serialized. If you execute code in a serialized fashion, one piece after another, it takes more time than if you parallelize it. So we were looking into how we can parallelize it. We succeeded, at least for pulling, which means that Podman 1.0 now pulls in parallel. We also found a better library that basically does the compression in a multi-threaded fashion, so now we can pull 50% faster, and this is pretty cool. We vendored the code into Buildah 1.6, into CRI-O 1.12 and 1.13, and into the latest skopeo as well.
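For reference, the transports mentioned above can be exercised directly with skopeo; the image names and paths here are purely illustrative:

    # Pull from a registry into a plain directory (the "exploded" layout):
    skopeo copy docker://registry.fedoraproject.org/fedora:latest dir:/tmp/fedora

    # Copy the same image into local container storage (what Podman reads):
    skopeo copy dir:/tmp/fedora containers-storage:localhost/fedora:latest

    # Or hand it over to a running Docker daemon:
    skopeo copy dir:/tmp/fedora docker-daemon:fedora:latest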
Sorry for the slow animations. However, the pushes are still serialized. (Oh, sorry, you can interrupt me anytime, feel free.) The pushes are still serialized by locks in containers/storage. In containers/storage, we don't have the benefits that, for instance, the Docker daemon has: whenever data is written, we need to somehow synchronize the accesses to the data that we want to protect, read, and write. We don't have that for Podman or for Buildah, because we have a daemonless architecture, right, a fork-and-exec model, the more traditional fashion, and we didn't want to have a big fat daemon. So we had to somehow break the locks out onto the file system. How we did that is basically by using flocks, which in this case serialize all accesses. So although in theory containers/image can push to a registry in parallel, we still need to somehow read the data first, and this is effectively serialized.

So the idea that we have now is to transition the flocks into a reader-writer lock. I tried to animate it here; it should demonstrate that we can read the layers in parallel, while the write access (sorry, down here) has to wait until all the reads are finished. We also did some initial benchmarks, which showed that if we do that, we're 20% faster than what we're doing now, which would be really great. For sure we compared ourselves to Docker; the cool thing is that we were, or are, already pushing faster than Docker, and with the recent changes pulling faster as well. And this is pretty much it. Do you have any questions? Sounds good.
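The reader-writer idea just described can be pictured with flock(1), which already distinguishes shared from exclusive locks; this is only an illustration of the locking semantics, not the library's actual implementation:

    # Several readers can hold a shared lock on the same file at once:
    flock --shared /tmp/storage.lock -c 'echo reading layer A' &
    flock --shared /tmp/storage.lock -c 'echo reading layer B' &

    # A writer takes an exclusive lock, which waits for all readers to
    # finish and keeps new readers out while it is held:
    flock --exclusive /tmp/storage.lock -c 'echo committing layer'
    wait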
So, that was a lightning talk. How many people in here have ever heard of cgroups v2? How many of you have ever used cgroups v2? I think a lot of you are lying up there.

So, cgroups v2. Cgroups was written, or evolved, over many years; different people in different organizations basically just wrote cgroups into different parts of the kernel. If you ever listen to Lennart Poettering talk about it, he'll say that half of them really don't work, or are kind of lying, and people are relying on them. But we started rewriting it in the kernel many years ago, probably at least two years ago, and cgroups v2 has been worked on, and yet there are no distributions that have it on; there are no distributions in the world that have it on. The reason for that is that there's a battle going on, a battle between Kubernetes and the container world on one side and the kernel and systemd on the other. The reason for this is, well, the kernel and systemd have basically said they won't take any more patches on cgroups v1, because it's just a broken architecture, so they're adding lots of new features to cgroups v2, and systemd is fully embracing all of these. And the Kubernetes world is all written to the cgroups v1 architecture; it's all written to the way v1 is laid out, where you have full control. So basically, if distributions turned on cgroups v2 by default, all of a sudden every container in the world wouldn't work on your distribution. Containers are important enough now that all the leading distributions would not think of turning it on, and because of that, Kubernetes and containers are sitting fat, dumb, and happy, and there's no real reason to evolve. So we needed a carrot and a stick. (I did not get permission to put this carrot up; I found it on the internet.)

So, the carrot. The carrot right now is cgroups that actually work, or at least most of cgroups works as designed. Another nice feature of cgroups v2 is that we have delegation. If you saw any talks on Podman rootless: right now there's no cgroups support in it, and that's because non-privileged users are not allowed to manipulate cgroups. If I have access to the cgroup file system, I have full control, so I have the ability to adjust the cgroups of any other process on the system. But in cgroups v2 there's delegation, so you could imagine each one of your users getting some subsection of cgroups, and then they could further delegate down to, say, Firefox, and I can guarantee Firefox won't use up all the CPU... if anybody's ever dealt with Firefox running off into neverland, and all of a sudden you say, why is my machine performing horribly? There are also lots of new controllers; all new development in the kernel is adding new controllers to v2, and supposedly there isn't any v1 stuff getting accepted. So there are a lot of reasons for containers to take advantage of cgroups v2, but yet we still have stagnation.

So, in my opinion, we need the stick: we need to basically kick the cgroups world in the ass and say we're turning it on in Fedora 31. Fedora 30 is closing down, going to beta fairly soon at this point, but 31 is going to kick off, and I plan on opening a request to default the Rawhide version of Fedora to cgroups v2. Of course, I always have the caveat that if it blows up in my face and we don't get it done, we can always turn it off before 31 goes to beta. But I think we need to have a distribution that says: we want to run with cgroups v2; containers, finally fix the problem. And it's not only containers that are going to be a problem. There are lots of applications, you might even know some, that read the way cgroups v1 is laid out. I know that Java, the JRE, right now looks at the layout, at how much memory and maybe a few other fields, to figure out how many threads it's going to generate. So any code that is hard-coded to cgroups v1 is going to need to be ported to run on top of cgroups v2. So anyways, that's my little shtick, my little carrot and stick on cgroups v2. Anybody, questions?

Good idea, yep. Yeah, well, those things are all going to have to be worked out. So, the device cgroup: I know there are other mechanisms, eBPF and other things like that, that are around, and there's also no network device controller. But these questions... you know, Red Hat, a few months ago, was basically deciding what's going to go into RHEL 8, and in RHEL 8 there was a big push to turn on v2 cgroups. We all pushed back on that and said, first of all, v2 is probably not enterprise ready, and when I say enterprise ready: no distribution has ever run it. So we're going to take an enterprise-ready operating system and switch the control groups underneath it? Now, I'm sure Facebook and Lennart have tested it; they test well, and I'm sure it works fairly well. But I would still rather have a million machines running v2 before we try to say it's enterprise ready, in my opinion. So we missed RHEL 8; RHEL 9 comes out in three years. If we don't turn it on in Fedora this year, we're not going to make RHEL 9, and RHEL 9 then lasts for 12 to 15 years, which puts us at 2040 or something; I'll be in the grave by then. So it's got to happen, we've got to move forward. These things have to be figured out, and carrots aren't working.
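To make the delegation story concrete, here is a hedged sketch on a systemd-based machine booted with the unified (v2) hierarchy; the paths are standard, but the values are illustrative, and user-level control assumes systemd has delegated controllers to the session:

    # Boot with the unified hierarchy (how Fedora exposes cgroups v2):
    #   kernel command line: systemd.unified_cgroup_hierarchy=1

    # Controllers available at the root of the v2 hierarchy:
    cat /sys/fs/cgroup/cgroup.controllers

    # systemd can delegate a subtree to each user session, so an
    # unprivileged user manages cgroups only under their own slice:
    cat /sys/fs/cgroup/user.slice/user-$(id -u).slice/user@$(id -u).service/cgroup.controllers

    # For example, cap Firefox's CPU from an unprivileged session:
    systemd-run --user --scope -p CPUQuota=50% firefox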