Okay, so hi everybody. I'm Timothée Ravier, from the CoreOS team at Red Hat. I'm Sherine Khoury, also from Red Hat, on the customer focus team. Hello, I'm Alessandro, from the Multi-Arch team at Red Hat. And hello, I'm Christian, on the OKD Streams team in the OpenShift org at Red Hat.

All right, so we're here to talk about how we're building Kubernetes distributions, and doing that the cloud native way, using OKD and Tekton Pipelines.

First up: what's OKD? For those of you who don't know, it's the sister distribution of OpenShift, which is a Kubernetes distribution. We're not just bundling Kubernetes; we bring a lot of things on top. One of the main things we bring with OKD is the operating system itself: OKD is based on Fedora CoreOS, so when you set up an OKD cluster you set up everything at the same time. You set up the system, you set up Kubernetes, you set up the applications on top, the Operators, all the very nice things that make developing on top of Kubernetes much easier. And you manage the whole thing as a single entity: when you update your cluster, you update everything at the same time, including the operating system. So this is a very nice experience of a managed Kubernetes distribution, and that's the main thing about OKD. OKD is a community project, and as with everything here, it's open source.

So far it has been based on Fedora CoreOS: the core, the essence of OKD, the operating system, was Fedora CoreOS, which is based on Fedora and is an official Fedora variant. The main thing behind Fedora CoreOS is automatic updates: we try to bring new content, updates, fixes, security fixes, and new features all the time, as they land in Fedora. The benefit underneath Fedora CoreOS is that you're working with immutable infrastructure. You do provisioning via Ignition: you write a config describing what you want to have on your system, and from the first boot you get a system provisioned and configured as you like, with your containers and your configuration. That's what we've been using in OKD. Fedora CoreOS itself is available on a lot of platforms, a lot of architectures, a lot of cloud platforms; you have the list here, and I'm not going to repeat all of it. We have support for almost four architectures right now: x86_64 of course, aarch64, s390x, and PowerPC coming real soon.

So let's take a quick step back and look at the enterprise Linux ecosystem. Among the distributions we have, there is Fedora, which is the upstream: it changes rapidly, it has a lot of new features and changes coming in, experiments, sometimes things that may or may not work. It's where the community experiments and tries things out, where we land new code, new software, new features. Then some of that moves into CentOS Stream, which is the shared space where we want to define where the next version of enterprise Linux is going to go: which features do we actually want to have there, which changes, et cetera. And then finally you've got the Red Hat products, Red Hat Enterprise Linux and all the other variants, which are products made by Red Hat.
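[Editor's note: to make the Ignition provisioning model mentioned above concrete, here is a minimal sketch of a Butane config, the human-friendly YAML that the butane tool compiles into the Ignition JSON a Fedora CoreOS machine consumes on first boot. The SSH key and the container unit are illustrative placeholders, not taken from the talk.]

variant: fcos
version: 1.5.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...example   # placeholder public key
systemd:
  units:
    # Declare a workload at provisioning time: a systemd unit that
    # runs an example container via podman on every boot.
    - name: hello-container.service
      enabled: true
      contents: |
        [Unit]
        Description=Example container started at boot
        After=network-online.target
        Wants=network-online.target
        [Service]
        ExecStart=/usr/bin/podman run --rm quay.io/fedora/fedora:latest echo hello
        [Install]
        WantedBy=multi-user.target

[The machine reads this once at first boot, and from then on the system is updated atomically as a whole, which is the immutable-infrastructure property described above.]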
So where does CoreOS fit in all of this, where do our CoreOS editions fit in this picture? So far we had only two: we had Fedora CoreOS, which is based on Fedora as I said, and we had RHEL CoreOS, which is based on RHEL and is part of OpenShift. We are now introducing a third one, CentOS Stream CoreOS, which is the version in the middle. It keeps the same idea, the idea of a minimal operating system with just what's needed, the container stack, what's needed for Kubernetes, and puts that on top of CentOS Stream. So this is what the ecosystem looks like right now: Fedora CoreOS, the version specifically for running containers on top of Fedora; CentOS Stream CoreOS, which is based on CentOS Stream; and finally Red Hat Enterprise Linux CoreOS, for OpenShift.

OKD on FCOS, Fedora CoreOS, has existed for a couple of years now. Basically, Fedora CoreOS is two to three years ahead of RHEL CoreOS, and that sometimes leads to situations where features land in FCOS and break some OKD components, and the maintainers of those OKD and OpenShift components don't really have the bandwidth to look at it. Basically, that's really sad. If we look at OKD on SCOS, CentOS Stream CoreOS, it's a win-win situation, because SCOS is going to be about six months ahead of RHEL CoreOS, so the component teams really benefit from running their components on SCOS: they get a really early signal of what's going to happen when they land on the new RHEL CoreOS for OpenShift. And we're living in really interesting times for this, because infrastructure changes and features are coming very quickly, and we want them in the software as early as possible; that means we now need to deliver the OS faster. That wasn't the case a few years ago. If you think about the latest OpenShift version, 4.13, we delivered it nearly in the same time frame as RHEL 9. That was a first, and we're still learning.

And that's where this team, OKD Streams, comes in. We here, along with other colleagues, are not all from the same team; we just gathered together to make OKD Streams. Our goal is not to build OKD. Our goal is to build the tools that the community needs to build OKD, and that means OKD from the ground up: starting from CoreOS, any flavor of CoreOS, whether it's CentOS-based or something else you want to test, up to the OKD component releases, and later the Operators that run on OKD. Why not also go crazy and replace some component of OKD with something of your own, to experiment?

So when we started on this journey of building these tools, we had an easy option, which was to use Prow CI. I don't need to introduce Prow CI to you; it's beyond needing any pitches. It's developer-centric, and the problem with it was that the Prow CI instance we use for building OKD on FCOS is not accessible to the community. That's why we chose to switch to Tekton. Firstly, Tekton is cloud native, and secondly, it has a very, very powerful and active community around it.
You can see that just by looking at the quantity of tasks that are ready for you to use in the Tekton Hub. And since it's cloud native, all the resources you usually use, secrets, volumes, ConfigMaps, are ready for you to use, and they come with a really low learning curve for us. That's why we switched to it.

So here I'm going to introduce two of the pipelines that we have. The first one we're seeing here is the one that builds CoreOS. It can run on any Kubernetes distribution, including kind if you want: all you need to do is clone the repo that you have at the bottom there, use kustomize to apply everything to the cluster, install the Tekton controller, and you have everything ready to build CoreOS. Most of the tasks you see here are based on a container that we call the cosa container, the CoreOS Assembler container. It's basically a toolkit for building CoreOS: it comes with everything you need, wrappers around rpm-ostree, building, testing, building extensions (we extend the build to the live ISO, to bare metal, to OpenStack), and the results are available for you to use on S3.

The next pipeline we see here is the pipeline we use to release OKD. Before we get into that: today, the OKD components are still built in the Prow system, but Alessandro is actively working with other colleagues to deliver an OKD payload pipeline, so you'll soon be able to build those components in Tekton as well. This release pipeline is fairly easy: it just queries the release controller on the Prow CI cluster to get the tag and digest of a valid and verified release, then basically signs the release, mirrors the components, generates all the release notes and the other things we need, and updates the channels so that you can upgrade your clusters as well. Next I'm going to hand it off to Alessandro.

Okay, so what Sherine introduced so far are the pipelines we use to build CentOS Stream CoreOS, the base OS; we are working on the pipeline to build the payload itself. Well, what triggers the pipelines? Today we are again using Tekton as the base for that. Tekton provides an extension controller with a few custom resources under the triggers.tekton.dev group, with which you can define, in an event-based fashion, how to trigger your pipelines. The main resources are EventListeners and Triggers. An EventListener exposes an HTTP handler to which an event producer can send requests with a payload and information about what the event is and how it is structured. Triggers then define what we want to do based on those events. They are made of three other resources: TriggerBindings, TriggerTemplates, and Interceptors. TriggerBindings let you map data from the event payload onto the PipelineRun we are going to create. TriggerTemplates define the PipelineRuns, the TaskRuns, whatever Tekton objects you want to create. And Interceptors are a more active way of mapping events, or of filtering whether or not to run your PipelineRuns. Events can be whatever: they can be events from a repository, or they can even be periodics. What we currently do is run a periodic cron job to build the base OS.
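[Editor's note: as a rough sketch of how those three resources fit together, here is a minimal Trigger wiring in YAML. All the names, the "stream" parameter, and the service account are hypothetical placeholders, not taken from the actual repository.]

apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: coreos-build-binding
spec:
  params:
    # Map a field from the incoming event payload onto a pipeline parameter.
    - name: stream
      value: $(body.stream)
---
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: coreos-build-template
spec:
  params:
    - name: stream
  resourcetemplates:
    # The Tekton object to create for each accepted event: a PipelineRun.
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: coreos-build-
      spec:
        pipelineRef:
          name: coreos-build            # hypothetical Pipeline name
        params:
          - name: stream
            value: $(tt.params.stream)
---
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: coreos-build-listener
spec:
  serviceAccountName: tekton-triggers-sa  # assumed to exist with Triggers RBAC
  triggers:
    - name: coreos-build-trigger
      bindings:
        - ref: coreos-build-binding
      template:
        ref: coreos-build-template

[Posting an event such as {"stream": "testing"} to the listener's HTTP endpoint would then create one PipelineRun with that parameter filled in.]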
What we want to get to as a next step is to leverage these triggers in order to support multi-arch images for CentOS Stream CoreOS. What's the problem there? Actually, what you can do right now is run the pipelines on a cluster of any architecture; there are no architecture-specific bindings that would prevent you from running the pipeline on another architecture, let's say at least amd64 and arm64 at first. The SCOS manifests in the openshift/os repository, from which we get the manifests and the configuration for CentOS Stream CoreOS, already have all the architecture-specific information you need to build for each specific architecture. And what you get from the pipelines are separate cloud boot images, one for each architecture and cloud provider, or ISOs for bare metal, and separate container-native images, which are single-arch container images. What we want is a single manifest-list container-native image, and we want to achieve this by using triggers, because Tekton does not provide any way to add node selectors or node affinities, to select which nodes your TaskRuns should use, at the level of the abstract Pipeline; you can only do that on PipelineRuns and TaskRuns, which are the instances of the abstract pipeline that Sherine described so far.

So the simple way is to just run the pipeline twice, say, one run per architecture, from the cron trigger I was talking about before. But then you still only get single-arch container images that you need to compose into a manifest list. We want to leverage an Interceptor and another EventListener: essentially, each time a build succeeds for a given architecture, we feed a ConfigMap, storing the architectures into, let's say, an array, and trigger the EventListener, so that the Interceptor defined within it can understand whether it's time to build and compose the manifest list. When it's time, that is, when all the single-architecture images have been built, the compose-manifest-list pipeline triggers and composes the manifest list from the single-arch ones.
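[Editor's note: a minimal sketch of what that gating trigger could look like, assuming a hypothetical event body that carries the list of architectures built so far; the CEL filter, the names, and the body shape are all illustrative.]

apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: compose-manifest-list-listener   # hypothetical name
spec:
  serviceAccountName: tekton-triggers-sa  # assumed service account
  triggers:
    - name: compose-when-all-arches-built
      interceptors:
        # The built-in CEL interceptor filters events; this trigger only
        # fires once both single-arch images appear in the event body.
        - ref:
            name: cel
          params:
            - name: filter
              value: "'x86_64' in body.builds && 'aarch64' in body.builds"
      bindings:
        - ref: compose-binding     # hypothetical TriggerBinding
      template:
        ref: compose-template      # hypothetical TriggerTemplate creating the compose PipelineRun

[The compose PipelineRun itself could then stitch the per-arch images together with something like buildah manifest create and buildah manifest add.]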
Still talking about next steps: we are working on introducing the Fedora CoreOS layering approach into CentOS Stream CoreOS. As of now, what you get when you get CentOS Stream CoreOS 9 is an OKD-specific CentOS Stream CoreOS 9, which has packages and configuration within it that are specific to OKD. What we want to get to is a model in which we become, let's say, first-tier providers of the CentOS Stream CoreOS base images, and we also consume those base images at the OKD level, by extending them through a Containerfile like the one you see at the bottom of this slide: adding layers, packages, configuration, anything that is specific to OKD, to be then published with the OKD release pipeline we were discussing before. This means we get to a model where we have this base image, and any second-tier provider can consume it and do whatever they want with it: you can essentially leverage any of the services offered by rpm-ostree, with the robustness guaranteed by the CentOS Stream repositories, and deliver to your users using container registries as the transport mechanism.

As Sherine was saying before, we are doing all of this work to be cloud native, to be Kubernetes native, and if you want to try out our pipelines you can run them locally on any kind of Kubernetes cluster, for example on kind. What we usually do, at a production-grade level, is run them on hosts provided by the MOC Alliance, in a collaboration that Christian will now introduce.

Yeah, so we have our build farm on the MOC Alliance, which is the Mass Open Cloud, or Massachusetts Open Cloud, Alliance; mass open dot cloud is their website. We have started essentially a little joint venture, a collaboration between the OKD working group, the CentOS cloud SIG, the OKD Streams team within OpenShift, and the MOC Alliance. The MOC Alliance is a research-focused and education-focused cloud: what they essentially provide is infrastructure. It's a group of universities and other research-related projects that make up this alliance, and they use it for a whole lot of things, the main one being to provide infrastructure for their students, for their folks, to run things on: experiments, or anything really. They approached us needing the ability to spin up OKD clusters on their infrastructure on demand, so their students could spin up a cluster very quickly, and we've been working on enabling this. Really, they were donated, I think, about 2,000 servers, and they racked them up somewhere, turned on the power, and gave us an IP range, essentially, to go crazy and enable this thing. Under the hood, what they did is put an OpenStack-like API, called ESI, Elastic Secure Infrastructure, on top of their bare metal pools to manage those bare metal machines. So what we've ended up doing is essentially implementing a new platform for the OpenShift installer. We are actually aiming at using the agent-based installer to do this, which is kind of the new way of installing OpenShift on really any kind of platform, and specifically on bare metal platforms. So they provide us with the bare metal infrastructure, with an on-demand API that we can provision nodes with, and then it's essentially our task to run the OKD installer and spin up a cluster. And not only have we enabled this platform, we have also been given a build farm cluster to use ourselves.
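[Editor's note: for a sense of what driving the agent-based installer involves, here is a minimal sketch of an install-config.yaml for a generic bare metal (platform: none) install; every value is a placeholder. The agent workflow also expects a companion agent-config.yaml (rendezvous IP, per-host details) before generating the bootable ISO with openshift-install agent create image.]

apiVersion: v1
baseDomain: example.com              # placeholder DNS domain
metadata:
  name: okd-moc-test                 # placeholder cluster name
controlPlane:
  name: master
  replicas: 3
compute:
  - name: worker
    replicas: 2
networking:
  networkType: OVNKubernetes
  machineNetwork:
    - cidr: 192.168.111.0/24         # placeholder node subnet
platform:
  none: {}                           # generic platform, no cloud integration
pullSecret: '<pull secret JSON>'     # placeholder
sshKey: '<public SSH key>'           # placeholder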
So essentially we are now homed at the Mass Open Cloud. Our cluster was actually down last week for maintenance, and it is brand new, so we haven't configured it properly with everything yet, but I think we're almost there: the build infrastructure is almost up again. We'll essentially use that as our official build farm for the CentOS Stream CoreOS artifacts, which means the OSTree native container image: just the OSTree shipped as a container image. It includes the kernel and everything, but you can actually manipulate it and change it like a container. We just saw that earlier, on the CoreOS layering slide: you import the OSTree with a FROM directive, then you run rpm-ostree within the container, and that spits out another OSTree native container image, which you can also run as a container, or you can actually tell rpm-ostree to rebase your whole operating system onto the OSTree that was shipped within that container image. So, lots of interesting stuff.

We are now part of the MOC Alliance, with CentOS Stream and OKD, and really, not only is our build farm going to be there; there's also going to be a secondary build farm, a cluster for OKD working group members. Anyone who's participating in the OKD working group and has some experiments to run will be able to request a namespace on that secondary build farm cluster. And then, even more importantly, the ephemeral clusters, which is end-to-end testing: we'll be able to run end-to-end tests on that platform. And obviously, since they're ephemeral and available on demand, that's also what the MOC university students will use: spinning up a cluster, having it run for as long as they need it for their project, and then tearing it down again.

So, to give you the entire picture of the whole cake (it's all just pieces of cake here, multiple layers): the base layer, the infrastructure, is the Massachusetts Open Cloud and OpenStack. It's not a full OpenStack; it's an OpenStack API, ESI, that actually manages bare metal nodes. Then we use OKD's agent-based installer to set up clusters on top of those nodes, and especially for our build farms we use GitOps with Argo CD, and that runs our Tekton pipelines. And yeah, that's what we run; that's essentially our product, what we own as the OKD Streams team: we make the OKD pipelines that produce the OKD artifacts.
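[Editor's note: going back to that layering slide for a second, the flow Christian describes can be sketched as a two-step Containerfile; the base image reference and the example package are assumptions, not the exact slide contents.]

# Import the OSTree native container image of the base OS (hypothetical reference).
FROM quay.io/okd/centos-stream-coreos-9:latest
# Layer an extra package on top and commit the result as a new OSTree container image.
RUN rpm-ostree install strace && ostree container commit

[A booted host could then switch over to the derived image with rpm-ostree rebase ostree-unverified-registry:<registry>/<image>, which is the rebase step mentioned above.]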
We don't necessarily own the operating systems built therein; we just own the build system, and we really want to enable a kind of self-service. It must be super easy for anyone to replicate a build, or to change something up, replace an RPM or add additional RPMs, and run an rpm-ostree compose yourself. This is what CoreOS Assembler is for, and this is what our pipeline does. So you can create your own rpm-ostree-based operating system yourself, or run your own derivation of Fedora CoreOS, of CentOS Stream CoreOS, or even, if you're a paying Red Hat customer, of RHEL CoreOS. It's a really flexible and powerful tool, and toolkit really, and we're looking forward to you using it, giving us feedback, telling us what's not great yet, what needs to change; and obviously we'd also like to hear what works well, what you like.

So this is essentially the call to action. If you're a developer, you might want to try OKD on top of SCOS, CentOS Stream CoreOS, because it's more stable than the Fedora variant, just because, as Sherine mentioned, the kernel isn't as far ahead as the Fedora one. For staging, if you already have an OpenShift cluster in your company and you want to run a preview of what's going to come to OpenShift in six months or so, you can use a CentOS Stream based cluster for that. And then obviously, for labs, for any kind of experiments, students, educational projects, open source community projects: this is what we want to see. Use OKD, really. So with that, I think we're through, and we can move to questions. Thank you very much. Any questions? Don't be shy. There's a question.

Yeah, I think there are probably multiple. Oh yeah: why did we choose to go with bare metal based infrastructure to set this up, instead of a public cloud? I think the reason is that we just have a really good relationship with the MOC Alliance. They need us to implement this, and we need them for the free infrastructure, right? They don't charge us for using this; it's a quid pro quo, essentially. And if there were a public cloud operator who would give us those resources and say, look, you don't have to pay for it, just do the work, we'd probably do that too. But this was just the first joint project we did in this way, and since we're from the community side of OpenShift, and the MOC Alliance is an educational and research project, it was just a perfect fit.

Yeah, so Argo CD. Oh yeah, the question was: why do we have Tekton on top of Argo CD, why isn't it at the same layer, is that right? So essentially, the Argo CD GitOps controller, which is added as a GitHub app to the repository, watches the contents of our GitOps repository and applies anything in it, whether it's a Tekton resource or any other kind of resource, onto the cluster it's supposed to land on. So really, Argo CD is the more agnostic controller for resources, in a GitOpsy way; it's not just Tekton, but we do control the Tekton Pipelines and PipelineRuns through Argo in that way.
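[Editor's note: to illustrate that split, an Argo CD Application pointing at a GitOps repo full of Tekton resources might look roughly like this; the repository URL, path, and namespaces are placeholders, not the team's actual setup.]

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: okd-pipelines              # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/okd-gitops.git   # placeholder repo
    targetRevision: main
    path: pipelines                # directory holding Tekton Pipeline/Trigger YAML
  destination:
    server: https://kubernetes.default.svc
    namespace: okd-builds          # placeholder target namespace
  syncPolicy:
    automated:
      prune: true                  # keep the cluster in sync with git
      selfHeal: true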
Oh, the question is: can I use this to deploy microservices, my own microservices, right? Yes, you will be able to, in the future. As I mentioned, we are still ramping up our build farms and the community cluster on the MOC. But once we have the MOC community cluster up, you'll be able, as an OKD working group member, or as an interested party who presents themselves to the OKD working group and says "hello there", to request a namespace and then use that to run your experiments. We don't have any rules yet on how long these things can run or how compute-heavy they can be; I guess if it's abused at some point, we'll, you know, kick people out again. But we don't have any tenants yet, and we're looking forward to our first tenant. Essentially, if someone has something they want to prove out, make a proof of concept, or run an experiment, the community build cluster would be the place. Yes?

How many architectures are we building currently? We only build x86, because we don't have access to Arm builders. We don't do virtualized builds, it's all native builds, and we don't have access to Arm machines at the moment. That is something we've been wanting to do for a while now: add multi-arch builds, with Arm as a second architecture. It's hopefully coming soon, but we still need to find an Arm builder to actually run those.

Yeah, we could definitely do that; it's just that we've essentially decided not to pursue virtualized builds, since all of our productized downstream builds are also native builds, and we just don't want to do something that we essentially can't use downstream. All of this is also meant as inspiration for our colleagues within the OpenShift org, to show them that you can use these tools to build OpenShift or OKD instead of the ones we have internally. It's not meant to replace anything, but it is meant to, you know, show folks that yes, we can change the build system, and then see progress there and move forward. And about the Tekton tasks: one reason we chose Tekton is that it's so easy to share these tasks. There are Tekton bundles, or catalogs, that you can reference, so the actual task YAML doesn't have to live in a local repository; you can just reference it, and the actual Task definition or Pipeline definition can live somewhere else. So that's really, really flexible. Any other questions? All right, I think that's it. Thank you again for having us.