Hello, everyone. My name is Neil, and with me is Daniel Axelrod. We're here to talk about how Datto loves Fedora, what we do with it, and the lessons Fedora can learn from us about contributing, as well as how we've learned to contribute.

To start, let's talk a little about who we are. I consider myself a bit of a professional technologist. I've been doing stuff with Linux for 15 years, give or take. I'm a contributor and developer in Fedora, openSUSE, Mageia, and OpenMandriva, with a little here and there in Debian and Ubuntu, and you'll probably find me listed on a few Arch things too; I've lost track at this point. I'm a member of FESCo as well as many Fedora SIGs and working groups. That's really been enabled by Datto, who has generously employed me as a DevOps engineer, for whatever that means. The most important part is that what I do tends to relate to Fedora, and they let me spend my time helping Fedora in order to help Datto. Dan, want to go ahead and introduce yourself?

I'm Daniel Axelrod. The shortest way to sum up a lot of what I do is that I build platforms, and I firmly believe we get better technology by being empathetic. I've been a Linux user for about 16 years. I started on Slackware, quickly ran into expecting that Slackware 16 years ago would have a good package manager when it didn't, and that's how I eventually made my way to other OSes. It also made me something of a package management nerd. In fact, I once wrote a terrible yum clone. You will not hear anything about this terrible yum clone in this talk; the details are best buried. I'm also a senior DevOps engineer at Datto.

Well, we already got our first comment: someone wants to know about your terrible yum clone. Maybe that'll be my Flock talk. All right, it was awful, of course.
Oh boy. So, a little bit about Datto. We were founded in 2007, we have around 23 offices around the world, and we're 1,800 strong and growing. We operate exclusively in what's known as the channel, meaning we sell to businesses who sell to other businesses. Through that network we have more than 17,000 managed service provider partners who sell our products to their customers. What we offer is managed service provider oriented IT solutions, so that our partners can support companies that may not have the wherewithal to handle their own IT. That starts with what we call Unified Continuity, our disaster recovery and business continuity solutions that do backups, restores, and so on, plus networking and file sharing, as well as service automation, business management, and machine management platform solutions.

But that's not what we're here for; we're here for "why Fedora?" You might be surprised, but the reason really comes down to the four foundations that make up Fedora, the four Fs: Freedom, Features, First, and Friends.

We'll start with the close-to-upstream but stable platform that Fedora is. The upstream-first philosophy leads to a great dynamic between Fedora and the projects it ships. You're shipping the latest stuff, tested and integrated well with things like Fedora CI, openQA, and Bodhi, and it provides an excellent, solid foundation to build upon. It makes Fedora a very trustworthy platform to start from and build everything we want to build. And as we started using those bits ourselves, we found that the tooling built in Fedora for supporting the distribution and for leveraging
it for various components, and even for enabling contributions, is fantastic. There's a nice separation of concerns across the different scopes, which enables picking out and integrating exactly the bits we've needed. A lot of it is super adaptable for purposes that probably weren't conceived of beforehand. For example, internally we don't use Koji; we use a system called OBS, the Open Build Service, which we'll talk about a little later. It's a nice echo of the UNIX philosophy in the way people really intended it: tools that are fit for purpose and can be juggled around, adapted, and integrated for whatever you need. That's really handy for us. It makes it easy to use and remix the tooling for our purposes, and we contribute back as it makes sense and where things can benefit Fedora as well.

Then there's the bias toward action. This is actually a Datto core value, so it aligns really well for us: we go forward and do stuff, and if it turns out badly, it can be fixed. That fits the Features foundation, and it's emphasized all over the place in interesting ways, like the lazy consensus model for changes. Someone can come up with a feature or a change and propose it, the community debates it, it goes up to FESCo, and if there aren't strong objections it gets implemented. This lazy consensus model, plus continuous improvement through these changes, allows Fedora to succeed at delivering a best-in-class platform. For us that's awesome, because it means you aren't afraid to make a better thing, and you're not afraid to make a splash doing it. Everyone benefits from it; it's not just about Fedora.
It supports distributions like openSUSE, Debian, Ubuntu, Arch, everyone, and that's fantastic. It helps us too, because some of the things we'd like to do, we look at and take in, or we even propose ourselves, and that helps us and helps everyone.

But of course the most important part is that the community is collaborative and very friendly: the genuine desire to help everyone, the interlocking bonds across the SIGs, the teams, and the working groups, and the default assumption that everyone is working to help each other, an assumption of positive intent. This extends beyond Fedora, out across other projects. It makes everything so much more fun, and it makes it much easier for people to feel comfortable bringing their best selves into the community and the project. That's something we really enjoy on the Datto side: it's super easy for people to come in, meet other folks, work together, and get something done really well. That has been fantastic.

Now let's talk a little about how Datto's projects and products wind up using Fedora. We use a wide range of technologies across the board. The most obvious one is that we use Fedora Linux itself. We also use a lot of its packages, backported for our own things. We use KVM and libvirt. We use OKD with Fedora CoreOS. We obviously have some usage of CentOS, we've been doing some work with CentOS Stream recently, and we're using EPEL on top of that, because why would you use it without EPEL? That's crazy talk. We contribute to all of these things where we can. We also use Spacewalk for workstation management and Foreman for our server management. And we use LIO, which is the Linux I/O target, I think that's what it's called.
It's basically the iSCSI tooling used for managing targets and initiators and all those sorts of things, and it's actually under the Fedora umbrella, although it's not that well known. No, it is in fact part of it. Yep, that's right: the Linux iSCSI target. I don't know why it's called LIO; I think it means Linux I/O, but it's still weird.

Anyway, the starting point for all this is the Datto Linux Agent, and this is where CentOS Stream comes in. The Datto Linux backup agent is part of our business continuity and disaster recovery solutions; it enables seamless backups of Linux systems. It's built from two main components: dattobd, an open source kernel module, and the Datto Linux Agent daemon, a proprietary userspace daemon that interfaces with our appliances. This solution was introduced in 2015 to add support for Linux systems, and we have done over 300 releases since we started five years ago, across more than 50 distribution releases in that time frame. We currently support slightly under half of those with the latest Datto Linux Agent versions, and we do in fact support all currently supported Fedora releases.
We've actually supported Fedora since Fedora 20. Of course we support Red Hat Enterprise Linux. We've supported openSUSE, even through openSUSE Leap's transition to the SUSE Linux Enterprise base, and of course Debian and Ubuntu LTS. But yes, we really do support Fedora for this product, and that has massively helped us.

Along those lines, when CentOS Stream started, we began integrating it into our process, and keeping up with the Joneses has enabled us to build and test the Datto Linux Agent against RHEL and keep up with changes in the RHEL kernel. As things come up every once in a while, we now have the opportunity to contribute fixes for problems we discover in RHEL as it progresses through CentOS Stream, before they hit our customers. That's valuable for us, and I think also for CentOS and RHEL, because it prevents issues in upcoming RHEL releases. In some cases it enables things people probably didn't think were possible before. A straight-up example off the top of my head: when CentOS Stream first came out and CentOS 8 was rolling out, I discovered fairly early on that there was a problem that meant I couldn't build images. I just went and fixed it, sent a pull request, and we got it integrated. I filed a corresponding bug to say, hey Red Hat, please pull this into RHEL, and they took care of it. So that's all good.

Past that, we obviously do quite a bit of backporting, because that's how it has to work when we're rolling things out on the products. But we backport a lot of stuff from Fedora, and not necessarily to CentOS or whatever.
We're actually backporting Fedora packages to our platform, which is currently Ubuntu based. Fedora packages are stable, recent, and tested, which is very useful for us, because it gives us a level of assurance that the software will actually work as we cherry-pick it back onto our platform. We use a tool called debbuild, which lets us use RPM spec files as inputs to build Debian packages. For almost everything now, we build for Fedora, CentOS, and Ubuntu at the same time and verify that everything still works and the behavior is consistent. We also maintain and develop ports of the Fedora macros for debbuild so that we can do this sort of thing. It has massively improved our ability to get things rolling faster and to get feature enablement going in a much more coherent and cohesive way, and on time. On time is important.

Rolling into our package build setup, we've started looking into building things leveraging modules, because the whole modularity effort is super interesting. But of course we don't use Koji; we use the Open Build Service, which is SUSE's version of Koji. It's designed to support a wide variety of Linux platforms, such as Red Hat, Fedora, SUSE, Debian, and Ubuntu. SUSE offers a hosted version as the openSUSE Build Service, an appliance image is freely available to set up on your own, and we have our own host.
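Stepping back to debbuild for a moment, the spec-driven flow described above is worth a quick sketch. This is a minimal illustration, assuming debbuild and the Fedora macro ports are installed; the spec file name is a placeholder. debbuild deliberately mirrors rpmbuild's command-line interface.

```shell
# One spec file, two package formats. "hello.spec" is a hypothetical package.
rpmbuild -ba hello.spec   # produces .src.rpm and binary .rpm for Fedora/CentOS
debbuild -ba hello.spec   # parses the same spec and produces .dsc/.deb output
```

In practice the spec has to stick to macros that the debbuild macro ports cover, which is part of why Datto maintains those ports alongside the tool.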
We actually self-host our OBS instance for our own use. Why do we use it? The flexibility of its source inputs, and the easy scaling of resources: we just spin up workers and they work. But the biggest reason is that it automatically handles chain builds and deals with all the dependency resolution, cleaning things up and getting everything linked correctly so it works. It was straightforward to deploy, and it lets us build packages from one spec file for both Debian and RPM distributions. So we've gone down that road and used it.

Of course, we did need to do work for modularity. We worked with the OBS team, along with members of the DNF team and the Fedora modularity team, on a strategy to support modules in OBS. The upstream OBS project did some work over the course of the last year, which led us to refocus on porting that to the stable OBS release. We added support for it, it was pulled into OBS 2.10.1 with our assistance, and we started using it basically immediately to take advantage of modules at scale. We're actually adopting modules really aggressively to build out our solutions internally.

To talk more about that, I'm going to hand this off to Dan to cover our containerized web apps, which involves this whole stack of things I was just talking about.

One of the things I've been working on recently is some of our strategy around containers going forward. One of the parts of Datto's software ecosystem that makes the most sense to containerize is our web apps. We have a ton of different web apps; some are front ends that people interact with, some are services, and as they grow, get more complex, and gain interdependencies, containers give us a lot of benefits. So I'm going to focus right now on our PHP container stack specifically. Datto heavily uses PHP.
A lot of our web applications are PHP applications using the Symfony web framework, so modularity became an extremely useful feature when building out these containers. The reason is that everyone wants stable, but everyone has a different idea of what stable should be.

On one hand you have application engineers. They care about the language stack. The PHP version ends up being somewhat tied to the application framework version: to use a newer version of Symfony, you often need a newer version of PHP. And with every new PHP version, things get deprecated and eventually phased out, and new features get added, so application engineers care about which PHP version they're using. Because there are a lot of teams coordinating on these apps, there needs to be at least rough agreement: these are the PHP versions we're going to support, and everybody needs to go through a process of moving forward together. Okay, great, everybody agrees, we want PHP 7.4. Cool.
Let's start working to go to 7.4. When I say everyone, though, I mean the application engineers, because the infrastructure engineers care about OS versioning, and they have very different needs and a different cadence. Their concerns about which OS to use and when to upgrade come down more to compatibility: compatibility with deployment tools, interdependencies between different servers, and also security concerns. For example, for compliance reasons, everything needs to move to some new crypto algorithm that's only supported in libraries in this newer version of the OS, so we need to update now, but then we hold off for a while.

Like I said, everybody from both parties wants something stable. They'd ideally like distro-maintained software, so that every time there's a security fix or bug fix, you don't need a team of people internally managing exactly how it gets rolled out, what it breaks, and who gets it when.
So what we've come down to for our container stack is an LTS OS with newer PHP, and modules are the perfect way to implement that. Right now we've picked UBI 8 and CentOS 8 as the OS base, and people can pick the PHP version they want on top of that. Some of the containers enable the php:7.2 module, some enable the php:7.3 module, and the 7.4 module will come out soon. This is absolutely amazing for letting teams work at the pace they want to work. Next slide, please.

I'm not sure how many of you have dealt with PHP extensions before. PHP extensions are native code loaded into the PHP runtime: you build an .so and it gets loaded in. PHP doesn't come with a lot of sophisticated tooling to manage that, so you end up dealing with things like the dependencies of this extension on parts of that extension. You also end up caring about the load order of the .so files at runtime, which goes into a configuration file, and you actually need to care that this library has symbols that this other library needs, so they have to be loaded in a particular order. All of this could really use a package manager to make it easier.

I say this because our original strategy was: cool, let's use a strategy similar to the community-maintained PHP containers on Docker Hub and have a source-built PHP with source-built extensions. We ended up with all these hairy scripts around them, until we realized DNF solves all of this. So now we just dnf install the PHP extensions we want. The package manager takes care of dependencies, the packaging itself takes care of configuring things in the right load order, and everything gets composed together well. What's also neat is that we can get extensions from a bunch of different places.
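As a sketch of that pattern, a Containerfile along these lines picks the OS base and the PHP module stream independently. The base image tag and the extension list here are illustrative, not Datto's actual manifests.

```dockerfile
# CentOS 8 base with a team-selected PHP module stream.
FROM centos:8

# Enabling the module pins the PHP stream; dnf then resolves extension
# dependencies, and the packaging drops correctly ordered config into php.d/.
RUN dnf -y module enable php:7.3 && \
    dnf -y install php-fpm php-opcache php-pecl-apcu && \
    dnf clean all
```

Swapping the base image or the stream (php:7.2, php:7.3) is a one-line change, which is what lets each team move at its own pace.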
Some of our extensions come from the OS: the PHP module itself is supplied by the OS. Some of them come from EPEL. And for extensions that aren't in either, they're probably in Fedora, because lots of great stuff is in Fedora, so we do a backport from Fedora and then we have an easily packaged extension. What's really neat is that we have this down to the point, and Neil wrote up some scripts, where you run a script and now you have a maintained backport, with the sources and spec from Fedora but built for all the different module streams we're using. Next slide.

Okay, so we have a bunch of applications, and there are a bunch of containers. It sure would be great to have some software to run all of these containers, help you build them, and help you manage them. This is where OKD and Fedora CoreOS come in. In production right now we're using version 3 of the OKD Kubernetes distribution, mainly as a container registry and to do our container builds. It gives us a really smooth experience for fast, reproducible builds of all our containers. We have also been looking at OKD 4, which, in case anybody here hasn't heard, has gone out of beta after an extensive beta period and is now GA. Go use it, it's awesome. We have a proof of concept where we're testing it out, along with workflows around it. It gives a really great developer experience. The odo tool is a command-line tool that works with OKD to make it easier to have a really tight loop from when you edit code to when it's running in a container, in ways that go beyond just interacting with Podman on your local machine while stuff is different in the cluster. And it's also got a really good admin console.
It has, I think, the best admin console for developer workflows of any Kubernetes distribution we were able to find. And underlying OKD is Fedora CoreOS. Fedora CoreOS is the perfect admin experience for a bunch of container hosts, because you don't need a configuration management tool to manage it. It's an image, it upgrades itself to a new image, the cluster can manage it all, and it's not your problem; it's all taken care of for you. And you know that all of your nodes look exactly the same underneath, because they should. As part of this, Datto employees are members of the OKD working group, and we're involved in a lot of the conversations around what it would take to get OKD 4 to GA. Next slide, please.

This is where I take back over. You heard from Dan about all this container stuff and modularity and all those good crunchy things for developers. I tend to live a little lower in the stack, so let's talk about workstation systems management, because that's the realm I live in. We actually use Spacewalk for wrangling our workstations, so to speak. The reason we can do this is that engineers can choose any distribution.
Well, within reason: we support Fedora, we support CentOS, we support openSUSE Leap, and we support Ubuntu LTS. Those are the options they can choose from, and we've done work in Spacewalk upstream to help support these platforms. Our philosophy here is that compliance is through auditing rather than control, because we generally trust our engineers to make smart decisions about what they need to do, and we trust that the changes they make to their systems are there to support their needs in building our products, solutions, and services. From that perspective, Spacewalk works great for us, primarily because it doesn't mandate a control mechanism on the workstations for managing or even really maintaining them. That's why we've gone down that road.

For our servers we use Foreman, and Foreman does enforce mass management and a control mechanism, and we do it that way on purpose.
So servers are tightly controlled. For virtualization and restores, I'll hand this back off to Dan, because he's good at talking about this sort of thing.

Sure. Our backup products deal with VMs in a couple of interesting ways, and KVM and libvirt are essential projects for that. One of the ways our products can take backups of your machines, rather than having an agent running inside the machine, is that if your machine is a VM, we can talk to the hypervisor and use hypervisor-native features to take backups. We use libvirt as a really nice abstraction layer over a bunch of different hypervisors, so that we can support KVM, Hyper-V, and VMware, all as backup sources.

The other really cool thing we do: backups are no good unless you can restore them. That's one of the rules of backups; a backup you can't restore is not going to help you. One of our options for restoring a backup is to restore it directly into a VM, either in your VM infrastructure or in our VM infrastructure, on our metal. We use KVM as the hypervisor for running these VMs, in order to manage the complicated process of running a bunch of temporary VMs whose contents we don't control, because they're everything from people's workstations to servers, but we've got to run them anyway. And back to you, Neil.

So, you've heard what we love about Fedora, what we're doing with it, how we're trying to be involved, and why we like being involved. Now I just want to touch on how to make participation in Fedora even better than it already is, because it's already pretty great, but there's definitely room for improvement. It's easy to participate, but it could be way easier to figure out. Fedora has a well-defined contribution process model, and the tools that support drive-by and sustained contributions are very
much top tier. A fan favorite internally is Pagure's remote pull requests. We internally mirror dist-git into our systems, and whenever we make changes that we want to contribute back to Fedora, we set up a branch and push it out to a public mirror. Then we just go into Pagure, type in the URL and the branch, and submit it as a pull request. I've literally received feedback from other people in the company saying this is fantastic and that they don't know why other platforms don't have it. It's turned into a huge fan favorite, so that's awesome.

Getting started, however, is super overwhelming for people. The wiki pages linked from the whatcanidoforfedora.org website are awful. We hack around this by having people within the company who are already familiar with contributing to Fedora mentor others, but this doesn't scale very well. One thing that could really help is taking a hard look at those pages and breaking that flow and that information into more bite-sized chunks, to make it a lot less scary and to make it easier to walk people through doing their first package, sending in their first patch, or making their first docs edit. Right now, I looked at those pages, and for at least one of them I did print-to-PDF and it came out to something like 15 pages. That's a lot of dense information in one page. Fixing that would really help, and making it easier to discover the features that support contributions would help as well. The remote pull request feature was a godsend, and it's not exactly called out anywhere, but it's a very awesome thing.
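The remote-pull-request flow described above looks roughly like this; the package name and the public mirror URL are placeholders, not Datto's actual infrastructure.

```shell
# Clone the Fedora dist-git repo (or an internal mirror of it).
git clone https://src.fedoraproject.org/rpms/somepackage.git
cd somepackage

# Make the fix on a branch and push it to any publicly reachable mirror.
git checkout -b fix-build
# ...edit the spec file, commit...
git push https://git.example.com/mirrors/somepackage.git fix-build

# In the Pagure web UI, open a new pull request and paste the mirror URL
# plus the branch name; no fork on src.fedoraproject.org is needed.
```

The point is that nothing in this flow requires Pagure-side credentials until the final web-UI step, which is what makes it so friendly to corporate mirrors.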
So, you know, you're doing great. Just a little bit more to make it better. And that's kind of it for what I guess we'd call our prepared remarks. If you want to check out more about us, we have our engineering blog at datto.engineering. If you want to join us, maybe we're hiring: datto.com/careers. And you can check out our public github.com and gitlab.com organizations and see what we've got there; we have a fair few open source projects as well as some other things.

Questions? Comments? We're here. I'll lead with a comment: everybody here, give yourself a hug, because the Fedora community is fantastic to work with. How does this work? People put questions in the chat and then we answer them; I believe that's how this works. We were muted, I think just out of habit; we don't have to be muted.

Let's see, Ben Cotton just gave us a question: if you could change one thing in Fedora to make it work better for Datto, what would that be? Oh boy, Ben, you don't ask the easy ones. One thing. Okay. If I could change one thing, just one... you know what, I'm going to punt this and ask Dan to take it for me.

For me it was really the getting-started problem. I wanted to contribute a patch to a spec file. Wait, I can't just have Pagure trust my keys and push to a git remote?
There's this whole auth process that opens up a web browser, but I'm on a headless machine. I know work has been done since then; this experience was probably about a year ago. But it was kind of frustrating to be ready with a patch and then have that last little bit, how do I get the patch to where somebody can see it, require way more steps than I'm used to. That said, people on IRC were super helpful getting me through it, which goes back to that mentoring, and I believe the docs have also been improved since then.

What Dan just said reminded me of the one thing I'd change about Fedora to make it better. Oh my goodness, we've got to stop making people do all of the rebuild work by hand. If somebody updates a library, they shouldn't have to search for all of its dependents themselves, figure out whether they're allowed to push each one forward, and make sure they have permissions to update everything. That is the worst thing I have ever experienced, and that is the one thing I would fix if I could wave the magic wand. I would make that entire problem just go away; that's how much I don't like it.

And Josh Boyer asked what our biggest pain points are. That's one of them. Another pain point, I would say, is that there's a lot of stuff in a half-finished state, with no clear way for people to get involved to help push things over the finish line. There are a lot of little initiatives here and there, but they're all kind of closed off, and that makes it difficult for someone like me to jump in and say, let's push this over the finish line, let's fix this, let's contribute, let's make this better.
There are a lot of little things like that. The other pain point I'd name is some of the dynamics in how we go from builds to tests to release. While it's all happening, it's not very observable, and that makes it difficult for me to feel like I have actionable feedback to work from. I just got bitten by something like this not even a week ago, and it's the sort of thing I really wish were less of a problem. We do a lot of fantastic testing across the project, but it's not observable. How would you find out that when a Koji build completes, it runs through a bunch of tests, and those results are stored somewhere? You especially don't find out until you submit an update in Bodhi, at which point it's too late to act on that feedback because it's already permanent and stuck. Things like that are pain points for me because they make it hard to iterate quickly and work effectively. So from the purely mechanical, working-through-stuff point of view, those are my pain points.

Dan, do you have any others? I know you tend to focus a bit more on the communication and interplay kinds of things, so maybe you have some feedback here. I actually can't think of anything else off the top of my head. No? Okay, then I guess that part's fine.

The work going on right now on rpmautospec, where it handles the release and changelog bumping automatically, is good. And the Koschei stuff is really cool, but I really wish Koschei's rebuilds pushed to production immediately and automatically. I think rpmautospec plus Koschei might actually help with this. I also really do like some of the concepts of MBS, the Module Build Service.
But I don't understand why it's a separate system; I really don't get that. Some of the functionality in there is useful even for non-modular builds. It should actually be part of Koji. There are things like that which make me feel like somebody needs to think holistically about the experience. That's what this comes down to: while there are a lot of great things here, nobody has thought about the holistic contributor experience in a way that makes it much more streamlined and much more effective at helping people get work done.

And no, Koschei does not make pull requests; it just rebuilds off of already-pushed commits.

Are there any other questions? At the moment I just see people talking to each other in the chat, which is great; it's good discussion, good stuff. Some of the SIGs? Oh yeah, yep. I mean, there are some mitigating things that help with this. You have SIGs that have ownership, and you have proven packagers and things like that. But I'd like to move toward a world where proven packagers don't need to exercise their abilities as much to get done the things that need to be done anyway, even by normal people. It's a barrier to entry. If somebody says, hey, I want to update this library to this new version, and yes, it bumps the soname, but it brings these awesome new features that would be super useful for everyone else to be able to start from, it's so hard.

Oh, Ben's got a good one: what's the CentOS Stream experience been like for you so far, and what could make it better?
CentOS Stream is weird. I give it a lot of passes because it's so new, and because the CentOS project has historically not been a real community in the sense of people actively participating in what it does; that's a relatively new thing for them. That said, there are two things that really grind my gears about CentOS Stream.

The first is that pull requests are pointless. They are absolutely, completely, 100% useless. If you make them, they don't go anywhere. They just sit there, and eventually they go to a Bugzilla ticket that you need to track independently, and then they might get merged — but you don't know until some nebulous push happens at some point. I understand why that's happening; again, this is all new, and it's very hard to set all of this up from zero. But it's an unpleasant experience if you don't have any background.

What could definitely make it better is doing more of what Fedora does: have the same contribution model, make it easier for drive-by changes, and have people react and give feedback quickly. If I make a pull request against a package in Fedora, about six times out of ten I'm going to get a response — in the form of a comment, or a merge — within three days.
I don't think that's ever been anywhere close to the case for the few pull requests I've made against CentOS Stream, and I don't like that. I want to be able to use the same workflow for CentOS that I use for Fedora. I want to be able to send remote pull requests from packages that I've mirrored and worked on. I want those patches to get feedback from the people maintaining those things, and I want to be able to react and respond relatively quickly, while I still have the context in my head. All of those things would make it better. So if I had to name the one thing that could make CentOS Stream better: do what Fedora does. Fedora has a fantastic wellspring of historical expertise, knowledge, infrastructure, and tooling — reuse it; it's very useful stuff. It's why we like Fedora, and I want to like CentOS Stream in the same way. CentOS Stream is valuable, I love using it, and we do take advantage of it — but this is where it could be better.

And I'll make this point: if you're a corporate software engineer with internal deadlines, getting feedback on your contributions within a few days is a great way to convince your management that you should be doing work in the open and upstream. The longer it takes to get feedback and collaboration, the easier it is for everyone to say, well, why don't you just give up on them, make your own internal fork, and upstream it later? And then it's harder and harder to actually upstream it later. So more collaboration, especially from corporations, comes from being able to iterate quickly.

Another good one, from Barrio — sorry if I butchered your last name, I'm not great with these things. You asked: since we introduced modularity to OBS, are we the only users of the feature out there, and how many people on our side were involved in maintaining that instance and pushing things upstream?
Okay, so maybe I phrased that badly earlier. The initial code development for the feature in the Open Build Service was actually done by the OBS team, after getting feedback from us, the DNF team, and the Fedora modularity teams, where we basically figured out a plan for how it was going to get done. Once the initial code had been written, Dan and I worked on pulling that code, testing it, and validating it internally against OBS 2.10. We backported it, that got merged and released, and then we started using it.

As far as whether we're the only ones using it: I think we're the only ones I know of, mostly because I don't think the feature has been talked about much. It's very valuable, and it's new, but it's also still listed as experimental by the OBS team, for a couple of reasons. One, the format for modulemd files and repos is unstable — we don't have a final definition of all the behaviors that are supposed to exist. And two, the module support in OBS is very limited: at this point it's only capable of consuming modules, not producing them. That's a follow-up effort that my team will probably start looking into as we ramp up our usage of modularity; it's certainly something we've been discussing off and on for at least a couple of months now.

How many people maintain that OBS instance? It's mostly my team — a team of five people. I'm the primary maintainer, with Dan as my secondary, but there are a couple of others. Another person on my team who also participates in Fedora a bit, Dalton Miner, is also involved in maintaining the system and helping us support these workflows.
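For context on what OBS is consuming here: a modulemd document is the YAML metadata that defines a module stream and its component packages. A minimal sketch of a version-2 modulemd document — the module name, stream, and package names below are invented placeholders, not anything Datto ships — looks roughly like this:

```yaml
---
document: modulemd
version: 2
data:
  name: examplelib        # hypothetical module name
  stream: "1.0"           # the stream a consumer would enable
  summary: Example library module
  description: >-
    Illustrative module definition; all field values here are
    placeholders for the sake of showing the document shape.
  license:
    module: [MIT]
  dependencies:
    - buildrequires:
        platform: [el8]
      requires:
        platform: [el8]
  components:
    rpms:
      examplelib:
        rationale: Main library package for the module.
        ref: "1.0"        # dist-git branch/ref to build from
```

The "consuming vs. producing" distinction in the talk maps onto this file: consuming means OBS can resolve and install packages described by documents like the one above from an existing repo, while producing would mean OBS generating this metadata for its own build results.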
So we do have a number of people working on this. Producing modules is not our top priority right now, because we're focused on the container work, and the way we're doing the container work lets us skip a whole bunch of things. But it is something we're looking into, because as we expand our usage of the modularity technology, it will become more important for us to solve. At some point it may even be vital for our partners to have agents that are part of streams, depending on how their dependency trees work out. Right now we've been very lucky: we've managed to avoid having a runtime dependency on a stream — or at least on a non-default stream. Once that changes, we'll really have to deal with it. Hopefully that won't be for a while, but it's certainly at the back of our minds and something we're thinking about.

Jakub Kadlčík — sorry if I got your name wrong — mentioned that there are some single-purpose tools they're working on for Copr to support building modules, and that they might be useful for OBS. Funnily enough, I think Daniel Mach mentioned them to me this morning, and I said, yeah, this is definitely going into my bookmarks to look into. Dan and I will definitely take a look; there are some ideas in there I think we could reuse, and we'll most likely wind up with an obs-to-module tool to handle mapping an OBS project to a yum repository with modularity data.

So, any other questions or comments? Also — thank you for coming, Josh.
I didn't expect that. And thanks for all the great questions so far. I won't de-anonymize them here, but I saw at least one alum from our team in the audience, and it's awesome to see you. — Yeah, it was great; I'm glad one of our former team members actually showed up for our talk.

Our slot runs until 50 after, right? — Yes, technically, although I think almost everyone's been running over slightly. — But we don't have to run over. If everyone's all said and done, I guess we can end it here. If no one's got anything else: thank y'all for coming, and thank you for listening to us talk about our love for Fedora. Y'all are great, keep up the great work, and it was our pleasure to sponsor Nest.