Hello, everybody. It's a pleasure to be here speaking to you today. We're going to be talking about PHP in containers, and the interesting way in which Fedora packaging and infrastructure actually makes that a whole lot smoother than it would be otherwise. Here's an overview: we're going to talk about who we are, so you know who's blathering at you; a little bit about Datto's PHP base containers, what a base container is, and why we're doing this; about packaging native extensions for PHP; and some future directions we want to take the processes we're showing you today. Okay, so who are we? Neal, why don't you start? Sure. My name is Neal Gompa. I like to call myself a professional technologist. I've been a Linux user for over a decade and a half at this point. I'm a contributor and developer in Fedora, openSUSE, Mageia, OpenMandriva, and so on. I'm a member of FESCo, the Fedora Engineering Steering Committee, for those who don't know. I'm a member of many special interest groups and working groups, notably the Cloud and Workstation working groups, as well as the KDE SIG and several others. And for my day job, I'm a senior DevOps engineer at Datto, where my primary focus is software release engineering with packaging and containers, and I manage our pipeline with RPMs, debs, and all the other things that we do. And I'm Daniel Axelrod. You can kind of sum up my career with: I build platforms, and I strongly believe in bringing empathy to technology. Fundamentally, we build technology for people. We build technology to empower people, and if you don't understand the people using the technology you're building, you're not building what you should be. I've also been using Linux for a while, I'm a package management nerd, and I am also a DevOps engineer at Datto. Okay, so this Datto place: what is Datto?
So managed service providers, MSPs, are companies that provide IT services to other companies that maybe aren't big enough to have dedicated IT people, and Datto provides technology and services to enable MSPs to do what they do. We've been around for a while now, we keep growing, we're all over the world, and we have a variety of products in backup and disaster recovery, networking, remote monitoring and management, and all kinds of other areas as well. So that's who we are and what we do: we make software for people who keep companies running. And today, we're going to talk to you about our PHP-based containers and why they are less straightforward than one might hope. So, okay, what is a base container? The containers we're talking about here are OCI containers, the kind that you run with Docker or Podman, or maybe in Kubernetes. When you build a container, if you're doing it with a Dockerfile, you start with a FROM line, and you build on top of some other container, unless you start from scratch; but ignore that base case. In every other case, you start with somebody else's container that they've made, and you build on it. So part of the way that Datto empowers its engineers to write applications in containers is to give them a set of solid base containers that they can build on top of. That way they get things like secure configuration defaults, we get some alignment on what versions of language stacks everybody is using, lots of dependencies stay managed and come from maintained upstreams, and individual application engineers have less to worry about every time they write a Dockerfile for their app. We've chosen Red Hat's Universal Base Image as the starting point that we build our base containers from. This is basically a spin of RHEL that is intended specifically for containers.
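To picture the FROM mechanic from a minute ago, here's a minimal sketch; the image name is a made-up example, not a real registry path:

```dockerfile
# Every image builds on a parent named in its FROM line...
FROM registry.example.com/datto/php-fpm-7.4
# ...unless it uses "FROM scratch" to start from an empty filesystem.
```

Everything after that line is layered on top of whatever the parent image provides.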
It ends up being a subset of the total set of packages in RHEL, and there are a bunch of niceties that optimize things for being in a container. For the PHP language specifically, we support several versions of PHP simultaneously, and we need to do this because PHP makes gradual breaking changes every few years. The idea is that it's not a break-the-world, completely-different-language situation where everybody has to change all at once, but every time you upgrade your version, some minor things break and you need to fix them. So we end up needing to support several versions at once, so that people have a chance to gradually transition, rather than us slamming all their apps forward to a new PHP version. In PHP, the language itself has packages of reusable code, and those fall into two main categories. The first is pure PHP: packages that contain PHP code, loaded into the interpreter just like something you would write yourself. The other category is native extensions: packages that provide code, usually in C or C++ (it could be several other languages), that ultimately gets compiled into a shared library, which gets loaded into the interpreter at runtime and interacts with the symbols in the interpreter. We're going to be focusing on native extensions here, because the story around anything that's pure PHP is actually already pretty good in the language ecosystem. So let's pretend we don't have a package manager. What would we do to install a native extension? It's this process. We start with getting the sources. Sometimes the sources are actually bundled with the interpreter: there are extensions that you don't necessarily get by default if you build a PHP interpreter, but the code is there, and you can optionally build them and load them in. They might come from Packagist, which is the most popular language repository for PHP code.
They could come from PECL, which is the more old-school repository specifically for native extensions. Sometimes they're just in somebody's Pagure or GitHub or GitLab repo, and they're not in any sort of package repository at all. Okay, so you have the sources. Now you have to deal with build-time dependencies, and you recurse into a version of this timeline for each one of those, because they're going to have build-time dependencies of their own, and they're going to have to go through the rest of this building and installing process. Then you actually build the shared library: you're invoking a compiler, usually through a somewhat standardized set of build scripts that PHP uses. Then you get to do the same recursive thing with runtime dependencies, because maybe it needs to load stuff at runtime that you didn't need in order to build it, but you still have to have all of those. And finally, you install. The straightforward part of installing is that you stick the shared library in the right place on disk; the interpreter looks there and loads it fine. The weird part with PHP is that PHP includes configuration files that tell it about each of the shared libraries it should load, and the order in those matters. You specifically have to deal with things like: library A depends on library B, so I need to tell the PHP interpreter to load library B before it loads library A, so that all of the symbols resolve correctly. This is challenging, and it basically involves a traversal of the entire dependency graph. Okay, so traversing dependency graphs, finding sources, building them, figuring out their dependencies: that sounds like a job for a package manager. Sometimes the language package managers for PHP handle all this really well, but there are gaps. In the case where sources are bundled with PHP but not yet built and enabled, they don't have good hooks for enabling those.
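As a sketch of that whole by-hand process for a single extension, using yaml as the example; the exact package names, prefixes, and paths vary by OS, so treat these as illustrative:

```console
$ pecl download yaml && tar xf yaml-*.tgz && cd yaml-*/
$ phpize                # generate a configure script against the installed PHP
$ ./configure && make   # this is where missing build deps (e.g. libyaml headers) bite
$ sudo make install     # drops yaml.so into PHP's extension directory
$ echo 'extension=yaml.so' | sudo tee /etc/php.d/40-yaml.ini
```

That last file is where the ordering problem shows up: on Fedora-family systems, the files in /etc/php.d/ are read in lexical order, so the numeric prefix is what ensures a dependency's ini file loads before the ini file of an extension that needs its symbols.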
And this depends on your OS vendor, whoever you're getting your PHP interpreter from, because they've made decisions about what's in that package, what's in other packages, et cetera. The other really major hole is that there are often OS-level libraries that are dependencies of the native extensions you're using. You might need libyaml to use the PHP yaml extension, for example, and because the language package manager is mostly OS-independent, it doesn't know the details of how to get libyaml on your particular OS. There are a couple of other weird pieces, but the obvious solution you're probably thinking of, if you've gotten this far in the slides and see the giant DNF logo in the corner, is: use the OS package manager. DNF is the package manager you get with the Universal Base Image, just like on other Enterprise Linuxes or Fedora, and it turns out it does a really good job of handling all of this, including that weird ordering in the config files for how to get stuff loaded. Obviously the defaults for your OS package manager are repositories from your OS, and here we're talking about Universal Base Image repositories, which are a subset of Red Hat repositories. They have some native extensions there, but not a whole lot. And I understand why, because if they had every PHP native extension, there would be hundreds, probably thousands, of extensions that they would then have to support, for all of the PHP versions they support for a particular Enterprise Linux release, and they'd have to support them for as long as all of those different PHP interpreter versions are supported; for Enterprise Linux, that's a long time. So they've chosen a reasonable subset, but we often need stuff outside of it. Okay, so where do you go when you're using Enterprise Linux and you want stuff that isn't in Enterprise Linux? You go to a fantastic Fedora project called EPEL. And EPEL 7 had a bunch of PHP native extensions packaged in it.
EPEL 8 unfortunately does not, because Red Hat uses modularity to handle PHP versions. Anytime you get the PHP interpreter, it's from a module stream, and EPEL doesn't yet have the infrastructure for building things that depend on those. Okay, so we can't get them from Red Hat and we can't get them from EPEL. But there are all these extensions that are already packaged in Fedora, and it turns out those packages can still be useful to us, even though they're not built for quite the right OS. So how do we actually package these things? Yeah, this is where I take over, because this is the part where I kind of made this a reality for people. At Datto we use a software solution for building packages called the Open Build Service. The Open Build Service was created by the folks at SUSE to build and manage the openSUSE and SUSE Linux distributions. It's similar to Koji, which y'all will be familiar with as the RHEL and Fedora build system, but unlike Koji, it was designed from the beginning to support a wide variety of Linux-based platforms, and out of the box it supports building packages, repositories, and images for Red Hat, Fedora, SUSE, and Debian/Ubuntu systems. SUSE offers a hosted version as the openSUSE Build Service, but we use the freely available appliance image to run the self-hosted instance that we use internally for our stuff. Next, please. So, a quick brief on why we use the Open Build Service. Because we are consuming software from a wide variety of sources, we like the source-input flexibility through source services, which allow us to automate and script the retrieval, pre-processing, and post-processing of sources to actually do builds.
The OBS worker setup is designed so that we can auto-scale by simply spinning up machine instances; they connect to the orchestrator, which adds capacity, and when we don't need them anymore, we can turn the workers off and things automatically scale back down. But the biggest reason we really love using OBS is that we don't have to think about doing rebuilds when dependencies change, because it automatically traces through the reverse dependency map and triggers rebuilds as things change. And this is not just for stuff in our OBS instance, but also for stuff that comes from upstream repos. For example, when Fedora updates a library, say in Rawhide where a soname bump happens, OBS detects that soname change and queues everything that depends on it for automatic rebuild. So eventually we get to consistency, and once all of those have rebuilt successfully, it publishes them together, so we wind up with a consistent release of content that works all the time, whenever the repos change. Because of all that, we don't really have to think about what it takes to make sure the software continues to work. This is also true when Red Hat does point-release rebases for RHEL and UBI: those things just kind of move up on their own, so we don't have to worry about it. It's super easy to deploy and get started with, because the official appliance on openbuildservice.org is just download, boot up, and it starts the interface and you're good to go; you can start right away. And it lets us build packages natively for RPM and Debian distributions using RPM spec files. For the Debian stuff, underneath it all it uses debbuild, a tool that takes an RPM spec file and uses it as the input for building a Debian package, in the same way rpmbuild uses it to produce an RPM package. Next, please.
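debbuild deliberately mirrors rpmbuild's command-line interface, so one spec file can drive both outputs; roughly (the spec name here is illustrative):

```console
$ rpmbuild -ba php-pecl-yaml.spec    # builds .rpm packages from the spec
$ debbuild -ba php-pecl-yaml.spec    # same spec file, builds .deb packages
```

The practical upshot is that the packaging metadata only has to be written and maintained once.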
Yeah, and admittedly, up front we didn't really have modularity support in OBS, but a couple of years back, actually I think it was during the openSUSE Conference in 2019, myself, the DNF team, some members of the Fedora modularity team, and the OBS team came together to figure out what we should do to support modules in OBS. With the strategy we figured out, the upstream OBS project implemented it, and we focused on bringing that back to the stable release, so we backported it to the latest stable. As soon as that was released, we deployed it in our infrastructure, and it allowed us to start actually using modules to build and release content that works on Red Hat Enterprise Linux 8. Next, please. So this is a sample from the project configuration for OBS. Some background here: in the Open Build Service you can create projects, which have package spaces where the package sources are stored. It's similar to how it works in Copr, where you have a project and then you configure builds and packages to go in there; similar concept. The main difference is that per project in OBS, you can actually adjust how the solver picks things and how content is accessed, and you can do macro overrides and so on. In this case, what we're doing is setting the Release variable. Remember I mentioned earlier that OBS has this automatic reverse-dependency rebuilding; well, part of that means it rewrites the Release field so that it's automatically incremented by a rule that keeps packages always considered upgradeable. We just modified that rule so that it includes the dist tag, which we actually require for being able to filter and auto-publish to the correct repositories. We also do some filtering here to deal with the fact that CentOS Stream and CentOS Linux were just about the same thing for a brief time; I think some of that could kind of go away over time.
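As an illustrative sketch of what that kind of project configuration can look like; the stream names and dist tags here are made up for the example, not our literal config:

```
# OBS prjconf sketch -- illustrative only
Release: <CI_CNT>.<B_CNT>%{?dist}    # auto-incrementing rebuild rule, extended with a dist tag

%if "%_repository" == "php-7.4"
ExpandFlags: module:php-7.4          # resolve dependencies against this module stream
Macros:
%dist .module+el8+php74
:Macros
%endif
```

One conditional block like this per repository target is what maps each target to its module stream.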
But there is some common module setup that has to be done here, so we need the Apache httpd module to be enabled. We turn off tests, because, if you watched the Fedora ELN talk earlier, you'll have heard Stephen Gallagher mention that for RHEL they actually try to turn off tests in order to keep dependencies that are only needed for running those tests out of the package set. This has the knock-on effect that tests usually have to be turned off for us to be able to build things successfully for the RHEL target, so we do the same. And then we just define repository targets that map to module streams. From a single project, any sources we push into this OBS project get auto-built for each of the module streams we actually care about. So we want one for PHP 7.2 and 7.4, and maybe when 8.0 is available we'll do that as well. In each of those, we set the expand flags for PHP 7.2 or 7.4 and modify the dist tag so that information is actually present, and we use this so that for every set of sources we upload, we automatically get builds for each PHP stream. It's a sort of simpler, nicer way of doing stream expansion, where we can build layered content on top of all the streams we care about accordingly. Next, please. So the magic for pulling all the sources in actually isn't terribly magic. It's a simple script that I wrote that imports from dist-git into OBS.
It orchestrates checking out the package sources from dist-git and pushing them to an OBS instance; by default it checks out from Fedora dist-git and, on the OBS side, defaults to the openSUSE Build Service, though generally we point it at our own system. It's written in Python 3, and it leverages osc, the Python-based client and orchestration tool for OBS; pygit2, the Python bindings for libgit2, to work with Git repos; and fedpkg to interface with the dist-git lookaside cache to fetch all the binary sources. It's available on Pagure.io, where you can find it along with some basic instructions on how to use it, and go from there. Next, please. So with all that, I'll hand it back to Dan so he can talk about how this comes together and makes for a great experience for us. So: we have our specs and sources being synced from Fedora, and we are building them for all of the modularity streams we need, to get them into all the PHP versions we need. Here's how this actually looks in the Dockerfile for our PHP base container: we start with UBI, we copy in a repo file pointing at the repo that OBS publishes with our packages, and we install a PHP interpreter and a couple of other things, after enabling the module that's appropriate for the version we want.
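Sketched out as a Dockerfile (the image, repo-file, and package names here are representative placeholders, not our exact ones), that base-container build looks roughly like this:

```dockerfile
# Base-container sketch: UBI + the OBS-published repo + one PHP module stream
FROM registry.access.redhat.com/ubi8/ubi
COPY datto-php.repo /etc/yum.repos.d/       # repo file pointing at what OBS publishes
RUN dnf -y module enable php:7.4 && \
    dnf -y install php-fpm php-cli && \
    dnf clean all
```

Application images then just build FROM this and install whatever extensions they need with dnf.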
So that gives you the base container. If you're writing an application, this is how simple it is: okay, I want the PHP-FPM 7.4 container, and now I install php-pecl-yaml, which is an example of a native extension. That's it. It looks boring, and you should be looking at this and saying, wait, this has all been leading up to this? Because it is boring. It's supposed to be boring. It's supposed to be so easy to use that everyone consuming it doesn't have to think about it. The only weird part is: oh, there's an extension that doesn't exist yet? Cool, please add that to the list of things you're syncing from Fedora and building. And just for completeness, since I said we copy in a repo file: this is just a standard repo file pointing at the repository, plus the key to enable that DNF repo. So: we're taking specs and sources from Fedora, we're building them for various PHP versions for UBI, and then we're installing them into our containers. What's next? How can we do this even better? The biggest thing we're missing right now is that the automation works really well as long as we can build the package from Fedora verbatim, without changing anything. Eventually we will have reasons to locally modify either the spec, or maybe add patches to the sources, for things that would not be appropriate to upstream. The biggest reason we'll need to do that is that as Fedora moves on, it will diverge more and more from the EL8 versions of things, so eventually it's going to be too new for the rest of the OS. And yes, eventually we can move our containers to UBI 9, but there will probably be a few-year period where we're going to have to patch more and more things to keep them backported to the RHEL versions of dependencies and libraries. And, I mean, I have a Git repo and I want to modify some things in the Git repo; that's a solved problem. But it's a matter of what kind of workflow we want to present for this, what's the best way to hook into the existing scripts to make that easy, and
to track all of our changes well. The other major thing is having better control over when rebuilds happen. OBS does a really good job of knowing that if you rebuild a package, you should also rebuild every package that depends upon it, and there are some container builders, like the one built into OKD, that do the same for containers: if you rebuild a container, you should also rebuild all of the containers that depend upon it. We don't have anything linking these two right now, so we don't have a way of saying: there was just an update to the yaml native extension, rebuild all of the containers that depend on that. We can easily fake it with a schedule where you just run rebuilds regularly, but it would be really nice, for resource usage and for timing guarantees, if instead of a schedule we could do it on demand. That will require some changes to OBS. Fundamentally, we need a way of hooking into OBS to say: a rebuild event just happened, send a webhook to another system. That's something that doesn't exist in the form we need yet, but it's very addable with OBS's architecture. So that is the surprising way in which the work being done in Fedora helps us run PHP in containers on Red Hat. If you're interested in the stuff Datto is doing, we have an engineering blog; we're hiring for a bunch of different positions, and there's our careers page; and if you're interested in some of the code we write, here are our GitHub and GitLab orgs. What questions do people have? So, the first question I see: someone asked, did openSUSE make OBS? Yes. SUSE, the company, is the primary sponsor of the openSUSE project, and SUSE is the one that made the openSUSE Build Service and the Open Build Service software, yes, they did. This person also asked, does it work on CoreOS? I guess, I don't know what it would be good for; the containers will run fine on Fedora CoreOS, but the question doesn't really mean anything in the CoreOS context any more than
anything else does. So let's see, another question here: why didn't we use Remi's repository, and instead reinvented the wheel? So, there are a couple of reasons. We've had some prior experiences, not with Remi's repository but with another third-party repository, where we got burned enough times because that third-party repository's retention policy was so aggressive that we wound up losing content arbitrarily. Another aspect is that we want to have much better control of the inputs we wind up exposing to people inside of our container environments, so we wanted to limit the amount of self-supported content people would be using, and so we went this route. And, honestly, another aspect is that we originally wanted to do this as a contribution to Fedora, but unfortunately, with the architecture inside the Fedora project, with EPEL and with how Koji and MBS work for modularity, it just turned out not to be possible for us to do these builds inside of EPEL, because you can't build layered modules; you can't build a package that's supposed to be part of a layered set of modules on top of another module. So that strategy fell apart. But to be clear, we are still asking the Fedora ecosystem: what would be required to add and integrate this build function that OBS provides into Koji? Before you move on: not as much as you'd think. The main couple of pieces that are missing are, first, some kind of way to do an automatic rebuild counter. Recently, rpmautospec was added to the Fedora infrastructure, which lets people set things up so that they don't need to set or bump the Release field themselves; it's tracked by rpmautospec. It should be relatively straightforward for someone to extend rpmautospec to also, optionally, add a rebuild counter whenever rebuilds happen without a commit being added to dist-git. That would allow automatic rebuilding without breaking NVR uniqueness inside of
Koji, because you can't have two builds of the same NVR. The other bit is that you need a dependency-resolver service that tracks all the content in Fedora, or in EPEL, or in RHEL, and automatically kicks off rebuilds whenever a qualifying change happens, like a soname bump or an ABI change or something like that. This is not impossible, because Koschei already exists and does this to trigger scratch builds; it just needs to be adapted to do something meaningful for pushing real builds. So it's not like the pieces aren't there in the Fedora ecosystem; there just hasn't really been any drive, interest, or investment in making it so that people don't have to do this grunt work themselves in Fedora. I hope that's something that will change in the future, and that it will help make lives easier. We're on this kick lately to simplify packaging, to make it more approachable, and to eliminate as much grunt work as possible, and I hope that means we will address this problem as well, because frankly, even as someone who is capable of rebuilding pretty much anything inside the distribution, I don't want to do that grunt work either. So, as a follow-up: any other questions? I think I answered all the questions in the Q&A. Sir, can other people hear me? I cannot hear Daniel; he is listed for me right now as "the stream is unable to connect due to a network error." Okay, so I'll start talking, hopefully, now that you know I'm talking. If Dan's been saying something, I haven't heard it, so I don't know if it's my end or his end. Can people hear me okay? All right, so I will reload, because apparently I can't see or hear Dan right now. All right, so as a follow-up to the question about Remi's repo: we are effectively a downstream of the fantastic work that Remi does. We are absolutely beneficiaries; a lot of these packages are Remi's packages. But Remi publishes things to Fedora when they reach a certain level of stability, and we wanted to make sure that
the packages we used had at least the level of stability that Fedora gives us, and that includes things like Fedora's policies around how quickly security fixes happen. It includes, basically, that we want to make sure more people can work on these things than just Remi. That's why we had concerns about pulling directly from Remi's repo, rather than relying on the packages that Remi has published from his repo into Fedora; and yes, Remi does accept that. So it was more a matter of, especially, I need to be able to tell our security team, if there are security vulnerabilities, who is responsible for patching them, and on what schedule. Fedora has a very clear answer to that, and at the time, I was not able to find the same policy for Remi's repo, which I wouldn't necessarily expect, since that would be even more work Remi would have to do; Remi is probably doing a lot of the security patches anyway. But it was a matter of what the formal policy is, and if there is a formal policy for Remi's repo about that and I missed it, my deep apologies. So, I heard what you were saying while I was saying it, and I have no idea whether I was talking over you, so I'm very sorry. It's all good; you did talk over me a couple of times, but we got it figured out. Hopefully my internet connection, or your internet connection, or the internet's internet connection doesn't flake out like this again, because it's been doing that lately. But really, at the end of the day, what we want to do is provide the maximum amount of community value by working within the Fedora community, because everything we do there has the widest impact. If we discover something and we make a fix in a Fedora package, that Fedora package gets to the largest number of people; it goes out, and it even has the possibility of spreading to places like RHEL and to other downstreams. Just from the value of the impact of actually doing it there, it's so
much higher than pretty much everywhere else. That said, I'm pretty sure we have actually made PRs to Remi's repo before; yeah, we've done a few fixes here and there to his repo, so it's not unheard of. But ultimately, we want to pick the upstream that's best for us to consume, and also work with the upstream that makes the most sense for any particular change. And the other aspect of this, at least personally speaking, is that I like being able to see the builds. I like being able to see the build logs and to see how builds happen. Remi does a lot of great work, and his repository is amazing; I definitely use it as a reference from time to time and do contribute to it occasionally, but I don't actually have the ability to see how he builds his packages, and that is a concern for me personally. I like to take a trust-but-verify kind of approach to things, and that's very hard when you don't have a way to see how the builds happen. Not that I'm saying Remi's a bad guy, but the Fedora Koji is public: I can go look at any build log, see all the build inputs, and see the record of the build environment. It is a lot easier to trace when you have all that information. So, how do we get Red Hat build logs, then? Well, now we're talking about the whole trusting-trust thing, Robert Scheck. After a certain point, I really am trusting that the vendor is good at what they do, and when it's a vendor that has been trusted and reputable and works in the community as much as Red Hat does, and when functionally almost everything they do has a public counterpart where you could check the integrity of the builds, it's relatively straightforward to identify whether there are any deltas between the two. So I'm satisfied, for the most part. Would I like it to be better? For sure, but I'll take what I can get. And with a lot of the infrastructure work on CentOS Stream, my personal hope is that that eventually gets us to a place where, yeah,
all that infrastructure is out in the open, and we can see Red Hat build logs; why not? I'm optimistic about that. If you look at CentOS Stream 9, it's pretty clear that's pretty much what's happening; it's a hop, skip, and a jump away from having the actual logs like that. Offhand, I think the only real delta between the two is whether it's centos-release or redhat-release; beyond that, as far as I'm aware, based on the process and what Carl George and the other folks on the CentOS Stream team have told me, there is no difference. So we're already in a better place. Any chance we're getting PHP 8 available in Stream 9 dev? Funny you should ask me about that. The funny thing is that I noticed early on that Stream 9 didn't have PHP 8, so I filed a bug report saying: hey, are you sure you want to keep this at PHP 7.4 for another decade? Because I'm pretty sure you don't want to do that; PHP 7.4's EOL is, like, next year or something, and you want to be on the current major version of the PHP interpreter for the next decade. And they were like, oh yeah, we'll do this, and I think the bug just closed this morning, because they finally verified that it was actually pushed, like a month ago, to get PHP 8 into CentOS Stream 9. So yeah, it's there. I kind of hope there will be a bridge stream of PHP 7.4 between UBI 8 and UBI 9, so that it's easier for us to move from UBI 8 to UBI 9. At least, the last time I talked to some folks about how this is supposed to work, the idea was that there would be a so-called bridge stream, where the latest stream supported on RHEL 8 would also be available in RHEL 9, along with the new defaults and all the new stuff on top. That hasn't happened yet, at least I haven't seen that content in CentOS Stream 9, so I'm still crossing my fingers that it's actually going to be a thing, because that'll make it so we can more aggressively move from 8
to 9, since it wouldn't make the move contingent on us doing a PHP bump. Right now we're completing our transition to PHP 7.4, and I would like for us to be able to say: well, we're on 7.4, let's tick a checkbox in our build system, all the extensions auto-rebuild for 7.4 on RHEL 9, and then we just switch the base containers and nobody cares. That's the dream: we switch the base container and nobody cares, everything transitions over seamlessly, and we're in a more secure position because we can ramp from RHEL release to RHEL release. Honestly, it's a matter of days; that's how fast we can do this if everything works as well as we hope. So, any other questions from anyone? Or am I again not hearing Dan speak? No, no, I think you're hearing me now. Okay, yeah, I'm hearing you now. And Robert Scheck, I do want to thank you for mentioning Remi. I woke up this morning thinking, oh no, we don't have a Remi slide; we definitely need a slide where we acknowledge Remi's work on the Remi repo. And then a billion other things prevented me from getting to that slide, so thank you for making sure Remi's work got mentioned. Because, yeah, we'd find a way to do it, it might be possible, but none of this would be feasible at the scale we're doing it without the work that starts in the Remi repo. Yeah, and Remi and I have done a lot of work together over the years as I've ramped up work within Datto on UBI-based PHP and the like, and even before that, when we were getting things moved onto PHP 7 and then later onto newer 7.x versions. I try to make sure I can help wherever I can figure things out, and he's been good to work with; he does a lot of great work, and it makes our lives considerably easier. I actually donate to him out of my own pocket every once in a while, when I have money to give, to help keep his services going. So if you benefit from the Remi
repo, throw him a few bucks; buy him a pizza. Yeah. So Garrett Tucker says, and I'm just going to quote you because your message is long: "As a product security engineer at Red Hat: although the build logs aren't visible, I don't know if I fully agree with the statement that there's a little bit less need for 'trust but verify' for each package or new build." The idea that because there's a whole team, and there's reputation there, open source reputation: first of all, that's pretty much how this works. But everybody is human, and it is important to acknowledge that. The more eyes are on something, the less likely you are to have major faults and problems. I have certainly caught my fair share of things that slipped through the cracks in CentOS before; I've sent fixes, and they've been fixed, and I'm glad we have the CentOS Stream pipeline to be able to send those things in and get them fixed. I don't want to make the assertion that anyone is infallible. Not me; nobody's infallible. What we can do, though, is the best we can to support each other and make sure the quality of what we provide is the highest it can possibly be. And while maybe "trust but verify" is a tad bit less necessary with Red Hat, because reputation, contractual agreements, and things like that tend to give you recourse, visibility and transparency are important too, and I think they are something everybody should strive for, even in commercial companies, whether that's Red Hat or whoever. There's honestly no reason not to at least be transparent about your build logs or any of those other things; what's it going to hurt? So it's more about this: at a certain level, there's only so much you can verify yourself. You inevitably have to extend some kind of trust to someone
else, and you want ways to verify those obligations and things like that. But really, it's just a matter of: I want to make sure I understand where it's all going, and Fedora is nice in that regard. It makes me feel comfortable using anything from there, because I can easily compare what I'm doing against what they're doing, see whether I'm doing it wrong or they're doing it wrong, and then be empowered to fix it. Those are the two sides of the coin that matter: you can see something, and you can say something, and because I can do both, it's quite great. But yeah, I really would like to be able to build these PHP modules in EPEL and have them as modules that can depend on base modules in RHEL, because I would much rather do that; it's more beneficial to the wider community. Unfortunately, the infrastructure around Fedora EPEL is not in the greatest of shape when it comes to supporting the modularity technology, and I understand why; there are architectural things that make this sort of difficult. But I'm crossing my fingers that someday that will be resolved. Although, I am kind of impressed that a product security engineer decided to come to our talk. That's cool. I love you people; you're great. Red Hat product security sends me all kinds of bug reports for all of my packages, even though they don't have to, and it helps me make sure my stuff is good and secure too. So I love you guys. Let's see, we've got a few minutes left. Is there anything from anyone else, or are we just going to talk about Red Hat customer care tickets in the chat now? This just makes me laugh, honestly. This is what conferences are all about: getting people who otherwise might not talk to each other talking about things they wouldn't otherwise be able to talk about. So I love that. I love that people are able to come
together to make a connection. Oh yeah, that was fantastic. I just find it amusing, you know, also GSS/CE, that isn't... Did we lose Neil, or is it just me? Yeah, okay. I think we all... I'll message him elsewhere; one moment. Regardless, I think we're pretty much wrapped up. We'll stick around for the next few minutes in case people have more questions or want to chat. I was going to say chat about more tickets, but that seems like a terrible thing for me to offer, since I have no power to help anybody with those tickets. If we have no more questions, I just wanted to thank the Fedora community for making all this cool stuff possible. It is an incredibly great project and an incredibly great community, and it's fantastic to keep finding new things that are enabled by the community, even things the community never thought of. You're making a cool thing; we can build on that thing to do something neither of us ever thought of before. So thank you, everyone, no matter how you're involved in that community.
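[Editor's example] The workflow described above, pick a PHP module stream, rebuild extensions against it, then swap the base container, can be sketched as a Dockerfile. This is a minimal sketch only, not Datto's actual base image: the registry path is Red Hat's public UBI 8 image, and the `php:7.4` stream and extension package names are assumptions you should check with `dnf module list php` for your UBI release.

```dockerfile
# Hypothetical sketch: pin one PHP module stream on top of Red Hat's
# Universal Base Image. Image name, stream, and packages are
# illustrative; verify with `dnf module list php` in your UBI release.
FROM registry.access.redhat.com/ubi8/ubi

# Enable exactly one PHP module stream, then install the interpreter
# and a couple of common extensions built against that stream.
RUN dnf -y module enable php:7.4 \
 && dnf -y install php-cli php-opcache php-mbstring \
 && dnf clean all

CMD ["php", "--version"]
```

In the ideal "bridge stream" world described in the talk, moving to RHEL 9 would then be little more than changing the `FROM` line, with the extension rebuilds handled upstream of this Dockerfile.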