Yeah, that's the way to do it. At FOSDEM, for the lightning talks, they have a really annoying alarm clock that goes off every ten or fifteen minutes, whatever the slot is. Speaking of time, there are no clocks in this room. No, I was looking for a way to track time while I'm speaking, but it's fine. I've given a version of this talk before, titled "Building Community with CentOS Stream", but this one has more of a historical bent, so the title is different. Alright, I think we're at time, so we can probably get started. Cool. Morning everyone, my name is Davide, I'm a production engineer at Meta, and we'll be talking about Building the Future with CentOS Stream. Here's the agenda for today. We'll start with a quick historical introduction of how we've been using CentOS at Meta and how that changed throughout the years. We'll talk specifically about contributing upstream and the work we've been doing in that space. We'll then deep dive into the Hyperscale SIG, which is one of our most recent efforts in terms of contributing upstream within the CentOS project. And we'll close with a few words on how you too can get involved. So let's get started. As I said, I'm a production engineer. I work on the Linux team right now, which is responsible for all the components of the Linux ecosystem running in production at Facebook: the Linux distribution, but also systemd, components of the userspace, the kernel, and all of that.
Before that, I was on the Operating Systems team, which took care of the deployment and maintenance of the distribution in production, the platform that all the other services at Facebook run on. I'm going to say Facebook and Meta interchangeably because I'm still not used to the new name, sorry. So Meta has a lot of machines. We have millions of servers spread across the globe in several data centers. All of these are physical servers, to be clear; we don't really use VMs for production. All of these servers run CentOS, and have been running CentOS since the beginning, or at least as far back as I've been at the company; I started in 2012. So before we talk about how we use CentOS, I want to spend a few moments talking about why we use CentOS. CentOS has a lot of very desirable properties for a Linux distribution in a production environment. First of all, CentOS gives you stable releases, and stable releases are excellent checkpoints because you can use them as synchronization points in time whenever you want to apply other changes; I will talk about that a bit more in a minute. Critically, CentOS gives you binary compatibility, and that is a feature that is not quite as common in Linux distributions these days, but it is extremely valuable when you're doing deployments at scale. Binary compatibility means that at any given point in time, I can update packages on a production system and whatever is running on the machine will keep running, because updates within a given release of the distribution, so within, say, CentOS 8 or CentOS 7, maintain ABI and API compatibility. So I can update libraries under the hood while a program is running, and the program will keep running.
And this works like 98% of the time, because sometimes people screw up, but for the vast majority of cases it works, and it means we can effectively apply security updates, and any kind of updates, to the fleet in a mostly unattended fashion and expect them to just work. Speaking of updates, of course there are security updates, which are valuable because we like security. Because CentOS is part of the Red Hat ecosystem, which has been around for a while in this space, it has a lot of very mature and well-understood tooling. At this point, we understand very well how to work with things like RPM, the packaging stack, and all the tooling that is specific to this infrastructure, which is also very easy to build on top of. Also, because it's part of this family, we can leverage the extended Red Hat family: we can leverage content from EPEL, which is Extra Packages for Enterprise Linux, and we can leverage the fact that CentOS is built from Fedora, make changes in Fedora, and then see them trickle down to CentOS. We will talk about this specifically in a bit. So over the years at Facebook, and really in any environment where you maintain a complex production infrastructure for a while, you will notice that you run like 80, 90% of stuff as it comes from upstream, but then there's something you need to change, be it because you need to update a package internally, or you have internal packages to add, or you have missing dependencies, and all of that. The way we handled that up to a few years ago was that we would keep internal backports of packages we cared about that were tracking upstream more closely. So we have packages like systemd, where we do a lot of development upstream, and we would want to follow the upstream development closely instead of being pinned to the stable releases that CentOS provided.
So we take these backports from Fedora Rawhide, which is the development version of Fedora, and we maintain those internally. Later, we realized that there wasn't really much of a point in keeping these internal-only, so we put them on a GitHub repo. And we wrote these in a way that gated the Facebook-specific changes behind macros, so that if you too wanted to build these packages you could use them in your infrastructure and you wouldn't get, like, our NTP servers and stuff. We call this internally FTL, the fast track layer, and this gave us the ability to effectively have a distribution that was stable but also moving as quickly as we wanted. We mostly applied this approach to the lower-level packages in the distribution: systemd, as I mentioned, and things like the userspace plumbing. The other thing is that we do not run the stock CentOS kernel; we never really did. We've always run our own kernel build, primarily because we have a lot of kernel developers who do a lot of kernel development in-house at Meta, and it is a lot easier to just run the upstream kernel, because that's where the development is going to happen anyway, that's where the changes are going to be merged, rather than running a vendor kernel that has a lot of backports you then have to maintain and that ends up being a very different beast. We also make a number of policy changes on the kernel side compared to the distribution: we run, for example, cgroup2 by default. We use Btrfs extensively at Meta, and Btrfs has been the root filesystem for the entire fleet as of CentOS 8. It was with CentOS 8 that we flipped the default, but we had been using it even before, when the fleet was mostly on CentOS 7. So this is an example of what we call a policy deviation: something where we consciously make a choice to put something in production that is different from what the distribution itself is doing.
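To give a flavor of how backports like these can gate company-specific bits, here is a hypothetical spec file fragment using RPM build conditionals (the `facebook` conditional name and the source file are illustrative, not the actual macros used in the FTL repo):

```spec
# Hypothetical sketch: company-specific bits are only built when the
# "facebook" conditional is explicitly enabled (it is off by default,
# so external rebuilds get a clean package).
%bcond_with facebook

%if %{with facebook}
# Internal-only configuration, e.g. pointing at internal NTP servers
Source10: internal-ntp.conf
%endif
```

An internal build would then pass `--with facebook` to rpmbuild or mock, while anyone else building the same spec gets the stock behavior.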
Other examples: for iptables, for a variety of reasons, we don't use nftables in production at Meta; we still want to use the legacy iptables backend, so we rebuild the iptables packages with that backend enabled. On the networking side, upstream CentOS has been using NetworkManager for a long time. We don't really use NetworkManager in production; historically, we've been using network-scripts, since CentOS 5 and earlier, and that's what we kept running until effectively today. Right now we're in the process of moving from network-scripts to systemd-networkd. That is what I expect we will deploy fleet-wide with CentOS 9, and we're in the process of refactoring it and rolling it out to a portion of the CentOS 8 fleet as well. Now let's talk about upgrades. As I mentioned, I've been at Meta since 2012. When I joined in 2012, we were running CentOS Linux 5. We didn't really have a process at the time for doing OS upgrades, so we kind of stumbled through it and managed to eventually update the fleet to CentOS Linux 6, and that took a while. When we did 6 to 7, that was a big transition, because 6 to 7, if you remember, was the one where systemd was introduced, and that meant having to convert all of the services we had internally, where at the time we had like five or six different init and supervision systems for services. Yes, Neil? I have another talk on that if you want, but it's not this talk. And it was converting all of these onto systemd, because it was a great opportunity to get rid of a lot of technical debt and cruft and just use a unified system. That was also the first time we started actively engaging, I would say, with upstream and trying to work with folks. We worked with folks on Anaconda, for example; at the time we were using Anaconda as the system installer.
We started engaging directly with the systemd project and contributing a number of changes that were informed by our production deployment. More recently, we've been migrating the fleet from CentOS Linux 7 to CentOS Stream 8. And right now we're in the process of starting the 8 to 9 migration: we just finished qualifying CentOS Stream 9 in the past few months, and we kickstarted the mass migration earlier this month. I expect it will take the better part of this year and the next; these things usually take a while. The reason they take a while is that when we do major OS upgrades, we do a full reprovisioning of the system: we wipe the machine and reimage it from scratch as if it were a new machine. There are a few reasons why we do this. First of all, technically, in-place upgrades aren't supported for CentOS. There are ways to do in-place upgrades with CentOS, and if you actually try it, it will mostly work in the vast majority of cases. There are also tools you can use to make this work better: there's the ELevate tool that the AlmaLinux folks have, and there's the Leapp toolset that Red Hat maintains. However, we don't really want to do in-place upgrades, because when you do in-place upgrades you're carrying over all of the stuff you had previously on the system. And a major upgrade is a rare opportunity where you can have a clean slate and choose to actively deprecate things and switch to new things at the same time. So we generally treat OS upgrades as synchronization points where we can couple a number of other things with the upgrade itself. Usually we will switch the default kernel to a more modern kernel that has features we care about; updating the kernel implies a reboot, and you're rebooting anyway if you're reimaging the machine, so you might as well do that. We used this when we switched the root filesystem to Btrfs, because we were going to reimage the machines anyway.
We've used this in the past to also deprecate various internal services, because it was a good opportunity. And because this leverages the general maintenance windows we have, it can leverage all the existing automation. The way we do this in practice: to be clear, at Meta the general understanding is that if you run services in production, you own the machines you're running those services on, and you're generally responsible for doing these migrations yourself. We will provide the tooling and we will give you deadlines, but the actual mechanics of the migration are up to you. In the past this was a fairly manual process, but as of the latter half of CentOS 7 and the beginning of CentOS 8, we had really good tooling for doing these in an automated fashion. So people can effectively say: I can lose this many machines, at this rate, with these physical constraints; fire; and the automation starts churning through machines, takes them offline, reprovisions them, puts them back online, checks that they're healthy, puts them back in production, and so on. So it's a fairly hands-off approach, as long as there aren't any regressions or other issues, of course. And we've been fine-tuning this tooling. It is unfortunately very tightly coupled with our infrastructure, but you could build something like it out of open source tooling; there's nothing particularly earth-shattering in it. Now, we don't do reprovisioning for everything; we only do that for major OS upgrades. For minor OS upgrades, we just do incremental updates by effectively running a DNF upgrade, like you would expect. We snapshot the yum repositories, so we always have a stable point in time that we can reference; we take snapshots of all the upstream repositories every two weeks.
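The reprovisioning automation itself is internal, but the core loop is straightforward; a minimal sketch (all names hypothetical), assuming the owner specifies how many machines can be offline at once and supplies reimage and health-check callbacks:

```python
from typing import Callable, Iterable

def rolling_reimage(hosts: Iterable[str],
                    max_offline: int,
                    reimage: Callable[[str], None],
                    healthy: Callable[[str], bool]) -> list[str]:
    """Reimage hosts in batches, never taking more than max_offline
    machines out of production at once. Returns the hosts that failed
    their post-reimage health check (and so stay out of production)."""
    failed = []
    hosts = list(hosts)
    for i in range(0, len(hosts), max_offline):
        batch = hosts[i:i + max_offline]
        for h in batch:
            reimage(h)           # take the machine offline and reimage it
        for h in batch:
            if not healthy(h):   # only healthy hosts go back into production
                failed.append(h)
    return failed
```

A real implementation would also honor physical constraints (rack and region spread, failure domains) and pause the rollout automatically when regressions show up.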
We started doing this process before composes of the distribution were a public thing. Nowadays, if you were to implement this from scratch, you would probably just want to use the CentOS composes that they publish. If you're not familiar, a compose is basically the distribution put together and packaged up: an installable set of the entire distribution. They publish these anywhere between every few days and every few weeks, depending on the development cycle upstream, so that's a good synchronization point if you're building something like this. In our case, we just take snapshots of the repos every two weeks, and then we roll them out to the fleet across two weeks. We do the actual rollout using Chef, which is our configuration management system. Chef has some logic in the cookbook that manages DNF and YUM; it has logic to figure out: OK, I am on version X, we're moving to version X plus one, update the repo definitions, run the DNF update, check that it's fine. Of course, we have ways to monitor that this is going well, and we have ways to stop it. We generally start the rollout on a small portion of the fleet and canary it up. But because of that binary compatibility guarantee I mentioned before, this is a more or less hands-off process. Generally, whenever we do this, we will spend a day or so at the beginning checking for conflicts and issues; oftentimes the issue you end up with is some internal package that has messed-up RPM dependencies that we have to fix. The upstream stuff usually just works. And then you apply it, you roll it out, and you move on. Now, we should talk about containers as well. Everything I said so far has been about bare metal. The reality is that these days a lot of production workloads don't run on bare metal directly; they run on the container platform, which itself runs on bare metal. The container platform at Meta is called Tupperware.
It's an internally developed platform; development started long before containers were a thing, or at least a widely used thing. But it uses all the traditional container technologies you might expect, so cgroups, namespaces; we're increasingly using more and more features from systemd for the container management itself, and in fact the container agent that we run on the host is primarily based on systemd nowadays. While Tupperware itself is not open source, a lot of the components are slowly being rebuilt on top of open source tools, and as part of this process we're improving the corresponding tools. The containers themselves also run CentOS; in fact, they run the same CentOS that we use for the bare metal hosts. We build the container images from the same repositories we use to build the production systems. We have an internal tool for building container images called Antlir, but it's about the same as any other container image build tool you might expect. The good thing and bad thing about containers is that because they're decoupled from the host, the update cycle doesn't have to be in sync with the host. So you often end up having, say, CentOS 7 containers running on CentOS 8 hosts, and vice versa. This is good and bad: it makes updates easier, because you can update containers and hosts independently; it's also bad, because you often end up with stragglers, and sometimes the difference does matter, for example if you're transferring data back and forth. And in the case of containers, obviously, we don't do in-place updates: they're containers, so it's a lot easier to just tear one down and bring it back up with a new image. OK, so what I talked about so far is roughly what we had been doing up to 2016, 2017. Over the years this approach worked, but we definitely found that there were issues; it was tricky at times to do this nicely.
For FTL specifically: because we were maintaining these many backports internally, every time you backport something and maintain it yourself, you're effectively forking it. And if you fork something, you have to maintain it forever, or at least throughout its lifetime. And because these are forks, you don't really have a good avenue for upstreaming the changes you're making. Well, you can send PRs, of course, but it wasn't really clear, and what would end up happening is that things would diverge quite a bit, especially for leaf packages that people don't necessarily follow super closely. Oftentimes what you see is that someone backports some random library because they need it for their tool, but they don't actually care about the library itself. So they backport it once, and then there are 25 CVEs on it, and, cool, now we need to fix that. Also, we were putting these on GitHub, but they were just a bunch of spec files on GitHub. I know a few people were using the spec files to do similar backports, but they weren't super usable. And at the same time, as I mentioned, whenever things were fixed upstream, we would have to manually integrate those; and we would also have to manually react whenever the distro would update a package, or pin our version so it wouldn't get shadowed. Same thing on the policy side: whenever we made a decision to change something, we then had to deal with the consequences. The main issue here is that when you deviate strongly from what the distribution is doing, you run the risk of not being able to effectively report bugs in a useful way, because the bugs you end up encountering might be specific to your setup and might not apply to the distribution. So you end up having to debug everything yourself. And we also didn't really have a feedback loop on whether the choices we were making were useful, or were things others could benefit from.
So now let's talk about how we can do better, and how we can do better by engaging more closely with upstream. I don't think I need to explain the benefits of working with upstream to this crowd, but I'll do it anyway for the benefit of the wider audience. The main thing we've discovered over the years, well, "discovered"; I don't think this is particularly a discovery, but the thing that is important to realize is that whenever you're working with projects that have a well-established and wide community, you are not going to be the one doing the impactful work most of the time. You're not going to be the one setting the direction. The community is where the work is going to happen. It's easy, especially if you come from a large company, to think of everything you're doing as awesome and cutting edge. But the reality is that for a lot of projects, the cutting edge is what is happening outside. And if you want to be a part of that, you have to work with the people doing this on the ground, work with the project, and make things better. Also, if you think about something like a Linux distribution, there's going to be a subset of it that you really care about and want to follow very closely. But there are also going to be a lot of things that you might use tangentially but really don't want to maintain yourself; you don't have the expertise, you don't care. Think about, for example, running parts of LibreOffice for some production workload: do you really want to maintain that in-house, unless you're an expert in it and it's part of your core business? The benefit of being able to do this work upstream is that you can leverage what the community is doing. You don't have to do everything yourself. And whenever you fix something, others can benefit from it, and vice versa.
At the same time, whenever you write something new, if you make an effort to open source it from the beginning, then everybody else can contribute and give you feedback. Over the years, we found that by far the best way to do this is by just showing up in the community. If you show up where people are doing actual work and engage with them as a peer, go there, solve real problems, real engineering problems, that's how you generally do work in this space. That's how you become a real member of the community, how you build trust with the community, and how you gain a better understanding of what's going on. So let's talk specifically about CentOS now. And to talk about CentOS, we need to take a bit of a history detour and look into the sausage-making machine of how CentOS was created over the years. There are various players in this game: there's CentOS, there's Fedora, and there's Red Hat Enterprise Linux. And to be clear, I don't work at Red Hat, I never worked at Red Hat; this is my personal understanding of how this works. So: Fedora is the community distribution maintained by Red Hat that tends to be close to the cutting edge; it's what you might be running on your laptop right now. Red Hat Enterprise Linux is the product that Red Hat, the company, sells, which is an enterprise Linux distribution. And CentOS historically was a rebuild of Red Hat Enterprise Linux from sources. So the development process roughly went like this: they would take the version of Fedora at the time, which in the case of CentOS Linux 7 was Fedora 19 (that was a while ago). Internally, at the beginning of the development cycle for RHEL, they would take Fedora 19 and snapshot it, and this would get snapshotted into an internal staging distribution that Red Hat maintained, kind of like a primordial soup that they would use to slowly stabilize the distro and turn it into a commercial product.
When this was ready, they would release Red Hat Enterprise Linux, and together with it, they would release the sources. Then a different group would take the sources, rebuild them from scratch without using any of the existing infrastructure, and release CentOS Linux as the product of that. So if you were us, running CentOS Linux in production, you would then take CentOS Linux and deploy it. Now let's say you're running CentOS Linux 7 in production and you find a bug and you really want to fix it. You fix it internally, but you would like the fix to go upstream, so you don't have to keep maintaining it forever. What can you do about it? Your options are kind of limited in this world. You can't really contribute to CentOS, because CentOS is just a rebuild of RHEL; the only real meat in there is the build and the build process that rebuilds the distribution, there isn't really anything else, so you can't make any changes there. You can't really contribute to RHEL either. For starters, RHEL is a product; even if you are a customer of Red Hat, they didn't really have a way to send PRs. There wasn't a development process, there wasn't a git forge or anything. You could file issues on Bugzilla, and maybe they would look at them, maybe not; maybe they would take the patches; but your options were kind of limited there. And even if you did get your patch included, it would be included in the next minor release, which might not be coming for a while. You obviously can't do anything about the staging distribution, because it's internal to Red Hat. The one place you could contribute is Fedora. And while you could absolutely do that, if you managed to get your changes landed in Fedora, they would only impact the next major release of RHEL and CentOS, which could be years off. So you were kind of stuck: you effectively had to maintain a lot of stuff internally. With 8, things changed a little bit. With 8, the process was similar: they started from Fedora 28.
They branched it into an internal distribution, which I didn't put on the slide because it didn't fit. But then, when they released Red Hat Enterprise Linux 8, they also released something new called CentOS Stream 8. The idea is that CentOS Stream 8 is a continuously delivered distribution that is developed in the open, and that's where changes land first; changes from there will then land in the next minor release of RHEL. So it is effectively the distribution that Red Hat used to maintain internally for developing the next minor release of RHEL, before it went public. If you were running CentOS or RHEL, you probably remember that you had a base repository and an updates repository that was getting a trickle of updates; that's effectively where those updates would come from. With 8, these updates just show up in CentOS Stream after they pass the CI/CD pipeline and everything. But the new thing here is that all of the sources for Stream were easily accessible in one place, and there was a contribution process. So you could actually send PRs, you could send bug reports, you could get changes merged and applied to the distribution, and they would land in the next compose. So if you found a bug in some fringe package, I don't know, nmap or something, you could send the fix upstream. And if the fix was deemed acceptable (because obviously, like any other project, they can choose to take or not take your contributions), it would get merged, and then you could just apply it. And this massively reduces the burden of maintaining things internally, because you don't have to: you can just upstream them. With 9, which is where we are now, things have changed even more. With 9, they started from Fedora 34, which is not that long ago, actually. And in addition to releasing CentOS Stream 9, there is also a new distribution in the middle called Fedora ELN.
And Fedora ELN is effectively the staging distribution I was mentioning earlier, but public; and not only public, but continuously updated at any point in time. The idea with ELN is: what if we took the development version of Fedora today and made RHEL out of it every day? It is something that exists, that you can install today, and that gives you a window into what the next major version will effectively be. Then, of course, you have CentOS Stream, and the other change with 9 is that they dropped the CentOS Linux rebuild. So the only deliverable that comes out of the CentOS project nowadays is CentOS Stream. To recap, there are many avenues you can use to contribute in this ecosystem if you want to be involved. You can work at the Fedora layer. Fedora is what influences what goes into the next CentOS Stream major release. It is a great place to work if there are new technologies you would like to see adopted, or things you are developing, especially if they are relevant to the whole ecosystem. It is also a great place to maintain packages: if your company or your environment has open source software they care about, making sure that software is packaged and maintained in Fedora is a great way to gain user adoption. One thing I wanted to stress in particular is the change proposal process, because I think it is a very good process for managing change at scale in a distribution. In the case of Fedora, the way this works is that whenever anyone wants to change something notable in the distribution, they have to do the actual work, obviously, but they also file a change proposal; once the proposal is approved, the changes are implemented in the distribution and become available to everyone in the next release. Over the years, we've leveraged this process quite a bit, and there are several changes we landed in Fedora. To be clear, this isn't just work that Meta did.
All of these things had major contributions from the community, both from other companies and from our community partners who helped us. For Fedora 33, we landed Btrfs by default: as of Fedora 33, Fedora ships with Btrfs as the default root filesystem. With 34, we shipped Btrfs with zstd transparent compression by default, which makes it more efficient on solid state drives and in other environments. With Fedora 34, we also shipped systemd-oomd by default. systemd-oomd is a userspace out-of-memory killer implementation that leverages some new kernel features to more or less predict the future: it can figure out when your system is about to go out of memory, before it actually does, so that it can react before the kernel OOM killer is unleashed. This is something that was developed at Meta, contributed upstream to systemd, and then deployed in Fedora. With 35, we shipped Btrfs by default for the Fedora Cloud images as well. With 36, we moved the RPM database to /usr, which helps for some snapshotting use cases. For 37 and future releases, we have a few changes in the works right now. I'm not going to read all of the slides; if you're interested, you're welcome to ask me questions, and I'm happy to go into details. If any of this sounds interesting, you're also welcome to help out, obviously; we can always use more help. And if you too would like to make changes in Fedora, this is how you can do it. Now let's talk about EPEL. I mentioned EPEL briefly at the beginning. Fedora builds packages for its own distribution, which is Fedora, but a lot of these packages are also useful in other environments. With EPEL, a subset of Fedora is rebuilt and targeted for Enterprise Linux, so the packages can be used on RHEL and on CentOS Stream. We use this a lot, because the set of base packages for CentOS is pretty small, and there's a ton of other packages you may want to use in production.
You really don't want to maintain them internally, so being able to leverage the packaging work that is done in Fedora is incredibly useful. In more recent years, there's been an effort to make this process more streamlined. Historically, whether a package was branched for EPEL or not was really up to the maintainer of the package, generally a single individual human who may or may not care about it. These days, we're trying to set up a process so that packages can have more of a collective maintenance if needed, so we can share the burden. A lot of the time, packaging something for EPEL isn't terribly challenging, but it's also not terribly interesting: you're basically merging the changes from the Fedora branch, doing the build, checking that the build works. Cool. There's rarely any challenging work to do there, so it makes sense to have a more streamlined approach to it. We also did a lot of work on tooling. One of my colleagues, Michel, is working on a tool to automate the process of branching packages, so that it will be easier to add them in the future. And we set up a special interest group to manage the collective maintenance of EPEL more easily. I mentioned ELN earlier as well. ELN, as I said, is a continuous rebuild of Fedora Rawhide, but using the CentOS macros and the CentOS toolchain. So the end result you get out of it is effectively what CentOS would be if it were built from today's sources of Fedora. This gives you a window into what the next CentOS version could be: if you look at ELN today, ELN is what the CentOS 10 build will be in a year and change, when CentOS 10 is branched and comes out. There's a special interest group for ELN as well, which we've been engaged with. Most of the work there has been around making ELN easier to consume (ELN produces installer images and a set of repos), and around how we could extend it to more than just the basic distribution.
One idea that we had early on was that we could use this to also make life easier for EPEL, because if you're testing packages for the next version of Stream, you might as well test the set of packages that are also going to be branched for EPEL while you're at it. This is covered by something called ELN Extras, which is effectively a set of additional packages that aren't part of the distribution itself, but are still tested and composed together with it. At Meta, we are using this in a variety of ways. For starters, there's a number of open source projects we maintain that we want to make sure keep working in Fedora and in CentOS and stay packaged properly. We do this by having CI/CD pipelines in these projects upstream that leverage Packit, and Packit delivers us RPM builds for various chroots, one of which is the Fedora ELN environment. And not only does it deliver the builds, it delivers nicely formatted repos. So if you want to test today's build of, say, the below resource-control daemon against Fedora ELN or whatever, you can just get the packages from there. The other thing we do is that for packages in EPEL that we want to track, we have a workflow in place so that they're branched for ELN Extras. These are either packages we maintain, or packages that somebody else maintains but that we want to track closely. Internally, we're also in the early stages of establishing a CI/CD pipeline using ELN. The idea is that whenever we do a new CentOS major release qualification, there's a period of several months that we have to dedicate to the qualification, and it'd be real nice if we could spread that out instead. 
And if we had a way to deploy ELN to the fleet and run a production workload on it throughout its life, we could spot issues very early on, and either work on them internally if they turn out to be issues on our side, or work with upstream to see if we can make things easier and better for everyone. Moving on to CentOS: CentOS Stream, as I said, is a continuously-delivered distribution that tracks the next minor release of RHEL. So if right now we're at RHEL 8.4 (I don't actually know), this would track RHEL 8.5, or X plus 1. You can file bugs for CentOS Stream that way. The reason I put that on the slides is because I find it entirely non-obvious: you actually have to file the bugs against RHEL, but put CentOS Stream as the version. The older sources for CentOS Stream are on git.centos.org, and by sources here I mean all of the package specs. The way the CentOS community is governed is via special interest groups, and SIGs are really the building blocks of the community; that's where all of the interesting development work happens. I will talk about SIGs a bit more in a second. For Stream 9, it's more or less the same; as you can see, these slides are pretty similar, except the sources nowadays have moved to GitLab. So you go to gitlab.com, under redhat/centos-stream, and you can use the GitLab MR workflow, which you might find a little bit easier than the previous one. The builds are on Koji in the same way; there's a different Koji instance for 9, for reasons, but it doesn't make a practical difference. And that's the link where you can find the daily composes if you would like to try them out. I mentioned before that there's now a process for doing contributions to Stream, and we actually leveraged this quite a bit during the 9 development cycle. Because of this, we were able to land a variety of changes into CentOS Stream 9 long before RHEL 9 actually came out. 
So these were all changes that landed in CentOS Stream 9 between the time it branched and the time RHEL 9 was out. This was great, frankly, because these are all things that we would otherwise have had to carry ourselves, that maybe would have taken months to years to get into the distribution proper, and this way, they were just there. And we plan to continue doing this in the future. To be clear, again, this is not just work that we did at Meta: this was the result of cooperation between us and members of the community like Neil, who is smiling in the second row, who helped us a lot through this process. Now let's talk about SIGs, and specifically about Hyperscale. The Hyperscale SIG is a special interest group that we founded in January of last year. The idea behind Hyperscale is to have a place for companies and engineers that want to work on large-scale infrastructure to work together. It focuses entirely on CentOS Stream. In general, all of these companies do tooling development and deployments in-house, and they may end up reinventing the wheel all the time; we want them to be able to cooperate on this together, in a place that is in the open, bringing all that in-house development out so folks can work on it. These are folks from Meta, from Twitter, from Datto, from a variety of other companies. Oh, and Intel as well now. But you don't have to be in a company of any kind, or in a large company, to be a part of this. You can also just be interested. And while this nominally targets large-scale deployments, the reality is that a lot of the work we're doing here is hopefully also useful to smaller-scale deployments. There's a few links there to the splash page of the SIG and our user documentation. We hang out on IRC on Libera.Chat, and the room is bridged to Matrix. Most of us are in US Pacific time, but there's usually someone around all the time. 
So you're welcome to join and ask questions if you would like. I'm going to try to quickly go over the main things we do here. The main deliverable of the SIG right now is what we call faster-moving package backports. I mentioned earlier the backports we used to do internally at Meta; this is effectively the same, but done properly and done in the open, in a way that other folks can contribute to. We deliver updated backports of distribution packages, either with new features enabled or that follow upstream more closely. The idea is that if you're running CentOS Stream 8 or CentOS Stream 9 and you would like a more updated version of a package, you can install the version that we provide in Hyperscale, and the distribution will keep working just the same as before, just better. These target stable, production use, and they are the same packages that we are running at Meta in production. If you have a CentOS system right now, you can get this by doing dnf install centos-release-hyperscale, which will enable our repos. Then if you do dnf update, it will update all of your packages to ours. If you don't want that to happen, you can use versionlock or other solutions to pick and choose what you want. There's a lot of packages in here. I will not read through the list, but you can pull up the CBS tag to see it. CBS is the Community Build System; it's where builds happen for the packages that special interest groups work on. Most of this is what we call low-level system user space, or Linux user space: things like util-linux and dracut, and packages that allow you to do hardware enablement, enablement of new features, or support for newer kernels and things like that. I want to talk specifically about systemd, because I think it's a great example. We've been maintaining a systemd backport for several years at this point, tracking the latest stable production release of systemd. 
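In practice, enabling the SIG's repositories and optionally pinning packages looks something like this. This is a sketch: the centos-release-hyperscale package name comes from the talk, and the versionlock plugin package name is an assumption based on current EL packaging, so check the SIG documentation for the exact names.

```shell
# Enable the Hyperscale SIG repositories (package name per the talk;
# verify against the SIG docs before relying on it)
sudo dnf install centos-release-hyperscale

# Pull in the SIG's faster-moving backports across the board
sudo dnf update

# Or first hold selected packages at their stock versions
# using the dnf versionlock plugin
sudo dnf install python3-dnf-plugin-versionlock
sudo dnf versionlock add kernel
sudo dnf update
```

With a versionlock in place, dnf update skips the locked packages while still pulling the rest of the SIG's backports.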
That backport is what we've been running in production at Facebook, at Meta, for years. It is based, as I mentioned before, on the Fedora packaging. You can see the spec files for it on git.centos.org. We also maintain it in its own Git repo on Pagure, so it's easier to track patches and see what changes have been made; mostly these tend to be verbatim imports of upstream releases with the occasional bugfix on top. We also have a CI/CD pipeline where every day we take the latest Git master of systemd, build it against our packaging, and publish it. This makes it really easy to spot regressions in the build system. It also makes it easy, if you want to test a one-off feature that was just merged into systemd and see how that works out for you: you can just grab those packages and install them. This uses the CentOS OpenShift CI environment. One of the things that CentOS provides to special interest groups is access to an OpenShift environment where you can run effectively arbitrary container jobs. This is great, and it has made our life very much easier than having to reimplement all of this in-house, as we did before. The other set of changes that we do is what we call policy changes and configuration alternatives. The idea here is that we release packages that have different defaults, but that should still be usable and should not apply negative changes to the distribution. The traditional example here is iptables: we ship a modified iptables package that also has the legacy backend enabled, so if you don't want to use nftables, you can use that. Now, everything I mentioned so far are things you would want to run in production, and you should be able to run them on your production machines without any issues. The other set of things we work on are experimental changes: things you may want to run on your production machines eventually, but that you don't necessarily want to deploy everywhere yet. 
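Returning to the iptables example: on EL-family systems the backend is selected through the alternatives mechanism, so switching to the legacy backend from a package that ships it would look roughly like this. This is a sketch under the assumption that the package registers an iptables alternative at the usual path; the exact paths may differ.

```shell
# Show which iptables backend is currently selected
alternatives --display iptables

# Point the iptables alternative at the legacy (non-nftables) backend
sudo alternatives --set iptables /usr/sbin/iptables-legacy

# Confirm: the version string should report "legacy" rather than "nf_tables"
iptables --version
```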
A lot of the time, when you're doing development of these forward-looking features, it is really useful to be able to test and deploy them in production, but it's often not just a matter of building one package; it tends to be a set of interconnected changes. A good example here is the work we've been doing for a while on DNF and RPM copy-on-write. That's its own talk altogether, but the short version is that it's a set of changes to the RPM packaging stack that allow it to leverage the copy-on-write features that modern file systems like Btrfs provide, to make package installation a lot more efficient. This requires changes to RPM, to DNF, to the entirety of the stack. It's something we run in production at Facebook now, but it's something you may not want to run in your environment until it has stabilized a little bit more. It's also not in Fedora yet; we have been working on a change proposal to get it in. But if you want to try it, it is also available in Hyperscale: you can install our experimental repo, and that will give you access to this set of packages, which you can deploy and use in production, or in your test environment if you prefer. In the same repo, you will also find our kernel. For several months now, we have been building 5.14-based kernels as part of Hyperscale. We use 5.14 because that's the kernel shipped in CentOS Stream 9, so we can have the same version on both 8 and 9. This is effectively the same kernel that ships in 9, but it has a number of extra features enabled, notably Btrfs. So you can use CentOS with Btrfs, and even install a system from scratch onto a Btrfs file system. To be clear, this is not the kernel we run in production at Meta; it is a kernel that is based on some of the same things. Longer term, I would like to bring this closer to the kernel we run internally, and vice versa, but we're not quite there yet. 
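Getting at the experimental bits described above would go something like this. A sketch only: centos-release-hyperscale-experimental is my best recollection of the experimental release package's name, and the individual package names below are illustrative, so consult the repo contents.

```shell
# Enable the Hyperscale experimental repository
# (package name is an assumption; check the SIG docs)
sudo dnf install centos-release-hyperscale-experimental

# Pull in the copy-on-write-enabled RPM/DNF stack and the SIG's
# 5.14-based kernel from the experimental repo
sudo dnf update rpm dnf
sudo dnf install kernel

# On a Btrfs root, subsequent package installs can then use reflinks
# instead of full file copies
```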
In addition to the kernel, we also maintain what we call the kernel user space: packages tightly coupled with the kernel that you may want to update at the same time. Things like btrfs-progs and compsize, which are Btrfs-specific tools; ethtool, which is often something you want to update for hardware support; and kpatch, which is used for kernel live patching. We have a lot of other things going on in Hyperscale, and I did not put a full list here. A couple of notable things I wanted to mention: we have container images, so if you want to play with this very quickly in your container environment, you can pull the images. These are minimal container images built from scratch using Buildah; they're not based on the official CentOS images. The reason is that the official CentOS images are based on UBI, and that carried a bunch of baggage we didn't really want to deal with here. You can also use the build scripts if you would like to build these on your own; it's like a 20-line bash script, so nothing particularly fancy. We've also started maintaining live media spins. If you've installed CentOS recently, you know that when you install CentOS from the DVD, it boots into Anaconda and you run the installer, but you don't have a live system; if you install Fedora, you get a nice live desktop and everything. The idea is to have installer spins that look like the Fedora ones, where you have a live desktop, so you can try the system and play with it, with all of our features baked in and enabled. You boot into a system that has our systemd, support for Btrfs, and up-to-date packages, so you can use it if you need to, say, rescue a system, but you can also install from it and have Btrfs and all of that from the beginning. Right now, this is only released for CentOS Stream 8, because we haven't updated the build system yet, but we are in the process of getting this out for 9 as well. 
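Trying the container images is a one-liner. The quay.io/centoshyperscale/centos location is my best recollection of where the SIG publishes its images, so treat it as an assumption and double-check the SIG documentation.

```shell
# Pull the minimal Hyperscale container image for Stream 8
# (registry path is an assumption; verify in the SIG docs)
podman pull quay.io/centoshyperscale/centos:stream8

# Poke around inside it
podman run --rm -it quay.io/centoshyperscale/centos:stream8 \
    cat /etc/os-release
```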
There are many things we would like to do that we haven't started yet. Cloud images is the first one; a way to leverage transactional updates with Btrfs; better testing infrastructure, so that we can have automated notifications whenever there are issues with any of these packages. If any of this sounds interesting, if any of this sounds fun, and you would like to help, please reach out. We can definitely use more people. If you deploy this in your infrastructure, I would also love to hear about it. It's always great to hear user stories and what people are doing with the things we build. Now, as promised, a few pointers on how you too can get involved. All of these communities are very welcoming and very easy to get into, and there's a sliding scale of things you can do to work in this space and bring a benefit. You don't necessarily have to do coding work or packaging work. Even just working on something like documentation is incredibly valuable and will be welcomed. The main point where the CentOS community gathers is the centos-devel mailing list at centos.org. That's the one you want to subscribe to if you want to follow what's going on. The mailing list is mostly for asynchronous discussions, but most of the actual work for CentOS tends to happen in meetings. Meetings are either IRC meetings or the occasional video meeting, and these are all open; even meetings for SIGs you are not a part of are open to the public. You can join them, and you're welcome to introduce yourself or just listen if you would like. You're particularly welcome to join the meetings we hold for Hyperscale: we have IRC meetings on alternate weeks, and a monthly video meeting that's more of a social gathering, so that people can see each other face to face. Also, there are a lot of SIGs, not just Hyperscale. 
So if any of their work sounds interesting, you should browse the list and see if there's anything there that strikes your fancy, that you maybe want to contribute to and help with. And then, of course, you can and should file bugs as you find them. You can maintain packages in EPEL or in Fedora; Fedora has excellent documentation on how to get started and contribute, and you can start from that Fedora Magazine article, which I think is a great entry point. My colleagues and I have given several talks over the years on this subject and related ones; we try to track them at the link at the end of this slide. So if any of these sound interesting, or if you want more details on specific things like systemd, I'd encourage you to check that out. Finally, if any of you happens to be in Boston in two weeks, we will be there. There are going to be several events at Boston University between August 16th and 20th. There is a Hyperscale meetup on August 16th, where a number of us will be in a conference room somewhere talking about Hyperscale, seeing if we can drive things forward, and probably getting some work done. The next day, there is a CentOS Dojo. If you're involved in the CentOS community, you know Dojos: smaller-scale conferences where folks present the work they've been doing. The past few Dojos have all been online because of COVID, obviously, but this one is in person, so it's a great opportunity to meet people face to face and see them again after a few years. And then there is DevConf, which is a great conference sponsored by Red Hat that will also have a lot of talks relevant to the Fedora, CentOS, and Red Hat ecosystem; that's also at Boston University. The QR code is a link to the latest CentOS newsletter, which has references to all of these events and where you can sign up. All of these events are free, to be clear. There's nothing to pay; you just have to get yourself to Boston somehow. That's all I have. 
I will be happy to take any questions if there's any time left. Enterprise Linux Next. It is not a good shorthand. Don't Google that. Yeah, or that. Oh, yeah, right. No, that's actually probably what it meant originally. Yeah. The general answer is yes. I would say if you are running Btrfs in production, I would recommend running a recent kernel. A lot of the time, when you see folks online reporting a really bad experience with Btrfs, they will be running a kernel that's, say, older than 5.2, and there were a lot of improvements made to Btrfs in more recent years. So if you're running a relatively recent kernel, you should have a good experience. We run Btrfs everywhere in the fleet, and I don't think we've seen corruption issues in a long time, to be honest. At most, we would see perf-related stuff that comes up occasionally, and there's of course ongoing work to optimize specific parts of the file system. The other thing I would say about Btrfs is that there are features that are more suitable for production and features that are less suitable. I would not run the RAID 5/6 code in its current state in production, but it also puts up a gigantic scary warning right now if you try to use it. And we disable it purposefully in Hyperscale, because we don't want people to use it. There is active work to fix that, for what it's worth, so I expect that a year from now it should also be sorted out. Yes, that is a very good point that Neil is making. One important thing to realize about Btrfs is that because it does effectively continuous checksumming of all the data, it is able to detect issues much earlier than other file systems. 
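That continuous checksumming can also be exercised on demand: a scrub walks all data and metadata and verifies every checksum, and per-device error counters summarize what has been detected so far. A sketch, assuming a Btrfs filesystem mounted at /:

```shell
# Verify all checksums on the filesystem (runs in the background)
sudo btrfs scrub start /

# Check progress and any checksum errors found
sudo btrfs scrub status /

# Cumulative I/O and corruption error counters, per device
sudo btrfs device stats /
```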
So if you're running something like ext4 and you have silent corruption because your disk drive controller is randomly flipping bits, you may not notice until you actually read the file and see garbage. With Btrfs, you will start seeing a dmesg spew telling you that your data is going away, and that's a good signal to put that hard drive in the trash, or restore from backups. And if your hardware is on the fence, like you sometimes see on consumer laptops where the memory isn't fantastic and doesn't use ECC, so there are bit flips every once in a while that you don't really notice if you're just browsing in Firefox or something, with Btrfs you will see it immediately in dmesg, and it will complain loudly, because your data isn't safe. Sure, I will put the slides online. I haven't figured out how to upload them yet, but I definitely will. I also generally put these on SlideShare, so you should be able to find them fairly easily. If you can't find them, feel free to email me. No, but I figure you can just Google my name and it should be easy to find. dcavalca at a bunch of different domains usually works. Anything else? Yes? What do you mean? Oh, I see what you're talking about. So the question is about how you ensure, in a distribution like Fedora that builds everything from the same set of packages, that if an upstream has a preference for a specific version of a library or a subcomponent, you deal with that delta. Did I get this right? Okay. The reality is that you kind of don't: the distribution is built all from the same set of packages. In the vast majority of cases, this is fine. It might mean that if an upstream has a preference for a specific version of a library, they won't get that; they will usually get a more up-to-date one. This is generally fine. 
In the situations where it isn't fine and they really need a specific version, there are ways to package multiple versions of a library and depend on a specific one. The downside of doing that, though, is that these things often have security issues. So if I pin you to, I don't know, an old version of the BIND libraries or something, and then there's a CVE for those, we're kind of screwed. So usually we need a really good reason for it. What tends to happen more often is that some projects will fork and modify other components internally, bundling them in a way that you can't really reuse the system copy; they've effectively forked the library for themselves. In that case, yeah, you just end up bundling it and dealing with it, and hopefully it doesn't have too many security issues. This is a distribution problem in general, not a Fedora-specific thing; it's an issue for pretty much all distributions that build everything from a single stream of versions. Did I answer your question? Anything else? Going once, going twice? Thank you very much, folks. How do I turn this off? Okay, can you hear me? Okay. I don't know how this will... There we go. Should be right. The problem is the support part. This is it, all right. Good. Okay, thanks. I'll start, since it's 1:32. All right. Thanks for coming to my talk. This is the Mono Lake story: how the circular economy enables open hardware. So let's talk a little bit about me. No, let's not talk about me. Let's go ahead and move on instead. So let me ask a question: what is a circular economy? Does anybody know what that is? Nice. Exactly, right. And you gave some good examples of the circular economy. But yes, ultimately, as you would know since you're part of Endless, the circular economy is about sustainability. It's about carbon footprints. It's also about cats. Okay, no. Maybe it's not about cats, sorry. Sorry, cat. No, there's nothing like that. 
So here are some good examples of the circular economy. The one everybody knows about is plastic bottles turning into something else; there are actually some nice women's shoes made from them that have become really popular as a brand. Shopping at thrift stores and secondhand stores, all those kinds of things, is another great example of a circular economy. And finally, things that take raw materials you find in the trash or somewhere else and turn them into something else. So, a piece of driftwood lying not on the floor, but on a sandy beach or something like that, is another great example. I saw a YouTube video of some guy who found a huge piece of iron, from a shipyard or something, and turned it into a Japanese sword. Those are really wonderful examples of how you can take something, turn it into something valuable, and resell it. To put a more formal definition on it, it's really just taking raw materials and turning them into something else, as I just said. And to put it in scope: you don't want a place where you dump all your stuff; you really want to do this in a way that maximizes reuse. That's the most important part of the circular economy, as you were saying: to take something, put it back, and reuse it as much as possible, because we don't want waste. Every time we throw something away, we're actually paying a carbon cost. So how does that relate to data centers? Well, there's not much going on with the circular economy in data centers. There's actually only one primary company that is really looking at the circular economy in data centers. So we have a lot of work to do in this space, because there are no players, there's really only one, and we've got work to do. And we've got the angry cat feeling here, right? 
We've got stuff to do. And I think it's important. One of the reasons I wanted to give this talk is to convey that feeling that we have a duty to reuse as much as possible, and that there are actually tangible benefits to the circular economy, which you'll see later in this talk. Why is it imperative? If you've wandered around the talks at this conference, you can understand that we have a hunger for data. We want to collect data. We have Facebook, we have social media, we have cloud, we have on-prem clouds, we have off-prem clouds. We need and hunger for data. Because of that, the demand for compute is high: we're buying hyperscale-class machines, and we're seeing exponential growth in data centers around the world. We can't get enough. Now, what do you think happens after that four-year depreciation? They go into a dumpster or the landfill or something. What an incredible amount of waste, right? If you apply the circular economy to the data center, you're promoting reuse as much as possible instead of putting machines in the landfill. And that reuse promotes a low carbon footprint, because we're deferring the carbon cost of manufacturing all of them. So it's important. We want to defer as much of that as possible and make these machines useful for many lifetimes. Lowering that carbon means it's great for our planet: we are addressing climate change, because with that reuse we're not manufacturing so much. It's good for humans, it's good for nature and ecosystems, it's great for our planet. So I told you there was one player involved, and that's ITRenew. This chart here shows how that circular economy works. What goes on is that ITRenew takes supply, and that's Meta or Facebook, Dropbox, Google; all those machines get sent to ITRenew, which decomposes them into their individual components. 
Once that happens, they get recertified. Sometimes they get sold back to the hyperscalers, but then there's this other interesting idea that came about: why can't we build servers ourselves, instead of just selling parts on the open market? And that's what was cool about that idea, right? Because you can take those servers, recertify them, and then sell them to second-tier markets. Now what are those second-tier markets? Gaming, banks, financial institutions, universities, governments; there are so many places. And we're giving these servers a second life, and possibly even a third life after that. And these servers are not some low-tier, 20-year-old servers, as one might think; they're the best you could get six years ago, and now we can reuse them and send them out. The other great thing about it is an uninterruptible supply of IT components. You know, this pandemic has taught us that supply chains are fragile, extremely fragile, and this is what has been driving the cost of everything for the past two years. Now imagine having a supply chain that's local. ITRenew has localized warehouses across different continents, so you're not shipping overseas or anything; they're actually in Europe, in the US, in various other places. We're taking over 21,000 servers a month that get decomposed into components. That's a pretty substantial supply to be used, right? And they're all localized. I think that's a pretty powerful argument, especially if you're a government: if you're getting hardware locally, you're not worried about certain security issues. Okay, so I talked a little bit about the hardware supply. What else is interesting about it? 
So I think if you're going to rebuild and recertify servers, we have to change the paradigm of how we do that. Sure, we have existing standards, especially for racks and things like that, but we could do more; we could be more open, we could be doing more collaboration in this space. In fact, that collaboration is already happening. So we have standards, we need specifications, and most of all, we need a community. I mean, I need cats. Or maybe not; maybe you don't need cats. What we need, roll the drums, is the Open Compute Project. The Open Compute Project was actually started by Facebook, and it's a foundation dedicated to working together and building an open compute model across many parts of the stack. In this way, the circular economy is joining a community of people who are already interested in an open, specified data center. I don't know how well you can see all these logos, but these are all people who are cooperating together: you have Arm, you have the hardware manufacturers, the ODMs, you have Facebook, you have the hyperscalers involved, and you have folks who are interested in open networking. There is a wide variety of people working together in each of these parts, so it's a pretty amazing place. I spent two years working with the Open Compute Project, and there are a lot of great things happening there. So here are the kinds of benefits that OCP provides. You see things like high flexibility. One of the things we did at ITRenew was different form factors, all collaboratively designed together, that you can use to address different needs. Let me give you a couple of examples. Edge computing: you don't need huge servers, you actually need small ones. And there was this notion that you can put a data center anywhere; with small form factors especially, you can put them in underserved communities. 
You can use the output of those servers to heat homes. In fact, there are all these great conversations about how you reuse even the data center's byproducts: the cooling, the exhaust. These are the kinds of wonderful things happening in the data center space that most people don't know about. The collaboration in this project is extremely impactful. I think there was one case where the heat output of a data center was powering greenhouses, and they were growing marijuana; anyway, it was a medical operation, and the output helped them grow organic matter. The same thing could be used for cooling, or for heating homes in the winter. These are the kinds of problems we're trying to tackle. Additionally, we're looking at how to improve cooling. A lot of what we're doing in Open Compute is about how to make dense racks, and once you're doing dense racks, you have to ask: how do we deal with the cooling issues? How do we deal with the heat? All of this is done collaboratively here, something we're all a part of and can rely on. The engineering is not all done in one company; it's spread across all of them. Now you're wondering what makes all these companies work together like that, when they are competitors. It's because otherwise everybody is solving the same problems by themselves, and what is the point of that? We're wasting engineering effort. We can collaborate on the common problems together, and then competitors can distinguish themselves on top of that bedrock. That's why they work together, and that's precisely why this project is so important. Okay, so let's talk about the Mono Lake story. I don't know if this thing is gonna play; I've been playing around, I'm sorry. Okay, so what is the Mono Lake? 
It's a platform that was designed by Facebook and contributed to the Open Compute Project in 2015, and it was designed for Facebook data centers, for hyperscale workloads. It's a pretty interesting platform. I will add, this is not the only platform Facebook has contributed. They've contributed several, at least four or five, and the latest one is Delta Lake, which was just contributed in November of last year. So what is it? The Mono Lake is actually part of a combination. There's something called Yosemite V1, and this is what you're seeing here: the enclosure, the design of the enclosure. There are four slots in the chassis, and each of those four slots holds a system-on-chip card. So there are four of them, and each one is a Xeon D-1500 series processor with up to 16 cores and 32 threads, and it comes with a BMC chip. The BMC actually talks to all four processors at the same time, and each of them has 128 gigs of memory. So with all of this, in one chassis you're looking at around 64 cores and 512 gigabytes of memory. That's a lot of firepower, and that's just one; you can see a picture of what it looks like there. So imagine what 48 of them are. With 48 in a full rack, that's 192 discrete CPUs, and that's counting sockets, not cores. I don't know about you all, but I'm pretty excited about what you can do with that. Now, for a hyperscaler, this is nothing. But if you're in the second-tier market, you should be drooling right now, because this is an incredible amount of compute you can get on one floor tile. And the best part is, it's designed to be open from the beginning. That's what's great about being an engineer. Thank you, thank you, thank you very much. I am an engineer, everybody. Okay, so anyway, going back to my point: it is designed to be open from the very beginning.
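Since those per-chassis and per-rack numbers go by quickly, here is the arithmetic as a quick sketch. The per-SoC figures are assumptions based on the Xeon D-1500 line's published maximum of 16 cores / 32 threads; everything else (four cards per chassis, 128 GB per card, 48 cards in a full rack) is from the talk.

```python
# Back-of-the-envelope totals for a Yosemite V1 chassis and a full rack.
# Assumed figures: 4 Mono Lake SoC cards per chassis, a Xeon D-1500
# topping out at 16 cores (32 threads), 128 GB of RAM per card,
# and 48 cards in a full rack.
SOCS_PER_CHASSIS = 4
CORES_PER_SOC = 16          # 32 hardware threads with hyper-threading
RAM_GB_PER_SOC = 128
SOCS_PER_RACK = 48

chassis_cores = SOCS_PER_CHASSIS * CORES_PER_SOC    # cores per chassis
chassis_ram_gb = SOCS_PER_CHASSIS * RAM_GB_PER_SOC  # RAM per chassis
rack_cpus = SOCS_PER_RACK * SOCS_PER_CHASSIS        # discrete CPUs per rack
rack_cores = rack_cpus * CORES_PER_SOC              # cores per rack

print(chassis_cores, chassis_ram_gb, rack_cpus, rack_cores)
```

That is 64 cores and 512 GB per chassis, and 192 discrete CPUs (a few thousand cores) per rack: a serious amount of compute for a second-tier buyer.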
This collaboration was actually between Facebook and Intel, and of course many others, and they wanted to build a hyperscale platform. The collaboration matters because Intel is sort of the gatekeeper here for a lot of the firmware and things like that. So to have Intel come to the table and work together on this kind of thing is pretty awesome. Imagine how much we struggled in the past trying to find open hardware. Of course, Raspberry Pi is a great example of a basic one, but compared to a Mono Lake, it's pretty small as a computational unit. It's great for solving a certain class of problems, but if you're trying to build large, widely scalable solutions, then this is the better fit. And at the link here you can see the specifications for the Mono Lake, if you're interested; I'm happy to share that with you, and it's also available on their website. Okay, sorry, where are we? So far I've given you background, not the story itself, and that's deliberate, because a lot of times it's about context. If I hadn't explained what the circular economy was, you wouldn't know the context of the story or why it is so important. If I hadn't talked about the Open Compute Project, you couldn't really understand the work required to put this together. So, moving on, and I need my glasses because I can't see my slides. All right, so we've got the specifications, we've got the hardware, we're ready to put some servers down. What's the problem? Well, remember when I said this hardware is old?
It's four to five years old already. So what happens to old hardware? I'm sorry? Yeah, exactly: nobody wants to support it. Nobody cares. The lights have been turned off; they've moved on to the new thing. The salespeople are like, oh, forget about that one, we've got this brand new one. You don't need 12 cores, here's one with 34 cores. But we're just a dental office, we don't need all that. So the problem is that nobody wants to support these old machines, and that gets problematic. If you're trying to sell somebody a server and it's running old stuff, and they ask you, well, what about software updates? There are no software updates. What about security? There aren't many security updates either. That's a big drawback, and if your BIOS is old, everything is old. So how do you fix that? How do you make this old hardware relevant going forward? You move to an open source tool chain. That's the best thing. What's so great about an open source tool chain? It's community supported, and as long as somebody's interested in using those platforms, you can create an entire ecosystem just supporting that. And that's what's great: once you get that going, we can support these servers in perpetuity. Now, some of the ODMs may not be happy, because they want to sell you new hardware, but there are some buyers who are always going to be cash-strapped: governments, schools, all of them. They're like, we can't afford the new stuff. And a lot of times there's an upper bound: I'm never gonna need more than this. And when you're talking about 21,000 servers, there's a lot of inventory out there. If something breaks, you can just replace it, and it's pretty cheap, a fraction of the price. So I think that's a great part. And for circularity, we need that openness. If we can't rely on open source and open specifications, this model of selling servers does not really work.
So a lot of this is laying the groundwork to build sustainable platforms. Okay, so this part is really the work required to replace a tool chain. It looks kind of simple: oh yeah, we'll just put coreboot on there, it boots up LinuxBoot, and then victory. It's not that simple. In fact, the engineering required to do this is extensive. Extensive. Even for the newer platforms, there are so many pieces that have to align to make things work. It's actually easier for later models, but for models like the Mono Lake or the Leopard or some of these others, it is a pretty heavy lift to get them up to modern standards. But that's what you have to do to be able to rely on them. So, to enable coreboot, you need an FSP, the Firmware Support Package that comes from Intel. It's a binary; we cannot avoid binary blobs. If you're on the x86 platform, and I think even Arm to some extent, there is a binary blob in there somewhere. And what is this binary blob? It pretty much encapsulates the initialization of the Intel platform and all its capabilities. The second thing is there's a BMC chip on there. There's a project called OpenBMC, and there are actually two OpenBMC projects: one is run by Facebook, the other by the Linux Foundation. And then finally, the kernel needs to recognize the hardware and actually boot. So that's the next stage of engineering to move this forward. A lot of engineering, right? So we actually had to hire a contractor, CISPR Consulting, to help us with this endeavor. And the great thing about them is they actually have a license to modify the FSP, which is few and far between. I don't think Intel is that forthcoming in handing out licenses to modify it; I think a lot of it is plausible deniability if you hack it, and they don't want to support anything that's been modified. The other wrinkle is the FSP version: the Mono Lake uses FSP 1.0, and that is only supported by an older branch of coreboot.
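To give a feel for what "enabling coreboot" means in practice, here is a rough sketch of the build configuration involved. This is illustrative, not the exact configuration that shipped: the Kconfig symbol names are the ones used in coreboot trees of that era (which carried an `ocp/monolake` mainboard), and the FSP blob path is a placeholder, since the real binary comes from Intel under license.

```
# Select the OCP Mono Lake mainboard
CONFIG_VENDOR_OCP=y
CONFIG_BOARD_OCP_MONOLAKE=y
# Point the build at the Intel FSP 1.0 blob (placeholder path)
CONFIG_FSP_FILE="3rdparty/blobs/monolake/FSP.fd"
# Use a LinuxBoot-style payload: a Linux kernel plus initramfs
CONFIG_PAYLOAD_LINUX=y
CONFIG_PAYLOAD_FILE="payloads/linuxboot/bzImage"
```

The shape of the problem is visible right in the config: an open build system wrapped around one unavoidable proprietary blob, with a Linux kernel as the payload instead of a traditional UEFI environment.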
So that's no small bit of engineering. 1.5 years of work go by, and we solved the coreboot-plus-FSP problem. It was hard; a lot of tacos were involved. We forward-ported the changes to the current coreboot branch: we took all the stuff that was in the older coreboot branch, forward-ported it into the current one, and upstreamed all of that. And it was hard. It involved a lot of tacos and ice cream and martinis. Then finally, the kernel changes went into the Yocto kernel, which I think is at 4.0. That's the funniest thing about upstreaming changes to the kernel: a lot of companies do not want to contribute directly to the mainline kernel, so they go to the Yocto kernel instead, because getting anything into the mainline kernel requires at least 1.5 years. The first time you contribute, you know you're gonna screw up. The second time, there's gonna be another set of issues. The third time, you're finally getting around to shipping it, and at last it goes in. Instead, you can go to the Yocto kernel, which has much more relaxed ways to get things in. So a lot of times we end up using the Yocto kernel for embedded and various other things. And also, more tacos. So what was the end result of all that engineering? The FSP initially supported eight cores and 16 threads, and we doubled the resources to 16 cores and 32 threads, so we made the FSP a lot more useful. That's a pretty tangible benefit. It still needs more work, but that's one improvement, and it's something the circular economy has helped enable. We also had to get LinuxBoot to support features that customers actually want, otherwise it's just a toy. If you don't have iPXE, it's not really useful as a hyperscale platform, because you can't easily put operating systems on it. Let's see, what else have we got here?
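To make that concrete: on the LinuxBoot side, "features customers actually want" largely comes down to what gets compiled into the LinuxBoot kernel. This is a hedged sketch using standard mainline Kconfig symbols, not the exact config used; the real set depends on the storage and network hardware in the box.

```
# Networking, so netboot-style OS installs are possible at all
CONFIG_NET=y
CONFIG_INET=y
# Storage drivers, so the drives are visible
CONFIG_BLK_DEV_NVME=y
CONFIG_ATA=y
# File systems needed to find an installed OS on disk
CONFIG_EXT4_FS=y
CONFIG_BTRFS_FS=y
# kexec is how LinuxBoot jumps into the target kernel
CONFIG_KEXEC=y
```

Each missing option here is a class of customer who can't boot: no NVMe driver means invisible disks, no file system support means no installed OS to hand off to.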
Then we had to support all the file systems on the storage: if you can't read ext3 or ext4 or Btrfs, it's not very useful, because then you can't boot an OS from it. And finally we did actually manage to get it to boot Ubuntu, and that whole thing took about 1.5 years to get into shape. And this is our family dog. He's celebrating, he's happy, we're all happy, but I can't overstate how much work this was. And it's not just the engineering work. It really required reading through the documentation on the platform, going through the FSP documentation that Intel provides, understanding all the data structures inside there: which data structures to expose to enable feature X and feature Y, the schematics, the debugging tools, all the flashing software. Sometimes you screw up and turn the thing into a brick, and then you've gotta figure out how to bring it back. Then there's the kernel code, where you've gotta dig through the commit history. So it's an incredible amount of work, and you've gotta keep going from one step to the next. In the end, we did get something, and we submitted it to the Open System Firmware project at OCP, and it was accepted into their repository in January 2022. So we have a working system of sorts, but it's only chapter one, because there's still a lot of work to do. So what's next? Well, it doesn't work with all distros. We got Ubuntu working, but it doesn't work with FreeBSD and the other BSDs, and it doesn't work with Windows if you're off the classic UEFI path. Maybe you don't care about Windows so much, but we do want to get all the other distros working, like RHEL and all that. So we're gonna need a lot of help from the community going forward. But there's a glitch: ITRenew was actually acquired by another company, and the parent company was not really geared toward selling products like we were doing.
They were more of a services company, so unfortunately they had to dissolve the Sesame team that was doing this work. But one of the great things, and this is where the resilience of open source and open specifications comes in, is that, first, we proved this business model actually works. We generated millions of dollars actually selling this hardware. So this is a concept that can work, we have proven that it works, and that's something we're really proud of. A good idea never dies. And because these are assets under free licenses, we extracted everything we did internally and put it on GitHub and on Notion.io. Nothing is lost. This endeavor continues to live in the open. And that's great, because even the customers who bought products from us have access to all the information they need, so they can self-support if they want to, or they can hire the former team I was working with. In fact, my boss is doing another startup that does precisely this. The way Sesame operated was effectively as a client of ITRenew: ITRenew's core business continues, and we were tapping into their supply chain to do our thing. The only difference between what this new company does and what Sesame was doing is that Sesame was part of the company, and there's nothing that says you can't keep doing the same work without that formality. So I think we're still in the game. We're still gonna continue working on things like the Mono Lake and all these other platforms. It may just take a little time before we get our investors lined up and can get back to it. And besides, we have a planet to save, you know? We've got carbon to save. We've got to stop moving these things into landfills. So it's imperative that we do this. And there are so many other projects that this can key off of.
Again, going back to those form factors: going into underserved communities, building data centers that can be functionally part of communities. Because one of the interesting challenges, sorry, I'm going off on a tangent, is that if you're trying to help an underserved community, investment can sometimes trigger displacement: instead of helping people, you end up pushing them out. A data center doesn't do that. It just sits there quietly, so that's the best part. Anyway, going back to the Mono Lake: OCP is actually setting up a lab where you'll be able to play with these machines. If you want access to a Mono Lake, we are contributing Mono Lakes, Leopards, and Wedge 100s, which is another platform that comes from Facebook, for open networking. There's a whole slew of things, Delta Lakes too. What's exciting is that it's happening: open infrastructure is happening. And that is exciting to me, because we can now start building things more openly. If we start combining open hardware with open source software, we're really building a whole new direction of openness, backed by specifications. A lot of times we talk about our values in open source from a software perspective, but we never think about the hardware. I've given plenty of talks at KubeCon about the carbon all that compute is generating, and every time the answer is, well, it's Kubernetes, we don't care about the hardware. But you do have to care about the hardware, because the hardware is what generates the carbon. Whether it's Google Cloud or whatever it is, you should actually think about your values and ask: are we using that hardware properly? And ultimately: can we get to a circular economy?
And now we're actually getting a circular economy that can meet people's needs, even from a hyperscaler perspective. So that's something I'm really, really passionate about. Now that I've come to the end of my talk, I've decided to show you pictures of my cats. You did get one dog, but I didn't manage to get another dog in there ahead of time. Thank you for spending this time with me. I hope you enjoyed listening, and hopefully you feel impassioned to help out in open hardware. And this is me. If you're interested in that startup, or if you wanna know more about open hardware, I'm more than happy to connect and provide information. I will say, you'll need to speak up: I have some hearing issues, so if I don't hear you, please don't be offended if I ask you to repeat yourself. So, thank you. Yes, sir? I have two questions. Okay, one more time? Where is all of this published? Yes, yes, there's a GitHub repo. There's one that's part of the Open Compute Project, but we have another one under Sesame Engineering that's out there too, so there are two of them, and that one has other documentation and things like that. There's also the Notion.io workspace, which is not quite public yet, but there are entire knowledge bases on there. Yeah, I think the problem is that I'm restricted to whatever everybody else is using, and I think that's why. I think originally whoever was coming from Facebook was using that. That's my recollection, but don't hold me to it. But yes, I agree with you there. Do I run these myself? Yes, yes, I do. So at the moment, the one I have, let me see if I can show you what it looks like, is called a Discovery. It's a form factor you can fit under your desk, and it actually holds four 1U systems. So I have three Leopards and one Mono Lake sitting in one of those.
They're really, really noisy, so I don't actually run the Mono Lake, because I can't hear anything over it, but the Leopards are great. I can show you afterwards, if anybody wants. As for acquiring them: my boss purchased all the inventory he could get his hands on prior to leaving, so I can connect you if you're interested in purchasing. We were selling the ones with Leopards for around 9K to 12K. So essentially you're getting an entire data center under your desk, because it does everything. It's got a BMC chip, so you can manage it remotely. I can access my machines from anywhere thanks to my Tailscale VPN, which is amazing; I installed it on all of them, and I can reach them wherever I am. It's fabulous, I enjoy that. I don't think he's come up with a name for the startup just yet; he's still busy talking to investors, but like I said, I can connect you to him if you're interested. Okay, great. So I think this is the sort of thing that has to go through the Open Compute Project to work with Intel. One company by itself cannot do it; it works better when a community comes in and works together with somebody like Intel. So I don't have all the answers there. However, as of three and a half weeks ago, I am an Intel employee, which means I have some ability to go internally and figure out if there are solutions we could pursue. But ultimately it's up to the business groups involved to make those decisions. Okay, so then you're stuck with whatever's there. Yeah, you want it to be more open to developers. And Intel, as far as I know, doesn't publicly publish most of these things. Yeah.
So that's another area where we need more openness. Any other questions? All right, fantastic. Thank you all for coming. If anybody's interested in seeing what my machine looks like, I'm more than happy to show you some photos, especially of the enclosure we designed and things like that. There are some YouTube videos we did that you can look at, too, and there's some swag here as well.