Morning everybody. My name is Davide. I'm a production engineer at Facebook, and this talk is about the work we do with Stream. Here's the agenda for today. We'll start with a quick introduction to our infrastructure. We'll dive into what we do specifically with CentOS and how we use it. We'll talk about how we contribute to upstream, and how you can contribute to upstream leveraging CentOS Stream as well. And finally, we'll close with some words around deployment and management.

So let's go over infrastructure. Facebook has a lot of machines. We have millions of servers across the globe in multiple data centers. All of these machines are physical machines, and all of them run CentOS. That has been true for many years, as far as I know since the company was started. The common theme throughout this is that we try to manage this fleet in a consistent way, and we try to leverage open source software as much as we can.

I work on the operating systems team. It manages the bare metal experience of the fleet. We treat the operating system as a platform, in the sense that we try to build a common layer that other teams at the company can leverage directly. Teams that run directly on bare metal will interface directly with the OS. Teams that run on the container platform will interface with the container platform, which itself runs on top of CentOS. And the containers, by the way, also run CentOS. Teams at Facebook are expected to be responsible for their own machines and their own services, so our team often acts as a kind of consulting partner: we build tooling and try to set up systems in a way that is useful and solves problems, and then it's up to the individual owners of, say, MySQL or the web servers or whatever to make sure their services work properly, are monitored, and so on.

A constant theme throughout this is that we try to build our infrastructure on top of an open source foundation. There is no point in reinventing the wheel, and we very much want to leverage the work the community has done and contribute back to the community as much as we can. So we use Linux, obviously; we use CentOS. We use the standard packaging stack a lot. We use Chef for config management, and we use systemd throughout the infrastructure. I'll talk about all of these components more closely in a little bit.

But first I want to say a few words about why we actually do this. I've had these slides in my talks for quite a while now, but I think it's important to go over them in general. We try as much as possible to work with open source because we think the community is where the innovation tends to happen, and the community is what ends up setting the direction. If we can work with the community on features, try to make our use case understood, and try to better understand what the community wants, we can make things better for Facebook, but we can also make things better for the world at large. And while Facebook prides itself on moving fast, the reality is that the community often moves even faster. It is quite common for us to find a problem, start analyzing it and trying to figure out what we should do, and then discover that it's already been solved by the community at large, and we can either import the solution as is, or make minor changes, contribute back, and solve the problem for everybody. We don't need to write anything ourselves.
And at the same time, the fact that we can leverage the community's work also means that we can share our work. We can share our code, we can share the maintenance burden, we can have other people contribute to it. And from an engineering standpoint, at least for me, it is much more pleasant to know that I am working on something that other folks can actually leverage and use, and that maybe makes their life a little bit better. There's a talk I gave at DevConf in 2017; that was a while ago, but it's still fairly relevant, so if you're interested in more of our approach here, I encourage you to go watch it. I've left talks referenced throughout this presentation, because this is only 40 minutes long, so I'll try not to go into much detail on things that have already been covered in the past.

Now let's talk about CentOS. But first, before we talk about CentOS, let's talk about why we actually run CentOS. As I said, we've been running CentOS for quite a while. It's been at Facebook since 2012; at the time we were running CentOS 5, and now we're running CentOS Stream 8. There have been some constant themes throughout this. CentOS gives us a lot of things that we like, and it also gives us the ability to fix the things we don't necessarily like or that don't necessarily apply to our environment.

First things first, CentOS gives us stable releases. It gives us a stable base that we can build upon, that is known to work, that is known to be well tested and gated. Note that this is true both for CentOS Linux and for CentOS Stream, because they have effectively the same gating; from that point of view, nothing has actually changed. CentOS also gives us binary compatibility. Binary compatibility is very important because it means that if you go on a live system and you run dnf upgrade, you can be certain to a very high degree that the services currently running on the machine will keep running, that you won't need to reboot, and that you won't need to kill random things for stuff to keep working because glibc suddenly changed the way it works. This is what makes it possible for us to do minor updates on the fleet live, throughout the lifecycle of the system, by just shipping them out. CentOS also, obviously, gives us security updates, which is something else that is clearly very important, but it also ties in with the binary compatibility, because without that we would not be able to apply these live in a safe way. I'll sketch what this looks like in practice in a moment. CentOS gives us good tooling, and tooling that is very well understood. Things like the packaging stack at this point are very mature; these are tools that have been around for a long time, tools that people know how to work with, know how to contribute to, and can make changes to if needed. And finally, CentOS is part of a larger ecosystem. CentOS itself is a fairly conservative distribution, in that it doesn't carry that many packages, and the same is true for RHEL, because it's effectively the same set of packages; it doesn't carry that much beyond core system packages. But because CentOS is part of the Fedora ecosystem, we also have access to everything that is packaged in EPEL, and we also have access to everything that is packaged in Fedora. And if something is missing from EPEL or Fedora, we can work with those communities to make it available. That is something that has been extremely valuable throughout the years.
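As a concrete illustration of the security-update point, here's a hypothetical sketch of my own (not Facebook tooling) of checking for and applying pending security updates on a live box, using standard dnf subcommands:

```python
# Hypothetical sketch: survey and apply pending security updates live.
# Uses standard dnf subcommands; the policy around it is made up.
import subprocess

# List advisories that apply to currently installed packages.
pending = subprocess.run(
    ["dnf", "-q", "updateinfo", "list", "--security"],
    capture_output=True, text=True,
).stdout.strip()

if pending:
    print("pending security updates:\n", pending)
    # Binary compatibility within a major release is what makes running
    # this on a live production host a reasonable thing to do.
    subprocess.run(["dnf", "-y", "upgrade", "--security"], check=True)
```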
So the approach we've generally taken at Facebook is that we run a CentOS base, and then we backport on top of it what we end up needing from Fedora or EPEL. These are generally low-level system packages: either packages that we work on very closely with upstream, where we want to know we are running the latest master or the latest stable release, so that if we make any changes we can contribute them back without having to worry about backporting; or systemd and other low-level packages that we work on. We try to publish our backports on GitHub, and I'll talk more later about how we're trying to make these easier to consume. The way these tend to be written is that we have a macro to gate the Facebook-specific stuff and configuration, so the packages themselves can also be useful to people who are not Facebook. We've found throughout the years that the approach of combining CentOS with this, what we call this fast-moving layer of packages that we backport, gives us a stable distribution that can move as fast as we need, and at the pace we need, in the various components.

The other thing we do that deviates from CentOS proper is that we don't run the CentOS kernel. The CentOS kernel is a very stable kernel, but it tends to be based on a fairly old kernel version compared to what is currently in Linus' tree. And while it's a kernel that has a lot of backports, because our kernel team does a lot of development on many features and subsystems, and they do all of this development in master, it's a lot easier for us to just run, effectively, Linus' tree or something close to it, rather than doing the work to backport all of that to an older kernel. And we do a lot of kernel work, both on specific features and subsystems and on general fixes. This is work that doesn't really happen internally at Facebook; it all happens outside, in the open. And while there is an internal development branch of the kernel, it's effectively used mostly for staging patches and making it easier to roll out changes. The way it works in practice is that we tend to have two kernels concurrently in the fleet, one stable and one development, and then we slowly update them and move them on, but they tend to be very close to mainline.

Some examples of features that we've worked on throughout the years, and that we continue to work on, are the Btrfs file system and the resource control features leveraging cgroup v2. There's been a lot of work on resource control lately, and resource control also ties in with Btrfs, because Btrfs is what actually makes it possible to have reliable IO control. Within resource control, there's also been a lot of work lately on PSI, a feature in the kernel that effectively allows you to predict the future: it tells you whether a process is likely to OOM in the near term, so you can do something about it before the kernel invokes the OOM killer and kills it (there's a small sketch of what that data looks like below). And finally, we do a lot of work on BPF and the BPF toolset. This is just a short rundown; there's a lot more stuff happening in the kernel. We have tooling that we use to roll out kernels, test them, and make sure this is done safely. These days it's actually the same tooling we use for the operating system, and I'll talk about this later. There's an old blog post I linked there if you're interested in learning more about the kernel work.
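Since I mentioned PSI above, here's a minimal sketch of my own, against the standard /proc interface, of what that pressure data looks like:

```python
# Minimal sketch: read system-wide memory pressure (PSI).
# Assumes a kernel built with PSI support; the same format is exposed
# per-cgroup as memory.pressure on cgroup v2.
def read_memory_pressure(path="/proc/pressure/memory"):
    pressure = {}
    with open(path) as f:
        for line in f:
            kind, *fields = line.split()  # kind is "some" or "full"
            pressure[kind] = {
                key: float(value)
                for key, value in (field.split("=") for field in fields)
            }
    return pressure

if __name__ == "__main__":
    p = read_memory_pressure()
    # avg10 is the share of the last 10 seconds spent stalled on memory;
    # a rising "full" number is the early-warning signal tools act on.
    print(f"some avg10={p['some']['avg10']}% full avg10={p['full']['avg10']}%")
```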
Another component I mentioned before that we work on is systemd. As I said, we do a lot of work on systemd and with systemd. We try to follow systemd upstream, and we maintain a systemd backport that's available in the repository I mentioned. The way we work with systemd is that we'll take the latest stable release and add whatever patches we have in development, either ones that have already been submitted upstream and accepted, or ones that are currently in review. Then we feed it to our CI/CD pipeline, which runs it through a battery of tests. It also runs it against our container test suite, which tends to stress things quite a bit. If we find any issues, we report them back upstream or fix them directly, and then we just roll it out. We started using systemd at Facebook when we were doing the CentOS 7 migration, and this went from a handful of people doing work on it, with the rest of the company being fairly skeptical, to almost everybody effectively embracing systemd and wanting to leverage its features and work with it.

We've done a lot of feature development as well throughout the years. I'm not going to talk about these in depth, because I've given a lot of talks about this before, and I'll link one below. One thing I do want to highlight, though, is systemd-oomd, because it's a fairly recent change, and it's also something that's coming in Fedora 34. systemd-oomd is a new kind of userspace OOM killer that leverages PSI. The way systemd-oomd works is that it watches your system while it's running, figures out if there's a process that's about to spill over and invoke the OOM killer, and deals with it before that actually happens. So it can act in a much more precise way, and it avoids bringing down your whole system. This is something we are running in production at Facebook now, and it's already merged in systemd upstream, in 247 I believe, or 246.

We also have a few other projects that are tangential to systemd. Back when we started doing systemd, during the CentOS 7 migration, we built some compat libraries to make it possible to run a modern systemd on the distribution, because the distribution was still using an older version. This is not terribly relevant these days, and it's something we'll probably sunset with 9. We also have a project called pystemd that gives you a nice Python abstraction on top of systemd; because it links directly to systemd, it's very fast and very useful, especially if you need to do operations over D-Bus. This is also packaged in both Fedora and EPEL, if you want to use it.
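If you haven't seen pystemd, usage looks roughly like this; a minimal sketch along the lines of the project's README (check it for the authoritative API), with sshd.service as an arbitrary example unit:

```python
# Minimal pystemd sketch: load a unit and poke it over D-Bus directly.
# Unit names are bytes because that's what the underlying D-Bus API expects.
from pystemd.systemd1 import Unit

unit = Unit(b"sshd.service")
unit.load()

# Read a property, then restart the unit; "replace" is the usual job mode.
print(unit.Unit.ActiveState)
unit.Unit.Restart(b"replace")
```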
Finally, a few words about packaging. As I said, we use the standard packaging stack: RPM and DNF and YUM. Because we've been doing this for a while, we've hit pretty much anything that can go wrong in the packaging stack at this point, and dealt with it at some point. In general, when you're operating a fleet as large as Facebook's, anything that can go wrong will go wrong, and it will go wrong a lot. Even issues that would normally manifest on 0.1% of your machines end up being a lot of machines that you have to deal with.

With the RPM database specifically, this was evident when we were still using Berkeley DB, because Berkeley DB was fairly brittle and fairly easy to mess up. We wrote a tool called dcrpm to deal with the situation: it identifies the state of the RPM database, remediates corruption, and does a bunch of housekeeping around it to try and make things better. This is also open source, and it's also packaged in Fedora, so it's just a dnf install away today. The other thing we did was work with the community. The community was already aware of this problem and had developed some alternative RPM database backends to test. At the time there were lmdb and ndb, so we took those two, A/B tested them by effectively putting them on a lot of machines and comparing results, and we ended up switching from Berkeley DB to ndb. This all but eliminated RPM database corruption as far as we can tell, and it's deployed on the fleet at Facebook everywhere today.

There's also some work in progress happening in this space. The main thing I want to mention is my colleague Matthew Almond's work on DNF, libdnf, and RPM copy-on-write. The idea here is to use the features provided by modern file systems like Btrfs to make package installs much faster and much less expensive. Matthew gave a talk at the Dojo two weeks ago (the 2021 one, not 2020); I linked it there, and it goes in depth on how this works. This has also been proposed as a change proposal for Fedora 34, although it will be punted to 35 because there's more work needed. But this is something we're very excited about; it's something we are currently running in production, and it has provided a lot of performance improvements. Finally, I mentioned the RPM database before. The community has already moved on and is now using SQLite for the RPM database, specifically in Fedora. So we plan to start evaluating that shortly, and assuming it works just as well as ndb, which I expect, we'll switch over so we can stay closer to upstream and not keep a delta if we can avoid it.

Okay, there are a few more things we do that tend not to be what you would normally expect if you run stock CentOS. These are all things we do to either better fit our environment or better fit our workload. For example, if you install CentOS 8 today, it comes with cgroup v1 by default and it installs on XFS. Because we do a lot of work on resource control and want to leverage those features, by default we set up machines with cgroup v2, and we use Btrfs for the root file system. cgroup v2 and Btrfs combined are important because that's what gives you working IO control, and that is something we use a lot at Facebook, all over the place (there's a quick check for this setup sketched at the end of this section). We started making Btrfs the default with CentOS 8, when we started switching over, and it has been a resounding success so far. We were using Btrfs long before that, of course, but it wasn't the default for the root file system.

We also have a few other minor changes in the infrastructure. In CentOS 8 specifically, iptables ships without the legacy backend, only with nftables. But our kernel folks don't really want to support nftables, for a bunch of reasons, so we rebuild it with the legacy backend enabled. On the networking front, historically we've always used network scripts instead of NetworkManager; network-scripts is packaged as part of initscripts, if you're not familiar, and it's what used to be the default in earlier releases. So that's what we still use. This is likely something we'll re-evaluate when we start working on 9. What I suspect we'll end up doing is using systemd-networkd, but we haven't actually done any evaluation on this yet, so we'll see. Again, there's a link to another talk if you want more details about these specific choices and why we made them.
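Going back to the cgroup v2 and Btrfs defaults for a second, here's the quick check I promised: a small sketch of mine, assuming the standard /sys and /proc layouts, to verify a host matches that setup:

```python
# Sketch: verify the two defaults discussed above on a running host.
from pathlib import Path

# cgroup.controllers only exists at the root of the unified (v2) hierarchy.
on_cgroup2 = Path("/sys/fs/cgroup/cgroup.controllers").exists()

# Find the filesystem type of / from /proc/mounts
# (fields are: device, mountpoint, fstype, options, ...).
root_fstype = next(
    fields[2]
    for line in Path("/proc/mounts").read_text().splitlines()
    if (fields := line.split())[1] == "/"
)

print(f"unified cgroup v2: {on_cgroup2}, root filesystem: {root_fstype}")
```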
Okay, I went fairly fast on that, and I did so on purpose, because the part I want to focus on is actually the part about upstream and upstream contributions. I think that's the interesting and important bit at this point, and the bit that, while people have talked about it, I don't think is necessarily well understood in general. To talk about this, let's take a step back and think about how the distribution is built. Of course there are diagrams, which hopefully will make sense. I'll go over how the distribution was built in the world of 7. And to be clear, I don't work at Red Hat, and I've never worked at Red Hat; this is based on public information and my understanding of things. So if something is blatantly wrong, please mention it in the chat and we can talk about it later.

The way CentOS 7 was built is that Red Hat took Fedora 19, when it was released, and took a snapshot of it. They rebuilt the whole distribution with a bunch of special macros so that it comes out as EL instead of Fedora. And then they started actually testing the distribution and working on it. All of this happens inside Red Hat, in effectively a staging distribution that isn't public and that only really has a name for the people who work on it. When the staging distribution evolves from, like, a primordial soup to an actual distribution that people can use, it becomes something that has testing and gating applied: when updates are added to it, they go through a testing and qualification phase before they make it out. This distribution is what gets shipped as RHEL 7. When RHEL 7.0 comes out, CentOS Linux 7.0 is built, and CentOS Linux 7.0 is built by taking the sources of RHEL 7 that come out publicly and rebuilding them. It has no input whatsoever from what happened on the staging side. And then throughout the lifetime of 7, this goes on: 7.1 also comes out of the staging distribution, which at that point is effectively the same as RHEL, a stable distribution that gets updates. These updates are gated and tested, and once a bunch of them have been bundled up, they're released together as RHEL 7.1, and then CentOS Linux 7.1, and so on. I forget how many point releases 7 has, so I'll just put dots there.

Now, look at this from a bird's eye view. Let's say you're a CentOS user and you want to contribute to CentOS Linux because, say, you found a bug in CentOS Linux 7.0. What can you do? Well, let's see. You could contribute to CentOS Linux directly... but not really, because CentOS Linux is just a rebuild of RHEL. Apart from the logic to do the actual build and some branding changes, there isn't anything in CentOS Linux that isn't in RHEL, so you can't really change anything in CentOS Linux, because by definition it doesn't have any deviations from RHEL. So, okay, I guess you could contribute to RHEL... but not really, because RHEL is a commercial product.
You can certainly file bug reports on RHEL, and if you file bug reports and attach patches, those might end up with the right person and might eventually make it into a stable point release down the road. We actually tried doing this back in the day at Facebook. It took us many months to get a one-line fix merged, and that was via pinging people we knew and asking for favors and things like that. This is clearly not a viable option, and even if you run RHEL as a commercial customer, with a support contract and everything, that's mostly geared towards support, not towards contributing changes. So you're kind of stuck, because obviously you can't contribute to the staging distribution either; that's internal. The only component you can contribute to in this world is effectively Fedora. And while you can certainly contribute to Fedora, that's not really going to help you if you're running CentOS Linux 7 now. It may help you if you plan to move to CentOS Linux 8 or 9 down the road, because changes in Fedora eventually trickle down to new major releases. So you end up doing what we ended up doing, and what a lot of people do, which is maintaining a lot of packages internally as backports, and then trying to talk to people and get things fixed via back channels. This wasn't an ideal situation, everybody was well aware of it, and this is where Stream came in.

So let's look at what the picture is with 8. It is actually very similar; there are two changes you can see here. Well, it's Fedora 28, because the world moved on. But also, effectively, the staging distribution isn't there anymore, and in its place we have CentOS Stream. And CentOS Stream is public now. It is an actual distribution that you can run, an actual distribution you can contribute to. This is a major change, because now, while you still cannot contribute directly to CentOS Linux (because, again, it's just a rebuild), you can contribute to Stream. You can try to get stuff fixed there, and it will make it into Stream, assuming it passes the testing and gating, and the maintainers, of course, think the change is not insane. And from there it will eventually make it into RHEL and CentOS. So this is a major change, because now you suddenly can actually contribute to the distribution. And I want to stress that this doesn't mean CentOS Stream is an alpha for RHEL: it's effectively the same as before with the staging distribution. This is effectively RHEL, except it's not RHEL 8.0 or 8.1 or 8-point-whatever; it is RHEL with all the updates up to this point in time, where new code and new packages that get added have gone through all the gating and testing they would have gone through if they were shipped to customers in RHEL. So it's effectively something that is as stable as RHEL.

Now, the other difference, unless you've been living under a rock, is that CentOS Linux 8 has been effectively sunset. So at some point there will be a RHEL 8.x that won't have a corresponding CentOS Linux, but CentOS Stream will always be around. This is a much better situation, because now you can contribute to Fedora if you want to make changes to the next major version of RHEL and CentOS, but you can also contribute to CentOS Stream if you want to fix things in it right now and eventually affect the next RHEL minor release. However, there's still a bit of a mismatch here, because how do we go from Fedora to CentOS Stream?
Effectively, for us and for everybody else, when CentOS Stream came out, it was "oh, here's a drop, that's great" — but how was it made, where does it come from? And that's where I think it gets interesting when we get to 9. With 9, there's another box in the picture, which is ELN, and effectively that's another piece of what used to be the internal Red Hat process that now becomes available to the world. I mentioned before that the first thing folks do when they start building a new version of RHEL is take Fedora and rebuild it with a bunch of different macros and settings to make it an EL distribution. That's effectively what ELN is doing, except it's not doing it on internal RHEL infra; it's doing it in the open, within Fedora, right now, and Fedora ELN is a Fedora project. So that also moved a large chunk of this process and tooling out into the open, as something you can actually contribute to and inspect and work on. And while ELN has started already, the actual public 9 cycle hasn't started yet; my expectation is that this will make 9 a much better product, and a much easier product to work on and contribute to as well. With 9, CentOS Linux isn't in the picture anymore, because it won't exist anymore as a rebuild of RHEL, at least as a CentOS product; there are most likely going to be plenty of non-Red Hat and non-CentOS products that will do that, but CentOS Stream will be around, and it will give you the same guarantees of stability that RHEL gives you, as a distribution you can actually contribute to. There's a blog post I linked there, from a former colleague of mine, that goes over this extensively, with a lot of diagrams that explain in more detail how this works, and it also has references to other talks you may want to watch to understand it. I know that every talk that references CentOS Stream in the last few months has had this kind of diagram, and I hope this helps make things a little clearer for people who weren't sure how this worked.

So, what can you do? You can contribute to Fedora; Fedora effectively influences what's going into the next major release. How can you contribute to Fedora? Join the Fedora project and do work there: file bugs, maintain packages, work with maintainers, drive changes. Fedora has a wealth of opportunities for people to contribute, both in Fedora proper and in EPEL, and it's an extremely welcoming community that I would encourage everybody to work with. If you want to assist in the bring-up of Stream, you can work with ELN. I don't actually know very much about ELN beyond what came out on the mailing list and what people have talked about at conferences. There's an ELN session later today (well, tonight for me, because it's going to be at midnight here); I highly recommend you join it if you're interested, because I expect that will be a very useful place to have these kinds of conversations. And finally, you can also contribute to Stream. Stream is now effectively a continuously delivered version of RHEL; it's a continuously delivered distribution that tracks the next minor release of RHEL, and it is something you can contribute to. You can file and fix bugs, you can send pull requests on packages, you can fix things, and you can join a SIG or make a SIG. SIGs are the building blocks of the CentOS community, and where all the actual major work happens.

So let's talk about what we are doing in Fedora, as an example. We've been involved with Fedora on and off for a while, but I'd say we started getting seriously involved about a year ago, when engineers from Facebook actually started to read the mailing lists, be involved with the project, and actually work there, not just file bugs and such.
Myself and a bunch of other engineers did work to try and get some of our tooling and software packaged in Fedora. Among other things, we got most of our application stack packaged there, not necessarily because we're using those packages at Facebook (the way we build our internal software at Facebook is very different), but because we think it is useful to have it out there so that people know it exists. We also think it's useful because Fedora runs on a variety of other architectures, so we can leverage the fact that it does CI for those architectures, and we can get feedback from that. And at the same time, I don't know if you've ever tried building a project that comes out of Facebook or Google or one of these large companies, but unless you work there, it is not particularly fun; my hope is that with these things actually packaged, people can use them and maybe provide feedback more directly. We also packaged tooling that we use in the infrastructure and that we were previously maintaining internally, things like dcrpm that I mentioned before; there's really no reason why we wouldn't package those in the distro.

We've done feature enablement work, and we've started getting involved with various SIGs. We've also had several change proposals out. Starting with Fedora 33, we worked with the community to switch Fedora to use Btrfs by default; with 34 we are working to have zstd compression on by default, and we are also working on the switch from earlyoom to systemd-oomd. I mentioned the DNF copy-on-write work a moment ago; that is also something we were working on in the 34 timeframe, although it got deferred to 35 because there's more work needed. And finally, we are doing work with the EPEL packagers SIG to make it easier to get packages branched and updated for EPEL when needed. You can find more in a talk that my colleague Michel gave; Michel works on the desktop team, and he's been helping us a lot in working with Fedora.

On the CentOS side, we started a SIG called Hyperscale, in collaboration with other companies. The goal for the SIG is to make it easier for large companies like Facebook and Twitter, but really for anybody else who wants to do development work in Stream, to have a place where they can build packages (packages that may be updates of base packages, for example), get them deployed on their infrastructure, leverage the upstream tooling, and contribute as much as possible. I'm going to go faster here, because I realize I'm running out of time. We are focusing on a few things. We're focusing on package backports: effectively, the stuff I mentioned before that we currently have in our own repository on GitHub, we would like to move within the SIG, because we think it would be a lot easier for people to contribute to, and we also think it would make it easier to move changes between Fedora and CentOS proper. These are all things we are working on right now, at various stages; systemd is something that we will probably publish next week, and as of yesterday we have a first small package working, which we used to actually test this process. We have a lot of different members of the SIG working on a bunch of other packages as well. We also use the SIG as a space to publish variants of base packages with different config settings.
For example, I mentioned that we rebuild iptables with the legacy backend still enabled; there's not really any reason for not publishing that, so we will have it available in the SIG as well. And for things like the RPM copy-on-write work that I mentioned, that requires rebuilding the entire packaging stack to actually test it; while this is something you could theoretically put in a Copr, we believe there is value in having a version that is tested and actually deployed in production available somewhere, so it can be tested and used before it actually gets published in mainline Fedora. There are also some discussions about building an alternative kernel for CentOS, to allow better feature enablement for things like Btrfs, but this is still in very early stages, at the discussion stage, so I don't have anything to show for it yet. Right now, however, if you go to git.centos.org you can find branches for the SIG on a few packages like systemd, and there is already a package repository you can enable with dnf install centos-release-hyperscale. I think right now there's only dwarfs in there, so it's not terribly useful, but that will change starting from next week, and I will make sure to post on centos-devel when that happens so people know it's there. In the near future we'll have another repository for experimental packages; then, as I mentioned, there's the kernel work being discussed, and we hope at some point to also publish cloud images.

I want to say a few words about deployment before I run out of time. I mentioned we use Chef; we've talked about Chef a lot before. Chef does config management; if you're familiar with Ansible or Puppet, it's in the same category of products. I'm going to skip this, because I think mostly everybody knows how it works at this point. There are links here to the things you can use if you're interested in using Chef in your infrastructure with the same model and the same tooling that Facebook uses, and some of this is also packaged in Fedora.

What I think people are more interested in hearing about is how we do updates, because that is something that has come up a lot. We split between minor and major updates. For minor updates, what we do is snapshot the repos, be they the repos that used to come from CentOS Linux 6, 7, or 8, or the repos that come out of CentOS Stream, and we take such a snapshot every couple of weeks. Once we have the snapshot, we roll it out to the fleet over the course of two weeks, and the way we roll it out is that from Chef we basically just run dnf upgrade, with some logic around the edges, of course, because sometimes you have to do special things, especially if you have internal packages that are rebuilt and shadow the base ones. We've been doing this since CentOS 6, as far as I remember. At this point it works very, very well, and it allows us to do this live, on machines running real traffic and real work, without any actual impact, all thanks to the binary compatibility that's guaranteed by CentOS.
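To give a flavor of what such a staggered rollout could look like, here's a hypothetical sketch of my own (not Facebook's actual Chef logic; the hostname and snapshot date are made up):

```python
# Hypothetical sketch of a two-week staggered rollout against a repo
# snapshot: each host upgrades once its deterministic slot comes up.
import hashlib
import subprocess
from datetime import date

ROLLOUT_DAYS = 14  # spread the fleet over two weeks

def rollout_slot(hostname: str) -> int:
    """Hash the hostname into a stable day slot within the rollout window."""
    return hashlib.sha256(hostname.encode()).digest()[0] % ROLLOUT_DAYS

def should_upgrade(hostname: str, snapshot_date: date) -> bool:
    days_since_snapshot = (date.today() - snapshot_date).days
    return days_since_snapshot >= rollout_slot(hostname)

if should_upgrade("web042.example.com", date(2021, 2, 1)):
    # Live upgrade against the snapshotted repos; binary compatibility
    # within the release is what makes this safe under real traffic.
    subprocess.run(["dnf", "-y", "upgrade"], check=True)
```

The real thing obviously layers health checks and service-specific logic on top, but the shape is the same: a deterministic slot per host, and a plain dnf upgrade when its turn comes.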
Now, this is for minor updates. For major updates, we just reprovision. We decided long ago that it's not worth the trouble to try and do major updates in place, even if it's technically possible. We also like the fact that we get a clean slate, and that this gives us a watershed event we can use to make policy changes if we want to. So, for example, when we do 7 to 8, we enable Btrfs by default, and that requires wiping the drives (well, effectively; yes, you can do the conversion in place, but we're not going to do that). Things like that are very useful, because if you're going to have to reimage anyway to update, you might as well do them at the same time. And we have tooling that is generally available to do this, leveraging maintenance windows: the tooling knows how many machines it can take out at the same time without impacting user services, and does this in a safe way.

As I said, we've done this a bunch: we did 5 to 6, 6 to 7, we're at the tail end of 7 to 8 right now, and we'll likely be doing 9 at the end of this year or next year. Again, there's a link to a talk where I discussed how the updates work specifically. Right now 85.5% and change of the fleet, as of this morning, is on Stream 8. We do have a long tail of 7, because you always have a long tail when you do these things. The long tail right now is things like switches: because, as I said, we reimage, and when a switch reboots everything connected to it goes down, switches take a while to update; they require extra coordination, because they effectively take the entire rack offline. And yes, the switches at Facebook are just computers that run CentOS; they're nothing special. Storage is a similar thing: storage machines have special replication constraints, because you can't take out all of the replicas at once, so that is also something that's taking a bit longer. And finally containers, which are currently still a mix of 7 and 8; we're hoping to make good headway on that this year and be done by the end of the year. Coming up is Stream 9, which I am eagerly awaiting to see the sources of. That's all I have, and I will happily take questions if there's anything left; if there isn't, we'll continue in the chat. Thank you