I will zip through the first couple of slides very quickly while we start up the rest of the stuff. Many of you probably know this, or are thinking to yourselves, didn't Fedora just move? And yes, Fedora has done several moves in the past. There were data centers before this, back when Fedora really started, but the first one that I became aware of was Phoenix 1. It was a set of four racks in a small data center, which was actually a military-hardened data center, built for in-case-of-nuclear-war type things. Very nice location. Completely locked down. You had to go through cattle gates to get into the building, and you had to go through cattle gates several more times to go from section to section. It had no IPv6, and that was something we were looking at. And my favorite thing was the old man behind the screen, which was, you know, full bulletproof glass. He was an old man there who had a sawed-off shotgun. So I always felt like this when I walked in.

We moved into Phoenix 2. Actually, the first one was in Mesa, which is a suburb of Phoenix, and we moved from Mesa to Chandler. This was done in about late 2009. We started with three or four racks; we would grow and we would shrink. OpenShift took a couple of racks for a bit. It was a really nice data center at the beginning. It had an automated, vacuum-powered ice cream machine. It had an automated coffee machine that would serve you different types of coffee depending on what you were doing. And this one was Kevin's. I always think of Kevin when I see this cartoon, because when the coffee machine went away, that was the major crying point. Do you want to say anything, Kevin? Yes, you can say something.

So next I will start the slideshow. All right. Start from current slide. All right. I think... yes, I had switched to the wrong slide. Give me one second while LibreOffice does its thing. All right, so yes. After the machine went away, we had a very hard-used Mr. Coffee, where your way of making coffee was pouring water over the dry grounds at the bottom and drinking whatever came out of that. Am I sharing still? Yes, yes, you are. Is this my slide? Yes. Yeah, you have it. Go ahead. I have one, two, you have four... or two, three, you have four, five. Got it.

Why move, since we were already in Phoenix? The space was coming up for renewal. Also, our space was kind of glommed on to the edge of the rest of Red Hat's space in that data center, so it was part of another big area, and they wanted to cut off that little section. There was no IPv6. (Sorry, I can't raise the microphone anymore; it's cranked, and if I mess with it, it goes away, so I'll try and shout. Sorry about that.) There was no IPv6 available there, or any plans to add it. We had bottlenecks with both storage and networking. Our storage was shared with some other groups, and that sometimes caused problems for us and sometimes for them; it was all kind of mixed in. And the one gig networking was fine most of the time, but occasionally caused us bottlenecks. Hardware overhauls affected too many services at once; if we tried to overhaul stuff, we ended up affecting more than we intended. And yes, the last point: Phoenix in summer is very hot. I recall a couple of times working there, and you're in the data center and it's 60 degrees Fahrenheit and dark, and you walk outside and open those doors and it's 120 degrees and very bright.
That's always fun. Or at 11 o'clock at night it would still be 120 degrees. Yes, there were a couple of times I walked out where I thought it was night because of the haboobs that would float in. And that picture there is one of those lovely ones. Yep.

Anyway, our goals for moving to the new data center. We were going to deploy 10 gig networking wherever we could. We don't have it completely everywhere, but we planned to put it on a lot of the bigger systems. Actually, every system now has 10 gig. Oh, it does? Okay, I thought there were still a few holdouts. Nope. No, that was Matt's present to us. Cool. IPv6 is available there, although right now you may notice it's not enabled. We ran into a problem with the deployment and decided to disable it for now, wait until things stabilize, and then re-enable it later in the month or maybe next month. But we should have good IPv6 connectivity there for whatever we want. We have dedicated storage there: we got a rack of storage hardware planned out by the Red Hat IT storage folks. We're not sharing that with anyone, it's sized for us, it has room for growth, and in general it's just a lot faster and better. There's much better peering at the new data center with all the cloud providers: AWS, Google Cloud, IBM Cloud, you name it, they have a point of presence there, so networking to them is very, very easy and very fast. And this was an opportunity for us to clean house and consolidate. We had a whole bunch of stuff that was spread out everywhere, or not placed together with things like it, or cables running across four racks to get over to the switch that had a free port, that kind of thing. It also let us remove a bunch of the outdated hardware that we were planning to get rid of at some point anyway; this gave us a good refresh cycle to get on new hardware from the get-go.

So, how do you do a move? It's a long journey, but the first thing you need to do is be prepared for meetings, lots and lots of meetings. You think to yourself, well, it's just a bunch of servers: you put them in a box, you send them over to the other site, they'll unbox them, and it'll all be done in a weekend. I had multiple people tell me this, and I was pretty aware that that was not how it was going to occur. But it got to the point, because there were so many groups involved, inside of Red Hat, outside of Red Hat, at the data center location, that there were several times when I had an IRC meeting going, another meeting here, and a third meeting on my phone with an earplug in the other ear. It was a lot. One of the things I found out, and it's clear why we had so many meetings, is that you get so many people going into a meeting, they have a lot of other things going on in their lives, and they come out of the meeting with something they thought they heard. By the next week you have a completely different plan. You have 20 different plans going on when you need to have one. We had a huge number of meetings in the middle, and by the end of it we had a lot fewer meetings, because everybody was in step with everything. It took a while to get everybody together on things, and every time you brought somebody new in, you had to have a bunch of meetings to get them caught up. It was like herding cats, a lot of cats. The plan for doing this turned into a very large, large thing. Kevin came up with a minimum viable Fedora, and we worked on what needed to be done there.
We took down CommuniShift and sent it off. At one point we thought we would have everything at one data center, but then we realized we had fewer racks than we were expecting, and that meant those things had to go somewhere else. So we moved those to RDU at the same time as the minimum viable Fedora systems got moved out of Phoenix. We then set up the basics over a weekend and a half, week and a half, sorry, and got the systems going, middle of March. Originally we thought we were going to be doing this in March, and it ended up moving to May, and then we finally got everything pushed out the door in June. Most of that was due to the fact that COVID-19 removed a lot of mobility, so we couldn't travel to data centers and do things. Instead we had to engage remote-hands help, which also meant more meetings to get them up to speed on what needed to be done at each site. Next slide. Yes, sir.

It has IPv6, as I mentioned; we're probably going to be turning that on in the next month or so. We replaced a whole bunch of old Dell R520s with fewer R640s, and they're really nice. They have a lot of resources, and we can really densely pack VMs on those things, so it's been really great. We have less room to grow there, but we have a lot higher density, so we're using the space much more efficiently, I think. Again, very fast interconnects and peering, and 10 gig networking for pretty much everything. That's really improved things like schlepping images around or syncing data from one place to another, et cetera, et cetera. Another thing is, I remember that for the first while, when we were under the minimum viable Fedora, we were expecting we would not be able to run as many services as we had. We were actually able to run more, because the new servers have a lot more memory than we usually get. And the 10 gig network helped too: even though we never saturated a one gig link, the latency and such on the one gig links was enough to make things slower overall, and builds seem to have improved under 10 gig. Yeah, one little tidbit here is the updates pushes. We push updates for all active Fedoras, all active EPELs, and Flatpaks and containers and all those, every day, and now all of them together take about an hour and a half, which is significantly less than it took in Phoenix for that pile of things. Yeah, it was like a day at one point. It was bad.

So, next slide. Yes, sir. So what's left undone? And there are still things that are undone. As mentioned, we didn't have enough space, or there were various other logistical issues, so we moved some things to RDU already. We thought at the time, you know, these things are going to arrive, that's local to Smooge, he can go there and work on it, and it should be pretty easy to get things back up. Unfortunately, COVID-19 hit, so there are very strict restrictions on that data center, and we're still trying to finish IAD2, which is the higher priority. So he can't really go down there, because he's working on IAD2. And it turns out we need to re-architect the way the network is set up in that rack. So there are all kinds of logistical issues around the RDU stuff, and we have been working to try and bring some of that online. Some of it is obviously more important than the rest: the retrace instances are there, the Copr ppc64le builders, our maintainer test instances.
And it's just going to take more time; it's going to be kind of its own project. It was technically part of this project, but it hasn't had focus because everything has been going toward finishing IAD2. So we're going to try and concentrate on that as soon as we finish up the IAD2 stuff, and hopefully get some of it online. It's just been a lot harder than we anticipated. Staging is still being brought up, and this has been complicated a little by the fact that we're building up our new authentication system, Noggin, in staging. We want to get that all set up so that we can deploy it later in production. That's going to take the next few weeks or so, to finish bringing that stuff up. And there are some openQA resources, the secondary, sorry, alternative architecture stuff, ARM and ppc64, and some more workers for x86, and those will probably come online in the next month also. They're there, they're ready to go; we just need to get them installed and set up on the right networking. Okay.

This move really needed a lot of help from a lot of people, and there are not enough thank-yous for everybody who has put up with this: from family members, to coworkers who dropped other projects, to a lot of patience from a lot of people who had things they needed done right away but realized they could not be done until sometime in the future. This is not a complete list. Thank you, Aoife, for all the organizational help, herding all the various groups that we needed to meet with, setting things up, and making sure things worked. Yes, yes, this project would not have gotten completed without Aoife's work putting things together, keeping us on track, and taking a lot of crap from people. So, thank you. Well, you guys did all of the work. Thank you.

All right, we're down to the question section. I will come out of full screen and go back to the question picture, because I like that picture. That was actually going to be the only image I was going to use for this entire talk, just variations of that one meme. But I am here to answer questions, do we have any? In general, I think it will take about a month of planning to figure out all the VLANs and such, how that's going to be done and where things are going to be, and then we can look at implementing the plan for the RDU stand-up. So it's not going to be as big as this, I hope, but it's taken a while and a lot of other plans have been waiting. Hello, Aoife. It's okay. It's all right.

We are not using Ceph. We just do not have the physical hardware for it, basically. Ceph and Gluster and the other things basically trade the smarts of a NetApp for a pile of commodity boxes: I'd have to take the NetApp out and fill the rack up with an equivalent number of storage disks in Dells or Supermicros or whatever, and I just don't have the space for it. To go to it, I'd have to drop a rack of other equipment, which is already assigned to other things that need to be done. Also, the Red Hat folks are managing the NetApp for us, and we have some other large needs at the moment. We've looked at some other things, but we don't have the staff to manage it, and we don't have the time to manage it. Everybody always thinks, you know, "I've got this small thing and I've done it and it shouldn't take that long," or "I have my own setup." But we have to mirror our stuff throughout the company; we have to transfer the data around.
There is a large number of things that want, that expect, zero latency. And the setup of a Ceph cluster or such usually asks for, you know, N front-end nodes and Y brick nodes and so on, and then they also want some control over the switches, which we don't have. So that's one of the things: setting up Ceph would be as big a project as moving us across the country.

About the mirroring feature of the NetApp: yes, we are using it. They are replicating all of our data over to another NetApp cluster there, and then that data gets replicated on a different set of networks throughout engineering, so there's multiple SnapMirroring going on. We're also able to use some of the features now because we have good AWS connectivity, so some of the stuff we have backed up that we have to keep but don't ever really want to look at, like old builds for the s390 and the PPC, is being SnapMirrored off to AWS into Glacier. So that allows us to do some things. I expect we could do it all in Ceph or some other storage thing; it's just, you know, time. We can either set this up or we can do builds, and I think everybody wants the builds as much as they want anything else.

All right, there was one question earlier; I don't know whether you just talked about it. Zach wanted to know what the oldest box in the data center was, and also, was everything on one gig networking on the old network? Oh, that was the other thing we got rid of. We got rid of some IBM x3550s that I had gotten in 2009 and had been using for various services. So part of it was not just the R520s, which we got in 2012, I think; there were several from back before IBM sold that line to Lenovo, so these were the System x systems we had gotten, and our data analysis, backups, some log analysis, some other things, and all of the old cloud hardware was on those. Currently, the oldest equipment I think we have is some six-year-old Dell R630s and a couple pieces of loaner equipment that IBM graciously loans us for the Power systems. There's a couple of POWER8s that are probably of the same era as the six-year-olds, or older.

No, it does not. We jokingly asked that question a number of times, but yes, yes. Thank you. Thank you. I'll have flashbacks now for the next three hours. For anyone who doesn't understand, that's a running joke, because there are certain people who are very interested in Internet2, but for the most part it's not very relevant.

So, the final question, I think, and then we'll call it quits. Our management software was split across multiple things. For the IBM systems I keep a Windows XP box so I can get into the management interfaces of the old IBMs; I can finally retire that, so I don't have to do that anymore. The Dell R520s and that 20-series generation were all iDRAC 7s, and I found that Java 10 does not work well with them. We're now able to use HTML consoles for almost everything, which is okay, except I'm an Emacs user, so my hands use a lot of control keys, and that causes the browser to print things to my printer. Another challenge we're going to hit very, very soon is that Firefox just turned off TLS 1.0 and 1.1, so that's going to break vast quantities of these things, and we're going to have to keep an old VM around or something. Yeah, I'm thinking we'll be...
Yep, I think we'll be setting up a squid proxy; that's what people have been suggesting, because all those systems are on a locked-down network anyway. Uh, that's about it for us, and I hope you all have enjoyed this move as much as I have. Let's never do it again. So we'll be doing another move in three years. Thank you. See us again then.