Now, normally you don't get to film inside of data centers, but I was able to get access to an abandoned one a few months ago for a video, and you'll find that link down below. That video got the attention of a much, much larger, modern data center. My friends over at DEFT, well, newly acquired friends, watched that video, and in that video I'd asked, hey, can I film at your data center? Let me know, reach out. Well, DEFT reached out and said, you sure can. We'll even give you a guided tour. We're going to cover the power systems. We're going to cover the cooling systems. We're going to cover the centrifugal UPS system. They gave me an absolute brain dump of knowledge that I was able to film, record, and assemble into this video, because I knew my audience would love it. This is in no way sponsored. This is just nerds being nerds, completely geeking out about all the systems there. I had so much fun on this tour, and I'm so happy I could bring you along. There's a lot to cover, so let's get started. Our tour begins in the centrifugal UPS room, because it was just fascinating to me that there's a spinning UPS instead of batteries. That room is actually the loudest room recorded throughout this entire video, but the audio is still good; you can still understand it. And the first question I asked, which wasn't on camera, is: how long does it take to switch from a total loss of municipal power to these UPSs running before the generators start? Well, that's where we're going to start our tour. You said 16 seconds. So 16 seconds is the amount of time it takes for one of the generators to go from off to full speed. Think of a semi rig doing a quarter-mile strip run; it's doing it in about 16 seconds. Wow. So that's what's needed to get the facility from not having power fed by the utility to being self-sufficient on its own.
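As a rough sanity check on those 16 seconds, here's a back-of-the-envelope sketch of how much energy a spinning UPS has to bank to carry a load until the generators come up. Every rotor number here (mass, radius, RPM, usable fraction, load) is a made-up illustrative value, not the specs of the units on this tour.

```python
# Back-of-the-envelope flywheel UPS ride-through estimate.
# All figures are illustrative assumptions, not any real unit's specs.

import math

def flywheel_energy_j(mass_kg: float, radius_m: float, rpm: float) -> float:
    """Kinetic energy of a solid-cylinder rotor: E = 1/2 * I * w^2, I = 1/2 * m * r^2."""
    inertia = 0.5 * mass_kg * radius_m ** 2   # moment of inertia (kg*m^2)
    omega = rpm * 2 * math.pi / 60            # angular velocity (rad/s)
    return 0.5 * inertia * omega ** 2

def ride_through_s(energy_j: float, load_w: float, usable_fraction: float = 0.5) -> float:
    """Seconds of load the rotor can carry; only part of the energy is usable
    before it slows below the speed needed to hold output voltage."""
    return energy_j * usable_fraction / load_w

# Hypothetical 1,000 kg rotor, 0.5 m radius, spinning at 10,000 RPM
e = flywheel_energy_j(1000, 0.5, 10_000)
print(f"stored energy: {e / 1e6:.1f} MJ")                     # ~68.5 MJ
print(f"ride-through at 1.3 MW: {ride_through_s(e, 1.3e6):.1f} s")  # ~26 s
```

Even with those invented numbers, a big rotor comfortably covers a 16-second generator start, which is the whole point of the design.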
It can range anywhere from about 250 kilowatts up to about 1.3 megawatts in size. Okay. And what happens is there's a large rotating drum inside there that's spinning, magnetically levitated so that it has very little friction, in a hermetically sealed container. When the power goes out, the resistance of the motor windings slows that drum down as it generates power, and it slowly spins down. When the generators kick on and everything's hunky-dory, those same motor windings act as a motor again and spin it back up to get it going. How much maintenance do these need? They're pretty maintenance-free, I'm assuming. They're pretty maintenance-free, but there is still a very rigorous schedule to it. Yeah. The floor here had to be laser-leveled to make sure everything was nice and flat. And then these things have to be calibrated and maintained. And if there's an issue, your on-site team needs to be certified to do the maintenance on them. Again, hermetically sealed is a little bit of a challenge to service on-site, but that's why you have redundant numbers of these in a data center. So what we're looking at is the diesel generator backbone of a data center. Power goes out, and this is ultimately what kicks on to provide power for the building. They range in size anywhere from about 500 kilowatts up to about two and a half to three megawatts apiece. In this case, this unit has four turbos on it, allowing it to generate a massive amount of horsepower. But what's really cool about it is the level of redundancy that's built into this. You have dual starter motors, so that if one starter motor has an issue, there's a secondary one ready to go. You have fuel filtration systems next to the generators ready to go. And you'll notice a raised barrier for fuel containment in case there is an issue.
And then usually you would end up having what we call a day tank, which is a small fuel storage on-site, ready to go next to the generators, able to provide fuel immediately if they were to kick on. In this situation, the cold air will come in from this side of the building, get sucked through the radiator, and then go out the louvers on the backside. Those will all automatically open up at the point that these things are requested to turn on. And again, these things go from sitting still, not on, to a full, ear-bleeding rush in about 16 seconds. If these come on, run that way. Yes, exit the room. You'll see there's ear protection hanging by the door there, but it's all automatic, with regular testing. Like a giant muffler, essentially. Exactly. That is really cool. And then you've got the dual filter system here. Being in Chicago, I was just on the U-boat tour, and it's got those giant diesel engines. Yes, a very similar sort of concept. You've got redundant, sorry, filters for the radiator system, the oil system. Everything is designed to be easy to service and redundant, so that no single component can take this unit offline. And that is just an absolutely massive improvement to have when you're working on a setup like this. Yeah, and I see all the isolation to keep this from vibrating, because vibration is the enemy of data centers. 100%. So these sit on a full suspension tray that's load-rated, and everything has a little bit of a flex coupling so that it can shift a little and vibrate without causing problems. Yeah, it's almost like even where it connects to the exhaust, everything. Because these things do have some movement when they start. They're just so massive. Exactly. And what you have here is the diesel engine.
So this is very recognizable, but this back half is the component that is the actual electromagnetic generator. It engages through a clutch system, this turns a whole bunch of windings inside, and then this kicks out typically 480 to 600 volts, some will go higher than that, into a power distribution system. And then there's a regulator system on here that helps maintain the RPMs so that the waveform of the power is nice and consistent. Yes, because you do not connect a generator out of sync. Correct. You want it in sync, and you want the waveform to be 60 hertz, because that's what the power is in the U.S. So your generator needs to match that, and if the generator revs down a little bit, your hertz cycle is going to compress or expand depending on that. Yeah, keeping everything in sync is one of those things you don't think about when you're just doing a switchover at home or something small. But as you scale, it's like two waves of high voltage crashing together is going to shudder these things in a way that's actually very scary. Very much so. So in a server, when you deal with redundant power supplies, what they're doing is bringing in two power supplies that convert AC to DC, and then they connect the DC bus. DC is a lot easier to do that with; it's easier to balance. But when you deal with AC, it's a lot more of a precise dance that you have to do, and a lot more equipment involved to make sure it's done right the first time. Yeah, that phasing is overlooked sometimes by people who may not be familiar with it. But there are a few books regarding this where they talk about attacks on facilities where just putting things out of phase was enough to damage them. Definitely. So when you're dealing with building blocks of this size, some facilities range anywhere from a couple megawatts up to dozens of megawatts.
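That "in sync before you connect" requirement is exactly what a sync-check relay enforces before a breaker is allowed to close. Here's a minimal sketch of the decision; the tolerance values are illustrative assumptions (real relays also check voltage magnitude and use tighter, configurable windows):

```python
# Toy sketch of the synchronization check a paralleling controller performs
# before latching a generator onto a live bus. Thresholds are illustrative;
# real sync-check relays are far more sophisticated.

def ok_to_close(bus_hz: float, gen_hz: float,
                bus_phase_deg: float, gen_phase_deg: float,
                max_slip_hz: float = 0.1, max_phase_deg: float = 10.0) -> bool:
    """Allow breaker close only when frequency and phase angle match closely.
    Closing out of phase slams two sine waves together and can wreck the machine."""
    slip = abs(bus_hz - gen_hz)
    # phase difference, wrapped into [-180, 180)
    diff = (gen_phase_deg - bus_phase_deg + 180.0) % 360.0 - 180.0
    return slip <= max_slip_hz and abs(diff) <= max_phase_deg

print(ok_to_close(60.00, 59.98, 0.0, 4.0))    # in sync -> True
print(ok_to_close(60.00, 59.98, 0.0, 170.0))  # nearly opposite phase -> False
```

The failure mode described next, where a generator refuses to latch onto a bad bus waveform, is this same check running in the other direction.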
And when you're dealing with power grids of that size, one generator isn't big enough to handle what you need. So you have busing or paralleling equipment. And if the first generator attaches to a bus or a paralleling piece of equipment and has a bad waveform for the sine wave, then it can possibly set the standard. And so when the second generator latches on, it says, hey, I can't match that standard. It's going to say, nope, not going to latch on, I'm going to fail. And you'll end up with only one of maybe your seven generators attached. You can't run an entire data center on one seventh of the critical power generation. Yeah. And one thing I'll note: it does not smell like diesel in here. It doesn't stink. You could eat off these floors. Yeah, it may be hard to tell, but everything in here is really clean for being a diesel room that's got louvers that pull air in and everything else. That's actually something I'm just now noticing, especially after doing that little U-boat tour, which smells of, as someone would say, burnt crayons everywhere, like industrial diesel equipment does. And being from Detroit, I go to a lot of manufacturing facilities, so it's also cool to see these say Detroit Diesel. They've been fantastic. These ones have been in operation for a number of years, no trouble, and they're still kicking, doing great. Very cool. This area is the pump room for phase one. And what you see are the redundant pumps for cooling, from this side of the chiller plant to the cooling loops inside each of the rooms. So the coolant goes through, pumps out, routes through; it goes up, then actually goes down, routes underneath, and goes into every single room. Most of the rooms have their own independent chain to keep that kind of diversity going and keep it independent.
But it has to run coolant all the way through the perimeter of the room, because the computer room air handlers don't have compressors on them. They're just big radiators that are taking that hot air, pulling it through the radiator, cooling it down with the coolant, and then pushing that coolant right back this way. So these pumps are pushing all the way to almost the back half of the building and back. And what they do is: cold water goes in, hot water comes out, and that hot water comes over to the centrifugal chillers here, which then exchange that heat and concentrate it out to the evaporation system and the cooling towers. For an idea of scale, your normal residential air conditioning unit for a house is usually anywhere from three to five tons. Each one of these green units here is 1,050 tons. There's a redundant number of them here so that one can take over if we're at full load, but we're not usually at full load. So maybe we have two or three of them on, depending on how much heat generation there is, and they rotate the load across them to balance the hours. Otherwise you would wear out half of the units and the other half would be almost brand new. On the coolant side, we have a million-gallon water tank over in the corner there and another million over on that side for the other phase. Those are like cold batteries. So think of it this way: it costs a lot of money during the day to cool things down, right? Power is usually at a peak rate then, but at night it's a lot cheaper. So at night, when the heat generation is still static but the efficiency is super good, they can cool down a million gallons of water, drop its temperature, and then during the peak times of the day, not use the centrifugal chillers as much, but bring the cold water into the system to help cool things down.
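To put numbers on the "cold battery" idea, here's a rough estimate of how much cooling a million-gallon tank can bank overnight. The 10 °F temperature swing is my own assumption for illustration; the facility's actual setpoints weren't given on the tour.

```python
# Rough estimate of the "cold battery" capacity of a million-gallon
# chilled-water tank. The 10 degF swing is an illustrative assumption.

GAL_TO_KG = 3.785          # mass of a gallon of water (kg)
C_WATER = 4186.0           # specific heat of water (J/(kg*K))
TON_W = 3517.0             # 1 ton of cooling = 12,000 BTU/hr ~= 3,517 W

def tank_ton_hours(gallons: float, delta_t_f: float) -> float:
    """Cooling energy banked by chilling the tank delta_t_f degrees Fahrenheit."""
    delta_t_k = delta_t_f * 5.0 / 9.0              # degF swing -> kelvin
    joules = gallons * GAL_TO_KG * C_WATER * delta_t_k
    return joules / (TON_W * 3600.0)               # joules -> ton-hours

th = tank_ton_hours(1_000_000, 10.0)
print(f"{th:,.0f} ton-hours banked")               # ~6,952 ton-hours
print(f"~{th / 1050:.1f} h of one 1,050-ton chiller's output")
```

So under that assumed swing, one tank can stand in for a whole 1,050-ton chiller for most of a peak-rate afternoon, which is exactly the trade being described.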
So they use that water for heat transfer, warming it up instead of running all the other equipment. Got it. So now you're taking this million gallons of water you chilled down and slowly heating it up during the most expensive times of the day, allowing you to lower your power consumption. The thermal efficiency is really interesting, because there's a lot of thought put into it. It's not just, hey, run until it's cool; it's, how do we do this the most efficient way? Totally. This half is fiber coming into the area, fiber cross-connects; we have a redundant side over there, for diversity, so that if you take a chainsaw to the middle of the rack here, you're not going to take out both paths. So we have powered equipment on this side to handle things like waves to other facilities. That's dark fiber where we're lighting it up, and we can slice it up into as many 10 gig connections as we want across that fiber. And I say 10 gig, but really there are 40 and 100 gig variants in sizes too, and we do that all across the board. We've actually recently had a request for a terabit per second going across. So we're at those sorts of scales where people are requesting that sort of connectivity. Terabit, nice. Terabit per second. Then there's distribution to the customer cabinets. This is stuff where each of these panels contains 144 fiber strands, and you need two strands for communication. So we can handle a really nice plethora of customers, and we've slowly been filling them up and adding additional ones as customers' needs have changed. So with this backhaul you can facilitate all the way back to, like, the on-prem side. You can light up dark fiber from an office building within the area and be like, hey, my servers aren't in the server room, they're in your data center. Exactly. And yet it's coming across locally as if you were sitting on site with them. Yeah.
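Some quick math on the fiber plant just described. The DWDM channel counts below are generic figures I've chosen for illustration, not the specific gear in this facility:

```python
# Quick capacity math for the fiber distribution described above.
# The DWDM channel plan is a generic illustration, not this facility's gear.

STRANDS_PER_PANEL = 144
STRANDS_PER_CONNECTION = 2       # one transmit, one receive

duplex_ports = STRANDS_PER_PANEL // STRANDS_PER_CONNECTION
print(f"{duplex_ports} duplex connections per panel")   # 72

# One dark-fiber pair lit with DWDM: e.g. 40 channels at 100 Gb/s each
channels, gbps_per_channel = 40, 100
print(f"one lit pair: {channels * gbps_per_channel / 1000:.0f} Tb/s")
```

That last line is why a terabit-per-second request isn't exotic: slicing one lit pair into wavelengths gets you there with room to spare.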
We see that very commonly done with office buildings for large organizations, where you might want that between your primary data center, your redundant or DR data center, and your office building. We can set up those rings, help manage them, and make sure that you're going through diverse carriers too, because sometimes carriers will go through the same manhole or the same conduit, and a backhoe will be able to take both of them out in some scenarios. Next, copper distribution: anything from IPKVM, that's your keyboard, video, mouse over a network connection. Really handy when you need to get into the BIOS of a server and it doesn't have that ability built in. We have those sorts of services available. Then over here we start running into the copper itself. And what's interesting is we dealt a lot with copper originally, 14 years ago, and it's shifted a lot to fiber. So you can see, as the infrastructure has changed out, it's just all fiber now. We're able to reuse these bundles because we have a little bit of extra slack down there. So when we do equipment upgrades, we're not redoing everything from scratch, but these cables all run to existing panels. So when a technician is doing a new service turn-up, they're only working in a panel where they're connecting what's needed. Never are they working on the live devices. I think it's something you mentioned with IPKVM. Yes. The fact people don't realize is there is enterprise equipment that does not necessarily have IPMI. 100%. Yeah. This is the thing that happens, and when you run into a problem with it, the other option is extending a keyboard with a crash cart to the server and sitting in front of it. So these things are really great solutions for us to be able to say, let's help you out with something. Yeah. We actually have our own version of that using a Raspberry Pi, which is the Raspberry Pi IPKVM service. Yes.
We deployed a number of them because we can go put one in a customer's cabinet, connect it, and devote it to them. And then when they're done with it, we retrieve it. So we have that as a service for our customers. That is really cool. So this is our Juniper switch equipment; we're a Juniper shop. We're agnostic when it comes to customer equipment, but for us, our preference is Juniper. This is the third iteration of equipment that's been in these racks here. So we've been able to go from Cisco to Juniper, and from one version of one model to another version of a different model, all live, where the customers don't see any downtime. The reason is redundancy. I have my distribution at one, my secondary right there. I can upgrade one while everything else is being handled by the other unit. That kind of active-active functionality means I know I can flip over in case I have to do firmware updates, software updates, or anything else on this. Blue means it's an Ethernet connection; green means it's, like, a serial or IPKVM or some sort of non-TCP/IP connection. And then when we started building out, we had four of these racks devoted, but we built out one. And as we grew, we built out the rest. And what happened is we needed a little bit more density: we suddenly had another 100 cabinets added to this room on the build plan that were never in the original design. But because we grow organically, we were able to change our technology to a slightly denser system that allowed easier maintenance and management. So instead of punchdown on the back, I now have a module that I can crimp, terminate, and click in. Very similar to the keystone system that you see in residential wiring, but this will recertify at 10 gig without any problem, day in and day out. I've got connections 15 years old that will recertify the same way they did on day one. Modular is the way to go. That's the only way to build stuff.
And we have racks with a higher density than this previous one: 48 connections in a 1U size. A little hard to work on. So this is what happens when you're working with an organization that cares this much and is growing organically: it gets a little crazy. You can take a picture of this. People love this. This is the internet's favorite part of the internet. Totally. This is 15 years of continual development and growth. This isn't a day-one-and-done. The very first racks were these ones, and it extended over here for our customers. Our equipment on the bottom, customers on the top, and it just kept growing that way. The conduits coming in from the underside there are fiber connections from the entrances to the building, so point-of-entry or meet-me-room sorts of connections. We have some connections there and some connections over there. And you're doing it all right. I mean, it's the little things, like the sleeves. Did you get a picture? Yeah, the sleeves. Yep. The cold side is cool, but it's the hot side where you see the difference in people's experience, and that's where we really come along to help our customers out. A 42-inch raised floor. This is load-bearing glass tile that we acquired to be able to do this very thing. You can see the color coordination of the power cables for the cabinets, but those large conduits are actually distributing power from the room distribution units to our power panels. And just because who doesn't like interactive technology: I'm heating up a sensor with my hand here, and it's going to open up the louvers there, and you can feel how much static pressure there is. It's opening now. Oh yeah. So this is the part that you don't really get a chance to experience. That is a five-ton tile right there. So in theory, that has five tons of cooling. Do you know how the term ton came along for cooling? That I don't know.
It's a fun story, and it's quick. It used to be related to how much ice you needed to cool a room. Back in the days when that was how you'd order for your, you know, Midwestern movie theater: I need three tons of ice to be put in, and they'd have a fan pushing air across the ice to cool the room down. So if you needed three tons of cooling, that was the amount of ice you'd order. Nowadays we know a ton of cooling to be roughly 11 to 12,000 BTU. I say roughly: it's 12,000 BTU, but when you do calculations for a room, we like a little headroom, so we calculate with 11,000 BTU. So you have roughly about four kilowatts to a ton of cooling. So right there you just experienced 20 kilowatts of cooling. That is neat. People don't really understand what it means until they get to experience it a little. This is a 20-ton, sorry, 40-ton air handler. What it has is a big radiator coil here and variable-speed motors for the fans on the underside, and it's baffled up to get the hottest of the hot air, because the hotter the air you can get into this thing, the more efficient the thermal transfer is to the cooling loop, the more heat it offloads, and the colder the air can be on the underside there. And that solves the problem, you'll notice. There are redundant power connections on these; they're automatic, but if we had to go to the bypass or the utility maintenance grid, we have the ability to manually switch them. Now, the power grid in this building is a little different than everywhere else. There's something called an STS, a static transfer switch, in the data center, and the purpose of that device is to take power from one input and switch it to the other. There are two styles. The older-fashioned style is mechanical, where it takes the connection, breaks it, and then makes a new one. You have an interruption, power goes out, servers turn off.
Then there are, like, the SSDs of the market: solid-state static transfer switches, which is what these are. They have the ability to switch power faster than your power supply can notice. So right now these units have two inputs feeding them, and they can choose: hey, I have a primary input and a secondary. If the primary input has any problem, it switches to the secondary. Because it can do that, I can take power from here, run it to one of my power panels, and you have dual-fed capability feeding each panel and two power feeds going to your cabinet. So now we have the switching at the room. That's expensive to do, because these things cost a lot of money. So traditional data centers will usually put this in the back of house next to the UPS. They put the static transfer switch with the generator/UPS side and the utility as the inputs. So the utility goes dark, it switches over to the generator side, and the UPS rides out the gap on the output until the generator turns on, says I'm good, and it latches on. That creates a pathway for the STS to use, and it starts charging the UPS back up. So UPS, static transfer switch, generator, utility: that's the common building block. Wow. This is what 1.3 megawatts of power distribution looks like. More often than not, you have very large breaker panels distributing power. So the power comes into the room, breaks out into a larger-than-residential-size distribution panel, and then those go to each of the STSs, which in turn go to each of the power distribution units. 1.3 megawatts, and that's just one unit. That is a small city. Yeah, and this facility has 36 megawatts of capacity in it. We have facilities that we're in that have up to 300 megawatts of power, too. That's wild. It is really, really crazy. When we're dealing with colocation, not everyone needs a full refrigerator-size cabinet for their servers. So we offer half cabinets and quarter-cab compartments.
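Stepping back to those solid-state transfer switches for a second, the failover behavior can be sketched in a few lines. This is a toy model: a real STS makes this decision in power electronics within a fraction of an AC cycle, well inside a server power supply's hold-up time, and the voltage threshold here is an illustrative assumption.

```python
# Toy model of a solid-state static transfer switch (STS): feed the load
# from the primary input, flip to the secondary when the primary is unhealthy.
# A real STS does this faster than a server PSU can notice; the threshold
# below is illustrative.

from dataclasses import dataclass

@dataclass
class Source:
    name: str
    volts: float

class StaticTransferSwitch:
    def __init__(self, primary: Source, secondary: Source,
                 min_volts: float = 187.0):        # ~90% of 208 V nominal
        self.primary, self.secondary = primary, secondary
        self.min_volts = min_volts
        self.active = primary

    def evaluate(self) -> str:
        """Pick which input feeds the load; prefer primary when it's healthy."""
        if self.primary.volts >= self.min_volts:
            self.active = self.primary
        elif self.secondary.volts >= self.min_volts:
            self.active = self.secondary
        return self.active.name

a, b = Source("utility-A", 208.0), Source("utility-B", 208.0)
sts = StaticTransferSwitch(a, b)
print(sts.evaluate())      # utility-A
a.volts = 0.0              # primary input fails
print(sts.evaluate())      # utility-B
```

The room-level placement described above just means this decision happens close to the cabinets, so every panel gets two independent feeds instead of one switch back near the UPS.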
I think they're on the other side there, so that we can kind of right-size. Sometimes you just need your redundant equipment or some offsite gear, so we can help alleviate some of the costs by putting two customers in the same physical space. So again, the quarter-cab idea: you have a fully isolated, secure panel area where you're able to have your space, but you don't need a massive amount, so it's your offsite, your redundant gear. It's worth noting, too, these are not your generic, all-the-same cabinet keys. Correct. Everyone has different needs, and to help facilitate that, this is a combination lock. We have a key to get in on our side for our technicians, but there's a code that the customer has, can use, and can give out. We can change the code at will for them, so that they can rotate it as needed. We do offer fingerprint-based biometric and card-access systems at the cabinet level, as sometimes they actually have a requirement that says, I need a card reader or I need a fingerprint reader, but they don't need a cage to do that. Right. Now, the next and final stop on our tour is the parts room. There are several technicians working in there, and it's cool seeing all the work they do on behalf of people that have things in the colo. Obviously, you can put stuff in the colo and maybe go there yourself, but what if you went there and forgot a cable? Or what if a drive goes bad and you just don't have time to drive out to the colo? They have spare drives, they have spare cables, and they have a lot of them. The reason why we have white, yellow, blue, purple, and green network cables isn't because we really couldn't make up our mind as to what color we wanted to standardize on. It's because maybe you have your primary switch and your secondary switch, and you want to make sure, visually, at a glance when you walk up to your cabinet, that every server has both connections. Yes.
If you bring your equipment here, you have the spare parts that will match your layout. Exactly. That's a big difference. And if you have your own standard, we can help you maintain it; if you don't have a standard, we can use our best practices to help you with that. We also carry Cat 5 and Cat 6, but also DAC cables. When you start getting into the 10 gig, 25 gig, and 40 gig things, maybe you don't have a five-meter 10 gig DAC, and you came on site to put up your new storage array, and, oh, I don't have the cable or the connection to do what I need. Well, this is where we're able to help you out, because for us it's a lot easier to have it on site and ready to go. I think it's fun too, because you don't realize it until you've sent some guy on a two- or three-hour drive to the data center, like, oh, you forgot the DAC cable. Or you ship something off, and the manufacturer's like, here you go, you just spent forty thousand dollars on this new storage array, and it's great, oh, and they didn't include the cables, right? Or the rack mounting kit. So by having the parts on site here, we're able to help you out and maybe even save you that two-hour drive, because, yeah, we're here 24 by 7, so why don't we rack it up for you? Absolutely. So, SSDs: they wear out, they go bad, and they're specific to size. They're the most common failure now, and the difference between, oh, that's an 800 gig versus a 960 gig, can be the difference between you having a good day and a bad day. Very much so. For us, we stock all the common sizes, from enterprise-grade rotational drives to solid-state drives to PCIe memory-based systems, so that we can plug one in and help out. We have to maintain systems ourselves, so we need these spares for our own internal use, but why not make them available to our customers? But the fact that you have all this, too. Sometimes you have a half-height PCI slot and a full-length card. Uh, yeah. And lord knows you've had to connect a storage array and, which one did you need, the mini-SAS or... Then there are hundreds of power distribution units, with their variations between manufacturers and feature sets. Sometimes when you're holding a cold spare for someone, you know what you need has to fit this way, where the outlets are going this way, or maybe you need something that's a little wider. If you have one in production and it fails on you, you have to have a spare. The problem is, well, it depends: did you have something that ran 208 volts at 30 amps, or were you running something that's, you know, 40 or 50 amps? Is it a twist-lock plug that looks like that? And then if you come over to this side, is it one of these ones where we start getting into the fun 50-amp level? Heaven forbid you're at the 60-amp level, which gets to something around that size. Oh, there's the beefy one. And I think we have one more fun one here; I should have a 60-amp receptacle floating around under here. And this kind of comes down to the spares. Here we go. So if you've taken the Metro at all, yeah, that is the twist-lock system. You start running at 60 amps when you're dealing with multi-phase power. It's the same style connector used on the train systems. Yeah, neat. So now the dilemma is: great, all these choices, which one's the right one? Well, that depends. The problem is you have all these different form factors, because you have all those different connectors and different requirements, which means when you have to ship something to a different country that has a different power standard, what do you choose? It's easy to make a mistake here when you have all these options and they're all very similar to each other. So what we have over here is a really cool product from Eaton, and they call it the uPDU. What you have is a PDU with a connector here, a multi-pin connector, that, depending on how the cable's wired up, changes how the PDU works. So this thing can be a 20 amp, 208 volt, single-phase power distribution unit that can handle about three kilowatts of power, or I change the cable out and it can
be a monster that can handle 17 kilowatts of power. Oh man. All without having to change out the chassis. Very cool. And we'll show you a little later on why not having to change out the PDU is so important, because these things get buried inside the cabinet. I like the little green test lights. Totally, it's just cool. So these are brand new. Now imagine if I had this chassis: I could buy a couple and have them at every data center we have across the world, sitting there waiting. You order a 30-amp circuit that's 208 power? Great, I send out a cable in a FedEx envelope overnight, an easy international shipment, and you're ready to go. Oh, you had a problem with one of these units in your cabinet? Great: I unplug the cable, pull out the box, put a new box in, done. The most common scenario we run into, though, is: hey, I started buying services from you five years ago when my power needs were this, but now I've grown to the point where I need something more. What does it take to upgrade? Normally, it's a two-person job of installing additional power circuits on the floor, then physically removing one of these things while it's live, putting a new one in, migrating all the power plugs, and then doing that again. And when you have to do that twice in a live environment, it takes a lot of time and it's highly stressful. Or I just change out the cable and you're done. That's just simple. It's truly insane how elegant this solution is for being able to handle North American, South American, and international European and Asia-Pacific power needs, all with the same chassis, just a different cable. A big thank-you again to DEFT for this tour and for allowing me access to their data center. This was just a lot of fun. I really enjoyed it, and I hope you enjoyed it as well. Leave your thoughts and comments down below and let me know what you liked or didn't like. Like and subscribe to see more content on this channel. And of course, sign up for my forums, forums dot learn systems dot com, to discuss this topic or any other topic you've seen on my channel. It's a great way to engage
with me. And thanks!