Microsoft's South Central US data center outage takes down a number of cloud services, but I actually like the headline from The Register better. Of course, they always have great headlines: "Thunderstruck: Azure Back in Black(out) after High Voltage causes Flick of the Switch." So, in short, what happened here: a lightning storm shook the Texas facility all night long. I've got to admit, I love that line.

But here we are, almost 24 hours later. This is the refresh; I've got the latest page right here. This is the Azure status as of Wednesday, September 5th, at 8:44 a.m., and we're still seeing warnings on here, which is better than an outage. They're still working on restoring services, but we have seen some failures in redundancy, and I thought this was kind of interesting, because I believe it could be related to this and to some of the Application Insights being down. What we saw was Office 365 fail, which affected a lot of our clients; people couldn't log in, and it was kind of a mess. It's definitely crazy when this happens, because when you're dependent on Office 365 for email and everything else, there's not really a plan B for that.

So we're feeling the interesting effects of the cloud, and what I figured I'd do here is talk a little bit about the data center. 5150 Rogers Road in San Antonio, Texas, is where this big, giant data center resides. I found the information on here; apparently it's 477,000 square feet, according to this article. I didn't measure, but it's definitely big. I kind of like how nondescript it is: there's nothing more than an address on the building.
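When a provider-side failure like this ripples out to logins, the first question for a small shop is whether the problem is on your end or theirs. Here is a minimal sketch of a dependency check using only the Python standard library; the service names and URLs are my own illustrative picks, not any official endpoint list:

```python
import urllib.request

# Hypothetical list of services an office depends on; substitute your own.
DEPENDENCIES = {
    "office365-login": "https://login.microsoftonline.com",
    "our-own-site": "https://example.com",
}

def _default_fetch(url, timeout):
    # urlopen raises URLError/HTTPError (both OSError subclasses) on failure.
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.status

def probe(url, timeout=5, fetch=_default_fetch):
    """Return True if the URL answers successfully, False on any error.

    `fetch` is injectable so the logic can be exercised without a network.
    """
    try:
        fetch(url, timeout)
    except OSError:
        return False
    return True

def report(fetch=_default_fetch):
    """Map each dependency name to up/down, so you can tell whose outage it is."""
    return {name: probe(url, fetch=fetch) for name, url in DEPENDENCIES.items()}
```

Running `report()` on a schedule at least tells you, when the phones start ringing, whether the outage is on your network or the provider's.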
We pulled back a little bit here to look at the data center itself. It's big, and I don't know why it says Chrysler Group on it; maybe that's some management company that manages the property, I don't know. But this is 5150 Rogers, and it's quite a big facility.

Apparently what happened involves the cooling towers, which any large data center has. They have some on this end and some on this end here, I'm guessing; they're not always super open about what goes on inside these buildings, but these probably cool this side of the data center and those cool that side. The problem wasn't a power outage as such. The problem was that the cooling system was taken offline, and they didn't want the servers to melt. Servers will melt and just destroy themselves if they don't have proper cooling. If you haven't ever been in a data center, the noise in them is a little bit loud. Some of what you're hearing is the fans and whatnot in the servers, but you're really hearing a whole lot of the cooling system trying to remove all the heat these systems generate. That's always the trouble with big servers and data centers like this.

But it had me thinking a lot about the lack of redundancy we have. They use the phrase "too big to fail," and it's worrisome that they didn't have a smooth transition for Office 365. Now, I get it: if you're hosting your own app inside of Azure and you want redundancy, you host that app in another data center, either with another company or in another piece of Microsoft, so you have geographical redundancy in case anything goes down. That's part of the design. What worries me is that Microsoft had so much authentication affected when one data center went down, and they have numerous ones. When you look at the Azure status page, we've got East US, East US 2, Central US, North Central US, South Central US (which is the Texas facility in question here), West Central US, West US, Canada East, Canada Central, and Brazil South. They have a lot of data centers
here in the Americas. And then we have the Azure Government ones, Asia Pacific, and then the Europe ones. So there are a lot.

And I noticed this Application Insights issue. I'm not an Azure expert, so I'm not exactly sure what all Application Insights does; I mean, I read the blurb here: "actionable insights through application performance management." But it appears to be affected everywhere, from Europe on over. So that's what I'm surmising here, and I'm hoping there's a better debrief from Microsoft. I like it when companies give us a "this is what went wrong," like the infamous playbook failure several years ago at Azure (I'm sorry, at AWS), when basically someone typed something and it wasn't validated, so instead of "we're going to take down 10 servers," a zero got added, a hundred went down, and that took down some of the systems over there.

So it's really interesting to think about the scale and size when you talk about a 477,000-square-foot data center, and this is just one, and then our reliance on these things: so many companies using them and relying on them. You're disrupting businesses; you're disrupting the flow of business, because we've become critically attached to these systems, and we don't always have a plan B to work without them.

So it's still interesting, still something to think about, and, you know, a pain. We're mostly a small company, so we're using a lot of self-hosted stuff, but we do use G Suite, so if G Suite were to go down, we would certainly be very lost in terms of being able to send email, which is one of our primary communication systems. And that's where a lot of our clients that are using Office 365 were: they were just stuck.
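The geographical-redundancy idea from earlier, hosting the app in a second region and failing over when the primary dies, can be sketched roughly like this. The endpoints are made up for illustration, and a real Azure setup would lean on something like Traffic Manager rather than hand-rolled client logic:

```python
# Hypothetical ordered preference list of regional deployments of one app.
# The region names are real Azure regions, but the endpoints are invented.
REGIONS = [
    ("south-central-us", "https://app-scus.example.com"),
    ("east-us-2",        "https://app-eus2.example.com"),
    ("west-us",          "https://app-wus.example.com"),
]

def pick_region(regions, is_up):
    """Return (name, endpoint) of the first region that passes the probe.

    `is_up` is an injected health check (e.g. an HTTP request with a short
    timeout) so the failover decision itself is testable offline.
    """
    for name, endpoint in regions:
        if is_up(endpoint):
            return name, endpoint
    raise RuntimeError("all regions are down; no plan B left")
```

The point is that something, whether the client or a traffic manager sitting in front, has to actually know about the second region; redundancy that exists only on paper never gets exercised.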
They were like, "Well, it's down, and we're down with it." Therefore insurance quotes can't go out, processes can't be done, billing can't be done; there's just a plethora of things that stop. So it's a lot to think about.

It's one of those situations where, you know, I'm hoping we get a really good debrief from them, something beyond "it exploded at a data center from a lightning strike." But nonetheless, it's definitely something to think about when you're building your redundancy plans. How do you deal with it? It's probably a good idea to at least have some process in place. Obviously, if you're dependent on Office 365 for email and you can't send email, what are you going to do instead? What's your process for contacting your teams? What's your process for what your staff is going to do instead? That's a little bit of thinking and planning that people should have, because there's this big assumption that the giant cloud service won't go down, that it's redundant across multiple regions. But we still seem to have a single point of failure: this went down and took out things with it that people are very dependent on.

So it's still interesting. It's almost 24 hours later, and they're still just feeling out the recovery. This happened roughly, I think, at 10 a.m. yesterday, and it's 8:50 a.m.
today. And, you know, like I said, things are getting better, but boy, this is taking a long time to restore. Hopefully it makes people wake up, think a little bit more about this, and have a plan B.

Thanks for watching! If you liked this video, go ahead and click the thumbs up. Leave us some feedback below to let us know what you liked and didn't like, because we love hearing feedback, or if you just want to say thanks, leave a comment. If you want to be notified of new videos as they come out, go ahead and subscribe, and the bell icon lets YouTube know that you're interested in notifications; hopefully they send them, as we've learned with YouTube.

Anyways, if you want to contact us for consulting services, go ahead and hit lawrencesystems.com, and you can reach out to us for all the projects that we can do to help you. We work with a lot of small businesses, IT companies, even some large companies, and you can farm different work out to us or just hire us as a consultant to help design your network. Also, if you want to help the channel in other ways, we have a Patreon, and we have affiliate links; you'll find them in the description. You'll also find recommendations for other affiliate links and things you can sign up for on lawrencesystems.com. Once again, thanks for watching, and I'll see you in the next video.