One of the important things to remember, and I like to remind myself of this too, is that security is not a guarantee that everything will be secure and nothing will ever happen. Security is about mitigating risk, and managing that risk sometimes means we have to trust others. Trusting no one would make things a lot simpler in some ways, but then interoperability with everything else becomes a real problem. And today we want to talk about ConnectWise. Now, I'm a ConnectWise user, as in a user of their tool ScreenConnect / ConnectWise Control. It used to be called ScreenConnect before they purchased it. That's my only connection to the company, which is also interesting because they didn't send me any notices; I'm guessing this particular division of the company didn't have any compromise, but we're going to dig into this a little.

One of the worst-case scenarios I can think of is not when you've made a mistake yourself. Those at least are admittable and understandable, because you can work out how the mistake was made, learn from it, and get better. The bigger problem is when the companies you bought your tools from make a mistake. And these things happen. This is part of the security model; this is the outside risk. At some point, unless you're going to grind the sand to make your own silicon and write all the software from top to bottom yourself, you do have to delegate trust to the hardware vendors and software vendors that you use. The ConnectWise situation is pretty bad, because I know they're saying it's a ransomware attack, and I believe them to be an honest company, but a ransomware attack is only the beginning of the story: it shows there was at least some gap by which the attackers got in, some crack in their security. This has been referenced in a couple of articles. The first was posted on April 23, 2019: "ConnectWise takes cybersecurity seriously, and we realize rumored and confirmed security incidents create stress and concern for our partners," said ConnectWise CEO Jason Magee. "Once we become aware of an issue, we are proactive in taking steps to resolve it and/or make our partners aware of the risk."

Now, this is not the breach we're talking about. This was about the Wipro attack. In the Wipro attack, the attackers used the ConnectWise ScreenConnect tool, the same tool that we use, to exfiltrate data and things like that. ScreenConnect / ConnectWise Control is a legitimate, perfectly good tool, but it got caught up in this because it happens to be a great remote access tool, and even the bad actors that attacked Wipro thought so too. So that's what they used to exfiltrate data. That kind of put ConnectWise in the crosshairs, and this article, which I'll leave links to in the description, goes on to say that they take security seriously, which is that cliché everybody says. But let's put that to the test, because now let's talk about the breach that happened. Only a couple of weeks after the original article, on May 13, 2019: "ConnectWise hit in EU with ransomware attack." In keeping with their statement that they plan to be transparent, they did release a lot of information, and it's posted here on this site. The May 3rd attack came through an offsite machine that ConnectWise used for cloud performance testing outside the network.
ConnectWise says it has hired a forensics firm to investigate the attack and that steps have been taken to make sure the attack cannot be duplicated. This is important, and this is part of the operational follow-up we want to see: where they give us a debrief and talk about what happened and how it happened. It's interesting that it was a cloud performance testing machine, and I want anyone watching to think about this, because this is where these mistakes are easily made. We spin up demo servers all the time here in our office, and if you don't follow the same operational excellence you would want for a production server, you can easily get lazy and go, "Well, it's just a demo server. I want to see how it performs under this configuration, and I'll put it public facing, but I'm not going to bother following standard security procedures because that would take time, and our goal is to get this performance testing done." My assumption is that's what happened. I was hoping they would give us more detail. Unfortunately, they did not; we can read a little further and it touches on a couple of things, but they didn't give deep detail about exactly what happened.

And this is the part that concerns me the most: "Our investigators confirmed that the ransomware variant used in the attack generally only encrypts files to extract a ransom payment and is not designed or capable of reading, removing, or altering data." We're going to call this out as a lie: if it's encrypting data, it's altering data. That data is altered. That is a fact. Now the other concern, of course, is: was this an attempt to burn down the house to hide the robbery? That's a phrase I kind of like that could be applicable here. And we don't know. The attackers were obviously in. They blame the performance server. I'm sure they have good logging tools, but I'm hoping they give us a forensic debrief, because I want to learn from this, and I hope everyone else wants to learn from it too. One of the things I've disliked, and run into occasionally with these corporate security companies, is that they're very opaque: "We were hacked. It was fixed." Because this is specifically an MSP tool, and to restore confidence, I'm hoping for more. I get it, mistakes are made and something happened, but I would like an exact debrief as to how it happened so we can all understand and learn from it. If these attack vectors are made more public, or are disclosed, that helps everyone. I'm not asking for the personal details of any employee, who exactly clicked the phishing email, or however that may have occurred in there. But I would like the general methodology of the attack, with some detail on exactly how they pivoted through the network, how they were able to deploy the ransomware, and whether or not we should be concerned: did this destroy logging servers? Were there proper logging servers?

We would like to know because, like I said, what if they burned down the house to cover the robbery? What if they went in there and acquired passwords, but then set off the encryption, and now everyone is scrambling to restore, and you're restoring over the evidence? One of the things you want to do forensically is leave the affected machines completely intact and spin up completely new ones based on backups, as opposed to overwriting them. Which procedure did they follow? How do we know the forensic integrity was preserved? And why did they say it "cannot alter data" when technically that's exactly what cryptoware does: alter the data?
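Just to make that "encrypting is altering" point concrete, here's a minimal sketch, purely illustrative and not from anything ConnectWise published, that encrypts a file the way cryptoware conceptually does and shows that the file's hash changes. The file name and contents are made up, and it uses the third-party cryptography package for the encryption step.

```python
# A minimal illustration that encrypting a file necessarily alters its bytes.
# Requires the third-party "cryptography" package (pip install cryptography).
import hashlib
from pathlib import Path

from cryptography.fernet import Fernet


def sha256(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


# Hypothetical victim file, created only for this demonstration.
victim = Path("customer_db_export.csv")
victim.write_text("id,name\n1,example\n")

before = sha256(victim)

# "Ransomware-style" step: encrypt the contents in place.
key = Fernet.generate_key()
victim.write_bytes(Fernet(key).encrypt(victim.read_bytes()))

after = sha256(victim)

print(f"hash before: {before}")
print(f"hash after:  {after}")
print("file was altered:", before != after)  # always True - encryption rewrites the bytes
```

Run it and the two hashes never match, which is the whole point: encrypted-in-place data is altered data, whatever the press statement says.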
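And on the preservation point, here's a rough sketch of the "snapshot the evidence first, rebuild from known-good backups" order of operations, using boto3 since their EU cluster runs on AWS. All IDs, tags, and instance types below are hypothetical placeholders for illustration; this is not ConnectWise's actual runbook.

```python
# Sketch of "preserve the evidence, then rebuild from known-good images" on AWS.
# Requires boto3 and AWS credentials; all IDs below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

AFFECTED_VOLUME_ID = "vol-0123456789abcdef0"  # hypothetical compromised volume
KNOWN_GOOD_AMI_ID = "ami-0123456789abcdef0"   # hypothetical pre-incident image

# 1. Preserve: snapshot the affected volume untouched so forensics can work on a copy.
snapshot = ec2.create_snapshot(
    VolumeId=AFFECTED_VOLUME_ID,
    Description="Forensic evidence - do not delete",
    TagSpecifications=[{
        "ResourceType": "snapshot",
        "Tags": [{"Key": "purpose", "Value": "incident-forensics"}],
    }],
)
print("evidence snapshot:", snapshot["SnapshotId"])

# 2. Rebuild: launch replacement capacity from a known-good backup image
#    instead of restoring over (and destroying) the original machines.
replacement = ec2.run_instances(
    ImageId=KNOWN_GOOD_AMI_ID,
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
)
print("replacement instance:", replacement["Instances"][0]["InstanceId"])
```

The design choice that matters is the ordering: evidence is captured before anything is rebuilt, so restoring service doesn't overwrite what the investigators need.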
Now, the last thing I'll read to you is the response that was sent out to some of the partners. I'm not a partner, so I'm going to read this from Reddit and leave you a link to it. This is the letter ConnectWise sent: "ConnectWise was alerted by our internal monitoring and security systems that some of our SQL databases in our EU AWS cluster were not accessible. We quickly realized that several servers were inaccessible due to a critical failure." Being crypto-lockered, it sounds like. "Our incident response procedures were immediately enacted. Our internal team responded within minutes to assess the situation, began to monitor the environment, and analyzed the alerts. The servers were immediately taken offline and access to the entire cloud network was restricted to a select number of colleagues. Our initial examination pointed towards some type of malware. The cloud team built and deployed new AWS clusters with known-good backup restorations, contributing to the downtime experienced by EU partners. As our investigation ensued, our team discovered that the malware was ransomware. All partner access was restored by 3:16 PM BST, email connector service by 4:20 PM BST, and reporting services were back online by 5:15 PM BST. A third-party forensics firm was engaged to perform a comprehensive investigation." And like I said, it looks like they did follow a procedure: set the affected machines aside. This is all based on the letter they sent, and that letter is posted on Reddit. I did not get the letter; I'm assuming that's because I'm only a ConnectWise user through ConnectWise Control, not an actual partner, and it's not part of our MSP stack beyond that. But it's still interesting, and still a lot to think about.

I'll leave links to all of this so you can do some further reading. I'm hoping they give a full debrief, maybe publish some details. Like I said, I'm not looking to expose anyone's personal details, but to gain a better understanding of how the attack happened and what they're going to do to prevent it. Obviously they're probably going to start following production security procedures on the non-production servers that get deployed for testing; I'm assuming that's one of the mitigations. But transparency from these companies is what helps restore our confidence in them. It's not that I'm saying you shouldn't use ConnectWise or that they're some awful company. But I am saying that if they want to redeem themselves, and all these companies do because their goal is to make money, and this applies to any company and any tool that we use as well, transparency is what helps restore confidence. I'm really doubtful anyone from ConnectWise is going to reach out to me. But this is something I've discussed other times when I've covered breaches, like when I talked about Kaseya. These companies sometimes try to be less than transparent because they're looking at the numbers and looking at the brand, going, "No, no, this will hurt our brand and hurt our numbers." Look, we're all realistic that security breaches can happen; they're worst-case scenarios. What's important is the response: what you're going to do to solve the problem, and how you're going to rebuild confidence in your product by being transparent. It also serves as a learning opportunity. Maybe it was a vector we had not thought of, and understanding those attack vectors helps all of us get better at security. So I always appreciate a good debrief.
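As a side note on the detection step described in that letter: the kind of internal monitoring that notices "SQL databases are not accessible" can be as simple as a periodic connection probe that alerts on failure. This is a minimal sketch with hypothetical hostnames and a stand-in alert function, not anything ConnectWise has described about their own tooling.

```python
# Minimal sketch of a database-reachability probe of the sort that could raise
# an alert like the one in the letter. Hostnames and the alert hook are hypothetical.
import socket
import time

DB_ENDPOINTS = [
    ("eu-sql-01.internal.example", 1433),
    ("eu-sql-02.internal.example", 1433),
]


def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def alert(message: str) -> None:
    # Stand-in for whatever real alerting (pager, email, chat webhook) would be used.
    print(f"ALERT: {message}")


while True:
    for host, port in DB_ENDPOINTS:
        if not reachable(host, port):
            alert(f"SQL database {host}:{port} is not accessible")
    time.sleep(60)  # probe every minute
```

The point is simply that fast detection, which their letter does describe, is the cheap part; the forensic handling afterwards is where transparency matters.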
So I'm going to look forward to that debrief. I don't know if I'll ever actually get a good one, but if I do, I will certainly go through and read it, and possibly share it here as a video if there's something interesting or worthwhile in there. The other possibility, which we're hoping it's not, is that, whoops, someone forgot to turn on 2FA. We've seen this with other companies that have been breached, where they just didn't follow good security hygiene or the general practices that even they recommend. I'm not going to call anyone out right now because I can't remember their names, but there have been a couple I think I've covered on my channel before. Many of the security breaches we've talked about, and how those companies got hacked, have come down to the same thing: less-than-stellar conformance, companies not following their own procedures, and that's often how the breach happens. Which is also a reason they don't like to publish the details, because it's just embarrassing. They're like, "Yeah, well, the sales guys thought 2FA was too hard." That happens sometimes.

All right, thanks, and I'll leave links to all of this below. If you want to hire us for a project you've seen discussed in this video, head over to lawrencesystems.com, where we offer both business IT services and consulting services and are excited to help you with whatever project you want to throw at us. Also, if you want to carry on the discussion further, head over to forums.lawrencesystems.com, where we can keep the conversation going. And if you want to help the channel out in other ways, we offer affiliate links below, which offer discounts for you and a small cut for us that helps fund this channel. Once again, thanks for watching this video, and see you next time.