Tom here from Lawrence Systems. After I did my video yesterday on the Kaseya issue, the breach, and everything we knew at least at the time of publication, I was then a panelist on the Huntress rapid response webinar. I'll be leaving links to that longer video below, but I wanted to talk about some of the technical knowledge that came out of it. It's interesting, because there's a little bit more to the breach than just that CVE I had mentioned in the previous video; there are a few more technical details that I thought people might find interesting.

We covered that in depth, and I really highly recommend that you watch that entire webinar if you are in the IT service industry and you want a deeper understanding of what happened, how it happened, and exactly how to respond, not just the technical details but the larger overview, which was very well covered in that video. But I want to quickly cover the proof of concept that Huntress put together and answer a couple of technical questions. That information wasn't available to me at the time I published my video, so this is kind of like a part two, covering a couple of those minor little aspects that I think are really important.

This is the same blog post I linked to, and it's being updated, so the same links from the other video are still valid. There is, as I said, just more information, and this even came after the webinar. This update is from 7/6/2021 at 7:08 p.m. Eastern: "As demonstrated in our webinar today, Huntress security researcher Caleb Stewart had successfully reproduced the Kaseya VSA exploit used to deploy the REvil/Sodinokibi ransomware and released a proof-of-concept video demonstrating an authentication bypass, an arbitrary file upload, and a command injection."

Why this is so important is because no one knows, or I should say not no one, the threat actors obviously know, exactly how this was pulled off. To understand why the Kaseya VSA system is still slowly being brought online and not just turned back on: first you have to do this, and it's done independently of Kaseya's own testing. Huntress is doing a proof of concept, as in, can we figure out exactly how they did it? Because you can't patch for something you can't reproduce. That's what proofs of concept are for. So they walked through it in the webinar, and this is just a snippet from there showing that yes, they stood up a Kaseya VSA server patched as of July 2021 and were able to take control of it. This doesn't mean this is exactly how the attackers did it, but it's a best effort, based on reverse engineering the logs, to get an understanding.

Now, back to that CVE I had mentioned in the other video: that was part of it. They did talk to the Dutch researchers and confirmed, because everyone who is actively engaged in the cybersecurity world compares notes, that part of it was that CVE. But that was not the full extent; it's not like the knowledge from that alone allowed this entire incident to happen.
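Since the write-up describes the proof of concept as that three-stage chain, here is a purely conceptual sketch of how an authentication bypass, an arbitrary file upload, and a command injection can chain together in a web application. Everything in it, the host name, endpoint paths, and parameters, is made up for illustration; it is not the Kaseya VSA API and not the Huntress proof of concept.

```python
# Conceptual sketch only: how a three-stage web exploit chain fits together.
# The host, endpoints, and parameters below are hypothetical placeholders,
# NOT real Kaseya VSA endpoints and NOT the Huntress proof of concept.
import requests

BASE = "https://vulnerable-appliance.example"

def illustrate_chain():
    s = requests.Session()

    # Stage 1: authentication bypass. A flawed endpoint hands back a valid
    # session without ever checking credentials.
    s.post(f"{BASE}/hypothetical/authBypass", data={"user": "admin"})

    # Stage 2: arbitrary file upload. The now-"authenticated" session drops
    # an attacker-controlled file where the application will later use it.
    s.post(
        f"{BASE}/hypothetical/upload",
        files={"file": ("payload.txt", b"attacker-controlled content")},
    )

    # Stage 3: command injection. Unsanitized input reaches a shell, so the
    # attacker can run commands (here just echoing a harmless marker).
    s.post(f"{BASE}/hypothetical/run", data={"cmd": "echo proof-of-concept"})

if __name__ == "__main__":
    illustrate_chain()
```

The point isn't the specific requests; it's that none of the three flaws alone is the whole story, which is exactly why reproducing the full chain matters before a patch can be trusted.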
There were more steps involved. So yes, there was the known vulnerability, and as we pointed out in that previous video, it had probably been known since around April. We don't have an exact date when Kaseya was notified; we just know when the CVE was created. But it is important to know that that was only a piece of it. There were some unknowns, known only to the threat actors, which were the other methodologies now being reverse engineered by the teams at Kaseya, and by Huntress doing their own reverse engineering to confirm it.

A couple of things came out of that as well. Part of the challenge of working on it was all the functions that did things like delete all the logs, and this was also done on purpose to hamper the investigation. I brought this up in the webinar, where we talk a little more in depth about processes and procedures, but one of the things that's really important is logging to a separate server. Apparently, at least for the MSPs that were able to pull some logs, it was because their systems were only partially, and not completely, destroyed, and that allowed some logging to survive. This is a really important point, because logging is something I probably should have brought up in another video. It doesn't save you; it just gives forensics companies and incident response people a better picture of how to reverse engineer this so it can be fixed faster. So, sorry that you may have gotten hit, but do you have good logs? This is one of those things the threat actors were thorough about: destroying logs, destroying access to the server, and creating a lot of the problems as a result. I'll put a rough sketch of what that off-box logging can look like just below.

Now, one of the things that also came up in the webinar, and like I said, please watch the webinar to get all the details, is that it does look like they were not in there long enough to do the usual exfiltration of data. "Doesn't look like" as in there is no evidence of it; that doesn't mean there aren't other circumstances where maybe they did, but this seemed to be kind of a hit-and-run type of situation. All the other details are in the Huntress blog write-up here, including what you should do in the case of a breach: what if you're an IT company that gets breached, what processes should you follow, and how should that play out?
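Picking up that logging point, here's a minimal sketch of shipping a copy of your logs to a separate box, assuming a Python application and a standard syslog collector listening on UDP 514. The host name is a placeholder for whatever central collector or SIEM you actually run.

```python
# Minimal sketch: keep a local log AND forward a copy to a separate server,
# so wiping the local box doesn't destroy the evidence. "loghost.example"
# is a placeholder for whatever central collector you actually run.
import logging
import logging.handlers

logger = logging.getLogger("rmm-host")
logger.setLevel(logging.INFO)

# Local copy: convenient, but this is what an attacker can delete.
local = logging.FileHandler("audit.log")
local.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(local)

# Remote copy over syslog/UDP: this is the copy that survives a local wipe.
remote = logging.handlers.SysLogHandler(address=("loghost.example", 514))
remote.setFormatter(logging.Formatter("rmm-host: %(levelname)s %(message)s"))
logger.addHandler(remote)

logger.info("agent procedure executed by user=admin")
```

The same idea applies regardless of language or tooling: rsyslog forwarding, a SIEM agent, or an appliance feature all work, as long as the copy lives on a machine the compromised one can't reach with delete rights.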
Those breach-response questions were all a big part of the discussion, so there's a lot of management discussion to go along with the technical details in that video Huntress did. But it's still a really tragic thing that happened here. Ultimately, as I said in the previous video, the same things apply: we need more thorough security audits of the software that we trust, and that does include a full code audit of the Kaseya platform, which is obviously very necessary. I'm hoping Kaseya comes out better from this, because for the attackers to be able to write this exploit required those flaws to be there, and they've probably been there for a very long time. So this code really does need some solid review to become usable again, and that is also what's hampering their ability to get the system back up and running. Kaseya VSA users were told on July 2nd of 2021 to shut it down, and here we are on July 7th and they're still not able to bring their systems back up to start managing and monitoring systems, because the patch is hard to write when you have a lot of code to fix. You want to fix it once, and you want to fix it right. And yeah, this is why these proofs of concept are out there and so important.

I will leave links to all of this down below, and thank you for making it to the end of this video. If you enjoyed this content, please give it a thumbs up. If you'd like to see more content from this channel, hit the subscribe button and the bell icon. To hire us for a project, head over to lawrencesystems.com and click on the Hire Us button right at the top. To help this channel out in other ways, there's a Join button here for YouTube and a Patreon page where your support is greatly appreciated. For deals, discounts, and offers, check out our affiliate links in the descriptions of all of our videos, including a link to our shirt store, where we have a wide variety of shirts and new designs come out, well, randomly, so check back frequently. And finally, our forums at forums.lawrencesystems.com are where you can have a more in-depth discussion about this video and other tech topics covered on this channel. Thank you again, and we look forward to hearing from you. In the meantime, check out some of our other videos.