Pierogi Powered is going to talk about teaching SOCs, building one. All right, it's my pleasure. Andy Johnson. Thank you. So I'm Andy, or Pierogi Powered if you want to follow me on Twitter, with Carnegie Mellon University. That's how we say it in Pittsburgh. So who am I? I'm a blue teamer; I've always been a blue teamer. I started in the steel industry, then spent a few years in the health care and health provider industry in Pittsburgh. I'm currently building and managing a student-oriented, student-led security operations center (SOC) for Carnegie Mellon University. Outside of work, I'm also the organizer for BSides Pittsburgh. This was its eighth year: 500 attendees, held at a casino. So what am I doing? I am running a teaching SOC. The general idea is that students learn their theory in class, and then they can come to the SOC and apply that theory. You can think of it like an apprenticeship. So how do we get people to come out of the university with applied knowledge if they haven't taken an internship? Even a lot of internships now are themselves looking for people with some level of experience. Can we get people ready for those in-demand internships around the Pittsburgh area, and across the world if they want to reach out? A second key part, based on my experience: I want to teach people compliance, but teach them good compliance. I've had some bad compliance experiences. Compliance drives a lot of our industry, good or bad. Let's use it for good. And finally, Carnegie Mellon is a research university, so can we collaborate with our researchers? We've got production data in a production environment. If you're doing research, rather than build a lab, why don't you come to us? We'll either provide you sanitized data, or if you're trying some sort of active methodology, let's throw it in and see how it runs in our university. So how did I arrive at this student-oriented teaching, or collaborative, SOC?
At a previous security operations center, I saw three things. First, there was a compliance obsession. You come in, you've got your SOC or your SIEM set up with all the default rules in, and there are just bad alerts and missing alerts. The failed-login alert is my prime example. Analysts, all day, every day, were just running through: hey, this account had 5,000 failed logins in five minutes. And then: well, don't we have accounts that lock after five or ten failed logins? Isn't that a mitigation? Isn't this probably just an account that had a password change, with a process retrying over and over? We weren't really working security here; we were just manually chasing down IT issues for the rest of the organization. So it was low quality. And why were we doing this? Well, compliance said we had to monitor for failed logins. But the goal was to monitor for unauthorized access, not specifically for this account failing to log in. We also had mitigations in place. We could do a lot better than that. On top of that, there was siloing. We were just pounding away at these alerts. Were we trying to get better? Were we talking to the rest of the organization? Kind of, but it was really just open ticket, close ticket, on both sides. We weren't getting better. We were sending requests to groups with bad information. Like: hey, database admin, we saw failed logins on a database server. And they'd say: hey, this isn't a cluster, and you didn't even tell me a table; you just said one server. I've got 30 servers and 500 databases. You're not helping me; you're just giving me a job here. And then finally, career pathing. I love the blue team. I love the SOC; that's why I'm in the SOC. I love looking at logs all day. It makes me happy. Not everybody likes that. Not everybody wants to go from junior SOC analyst to senior SOC analyst to SIEM engineer or forensics. They might want to go red team.
They might not even want security. Maybe they want to do IT, or maybe they want to go into compliance. How do you get them out, so they don't just leave the organization? They've got valuable institutional knowledge. How do you get them into these other groups? How do you find new people for the SOC who may already have institutional knowledge within your organization? So that's what I was seeing, and that's what I really wanted to fix: these three glaring issues. They were bringing down morale and were not an effective use of a SOC's time. So, one: using compliance for good. If you're being driven by compliance and it's giving you budget, leverage it to also do good security. Compliance wants to help you; you just have to guide it. It's like any tool: if you buy a firewall, you still have to configure it. The blinky lights only go so far. So map your good security alerts, or good security alert ideas, to compliance controls. For instance, instead of all these failed-login and account-lockout alerts, why don't we look at users who logged in and then stopped at the 2FA authentication prompt (in my case, Duo)? They authenticated halfway, then they stopped. That's potential unauthorized access. That is a better alert. Why did they stop? I mean, who doesn't have their 2FA token with them at all times? Then map it to the compliance controls. If you're doing PCI, that's likely 10.2.4, or NIST 800-171 3.1.8: invalid logical access attempts. Put it on the alert. Then the junior analysts are seeing compliance, and seeing compliance applied in a good way. If you've got a manager and they happen to log into the SIEM (which hopefully you don't let them do; keep them out), they can see: oh, why are you doing these alerts? It maps to this compliance control. The language doesn't exactly match the PCI or NIST or HITRUST language, but they can see that mapping.
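The half-completed 2FA idea can be sketched as a Splunk search. This is a minimal sketch, not the actual rule from the talk: the index names (`sso`, `duo`), the sourcetype, and the field names (`user`, `result`) are placeholders you would replace with whatever your own environment logs.

```
index=sso result=success earliest=-15m
| stats latest(_time) AS primary_auth BY user
| join type=left user
    [ search index=duo sourcetype="duo:authentication" result=SUCCESS earliest=-15m
      | stats latest(_time) AS duo_auth BY user ]
| where isnull(duo_auth) OR duo_auth < primary_auth
```

Any user this returns completed primary authentication in the window but never finished the Duo prompt. Tag the saved search with the control it satisfies (e.g. PCI 10.2.4) so the compliance mapping travels with the alert.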
Or if compliance checks in, you've got this continual this-is-how-I'm-maintaining-compliance methodology going. Every single alert is mapped to a compliance control. It explains what we're doing and why we're doing it, and everybody can see why. We're doing it because we want to maintain compliance, because we need to make money, but we're also doing it because this is good security. So, I use Splunk. I love Splunk; it does a good job. Here's your typical Splunk alert. In the top left you've got your description, you've got additional fields, and then some other stuff; if you want to see more Splunk, head over to the other room, I'm sure they'll give you a lot of information. We've got our procedure. In my case we have a high-level procedure, and then (it's a little obfuscated here) there are links to our knowledge management system. I try to do a graduated procedure: for somebody who has seen something a bunch of times or has a little more experience, there's the consolidated "this is how you handle the alert in our environment," and if you need more information, you go to the expanded knowledge base. That way the people who know what they're doing aren't just skipping the entire procedure, and the people who need a little more help, maybe somebody more entry-level, can see everything they need. So, calling it out a little more: this alert fired because a user had changed some administrative functionality within Splunk, which is the log management system. We have NIST 800-171 3.3.1, or PCI 10.2.2: actions taken by users with root or administrative privileges. So if I'm a new user, or I'm the manager, or I'm the compliance person, I see this alert, and while it doesn't have the exact language I'm looking for in an easy one-to-one match, I see why this alert is being handled by the SOC.
Also, using Splunk, you can run a search like "give me every alert we've handled for NIST 3.3.1." They're all tagged for easy searching. So this is one component of how I've been managing SOCs so that everybody can see compliance applied as we go, like a continuous compliance. The second part is collaboration. Back to those three issues: if you're just coming in, you see your tickets, you close your tickets, you go home for the day. Is that really good? Not really. I had set up, and have set up, collaboration with different IT groups within the organization, so there's an ongoing review. Hey, this is myself, this is a junior colleague, we're from the SOC. Here's every alert we've handled over the past month for database systems. This is what we do, and we're doing it because of compliance. Are we missing anything? You know your system way better than we'll ever know it. Are we doing something that just doesn't make sense? Are we wasting our time? I bring a junior analyst because it allows cross-training and builds avenues of communication. So if you've got a junior analyst who has that knack for databases, or the person who's mainframe-oriented or endpoint-oriented, bring them along, and they can talk to that other group. Maybe when they tire of the SOC they can go that way, or somebody from that group will want to shadow on our side. The goal there is cross-training. They see what we're doing; we see what they're doing. Ideally they're bringing information to us. Like: hey, you were handling all these alerts, but I see you never talk about the hypervisor plane. Or: hey, for endpoint alerts, you've got all of our Windows systems, but maybe not our mobile systems, or not the Windows systems from this environment. Why is that? So they can point out gaps in our coverage.
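That per-control search can be as simple as the sketch below, assuming you tag your correlation searches with an annotations-style field. The exact field name depends entirely on how you tag alerts in your environment; `annotations.nist` and `rule_name` here are assumptions, not a fixed Splunk schema.

```
`notable`
| search annotations.nist="3.3.1"
| stats count BY rule_name, status
```

Run over a month, this gives you a ready-made answer when audit asks, "show me everything you did for this control."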
So this is a heavily redacted report, because if you want to see all of it, apply to Carnegie Mellon University and I'll show you everything. I show some alerts we monitor for. Back to Duo, because we love Duo: unexpected administrative activity, or somebody using a bypass code to skip their 2FA. I show how each maps to the various PCI or NIST controls, because in my experience those groups are getting asked the same compliance questions. Audit doesn't just come to the security people. So: here are the searches we're running; do you see anything we're missing? Here's sample system data we have in Splunk, like administrative activity for the past week. Does that line up with your administrative activity? Or if we were showing user activity, do our numbers match what you see in your panel? If we're only seeing 10,000 Duo users and you're saying, well, we actually have 100,000, or on the other side, you see 10,000 but we're only licensed for 5,000, what's going on here? We can catch these issues and identify possible gaps just by looking at this. Hey, there was an admin login error; do you know why that was? We didn't get an alert, maybe because it didn't hit our threshold, or maybe we just don't look at admin login errors. The Duo admin may say, oh yeah, that was me. Or: you really should have contacted us about that; I don't recall logging in on Sunday. Or: I was fixing an issue, but I never log in on Sunday. That's a very specific user anomaly; I really wish you would have contacted me to follow up. That could have been an attacker. So you fill this out. I've always built custom dashboards based on exactly what the admins want, given them what they want, and ideally they'll return the favor. I also cover sample notable events, which is Splunk lingo for security events if you're using the Enterprise Security application.
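The user-count reconciliation described above can be a one-line sanity check before the review meeting. A minimal sketch: `index=duo`, the `user` field, and the seat count of 5,000 are all assumptions to swap for your own values.

```
index=duo earliest=-30d@d
| stats dc(user) AS distinct_users, count AS auth_events
| eval licensed_seats=5000
| eval status=if(distinct_users > licensed_seats,
    "over license: reconcile with the Duo admin panel",
    "within license")
```

If the admin panel shows far more (or fewer) users than the SIEM does, you've found a log-coverage or licensing gap, which is exactly the kind of thing these review meetings surface.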
So here are four heavily redacted events we've recently covered. You can see two I had done and two that colleagues had taken care of, and in this case I'm only showing you the bypass-code usage, because that's pretty obvious and not any Carnegie Mellon secret sauce. So you can see, for example: oh, that first one was for some reason a regular occurrence; you probably shouldn't alert on that anymore, you're wasting your time. Or: well, I see what the analyst handled, but they didn't do a good job of it. Like, for the second one, if I'm a Duo admin or if I'm the SOC manager, what does "non-anomalous login" mean? That's not a very descriptive comment for how this event was handled. So we're showing that these are all the events we in the SOC have had hands on keyboard and eyes on screen to handle from your system. Did we do it right? Could we have done it better? Are we wasting our time? I've found that once you present information this way, other people want to help you, because nobody wants to see anybody wasting time, for the most part, right? It just allowed me to do a much better job of tuning my system and getting rid of junk alerts, much faster than if I were just focused on compliance. As an outcome of this: with a standard SIEM and SOC deployment, we never detected the penetration test the first year. The red team went wild. It wasn't until they had enterprise admin, which they were purposely allowed to keep just to see if we would finally respond to them, that we got an alert. The next year, after implementing this, we had a detection within hours or days of their activity. They were impressed. Even I was impressed; I didn't think we'd detect them that fast. We didn't stop them, but next year we'll work on orchestration. Continuous improvement, right? And there was a significant improvement in SOC morale.
Everybody on the team, junior through senior, felt like we were doing something, like we were contributing. The junior analysts no longer felt like: I come in, I close my 20 or 30 tickets, I go home for the day. Instead it was: okay, I've got a couple of tickets to take care of throughout the day, and I'm also working on ways to improve the SOC, whether that's Splunk tuning or talking to the other groups you're the point person for collaborating with. Everybody was contributing. This was not "I'm just a junior analyst, I come in, I do my job, I go home." This was all hands on deck; we're all working on this together. And finally, SOC visibility had improved within the organization. It went from "who are these people in the basement behind a locked door?" to: hey, we've got this intake, and people were emailing us suggestions. Hey, I saw this in the news. Or: by the way, I don't know if you noticed, but our system went down over the weekend. I know you probably don't read your incident reports, because who does, but you really should have looked at this one. Or: you didn't contact us about something. We were getting ideas, unsolicited, from a lot of people. Our monitoring was vastly improving; every couple of weeks somebody would say, hey, by the way, this is something you should monitor for from my system, or, did you see this?
As a specific example: doing all of this, we went from not monitoring our mainframe at all to having active monitoring and speaking the same language as the mainframe team. They were impressed, I was impressed, and everybody I've talked to has been impressed. How did you go from "you've got these mainframe logs, but nobody has any idea what they mean" to an active mainframe behavior-monitoring system? So that's what I'm doing. You're all welcome to join Carnegie Mellon University. I'll be here for questions; do you have any questions? And here are ways to reach me. Okay.