The next speaker is an old friend of ours. She spoke here two years ago. Has it been two years? It seems like it was two years ago. It seems like yesterday. Cheryl, what is your title at the Diana Initiative? I'm a founder. Oh, the founder of the Diana Initiative. And it is absolutely my pleasure to introduce to you Cheryl Biswas.

Thank you. Good afternoon, everybody. Thank you so much for coming to my talk. I hope you are having a great con so far. So yes, my daylight name is Cheryl Biswas. You may know me on Twitter as "encrypted." And I'm going to give you a little talk today about how complicated patching can be.

So a bit about me. I work as a strategic threat intel analyst with a, quote, major bank, unquote, in Toronto, Canada. We've only got five of them. And the views that I express here today are mine alone and not those of my employer. As Ming said, I am very proud and honored to be a founder and a member of the Diana Initiative. We're in our third year, and the conference is running over at the Westin. If you have been able to attend it, I hope you're enjoying it. If not, we'll see you next year. We celebrate women and diversity in security. And it's getting better all the time.

OK, so let's get started. Why patching? Well, because it is a pain point we all share. And it is a pain in the ass. And it's also a necessary evil. It's also a highly contentious and divisive issue for which there are no clear answers. Now, Bruce Potter, if you know him from ShmooCon, ranted rather eloquently about this and said, just patch your shit. But it really isn't that simple. It is easy, however, to throw the phrase around and to call out offenders, because patching, right? It's just common sense. Why wouldn't you do it? But that right there is why we need to have this talk, because no, it isn't that simple. And playing the blame game is not going to fix the problems here.

Because this is about the stuff that we don't know. It's about concerns and considerations that keep stuff running in areas where we don't have familiarity. I'm talking about ICS, medical equipment, legacy systems. If they aren't running, we're not going to have critical infrastructure. We're not going to have things that we take for granted and depend upon. So it's about the real costs of trying to do the right thing when it goes wrong. Terribly, horribly wrong. And that is the gap between us and the business.

So I don't know if anybody here listens to the Defensive Security Podcast, but on the anniversary of WannaCry about a year ago, the hosts, Jerry and Andrew, asked why WannaCry was still hitting companies. Now it's 2019, and WannaCry, and the bug behind it, is still a real threat, because not everything got patched. And that's the reality. Patching gets ranked in with all of the other stuff that enterprises are contending with. And it's not up to us.

So I like what Alan has to say here. Patches are important, complicated, and largely misunderstood. Who here would agree with that statement? Yeah, thank you. So we know that patching is, of course, a fundamental best practice. Why aren't we doing it right? Or more importantly, why aren't we doing it at all? And this is the talk, again, that we need to have to address some of the harsh realities, because at every level and in every sector, I've been hearing the same symptoms of what is essentially a worsening condition.

So let's start with some pain points. It's messy. Directions are vague, or they're missing, or they're useless. Do you want misconfigurations?
Because that's how you get misconfigurations. It can feel like the worst game of hot potato ever, because everybody's concern is nobody's responsibility. You get talked to until you are out the door completely. And that's, of course, why accountability is an issue. Not my circus, not my monkeys. Thank you. Does this one sound familiar? It seems a lot more like ongoing crisis management. But when your patching strategy is so broken, nobody wants to take on the added pain of process review, because change is hard.

Survey says patching sucks, right? Well, let's take a look. These are some of the numbers that came out of a study done by the Ponemon Institute. And realistically, who could commit the resources as they are laid out here? Is anybody here responsible for patching at your organization, or involved in it? Are you able to achieve what's laid out here? Is this kind of like a wish list? Yeah, OK.

So I have three words: prioritize, optimize, automate. But that is easier, much easier, said than done. Do you remember the good old days when life was simpler and just revolved around Patch Tuesday? It was all wrapped up into one neat package. Not anymore, because now we have to deal with staggered releases and out-of-band releases. And then there are previews. And then there is supersedence. How do you manage the downtime with the need for uptime? And how do you get all the stakeholders informed and signed off? And there are so many of them. I'm working at a bank; I don't even want to go there. And how do you even know which release you have?

So we're damned if we do, because yes, stuff goes wrong. And things break. You just have to ask anybody who went through the joys of what Spectre and Meltdown did to their systems. But SwiftOnSecurity has pointed out many, many times that patching needs to be a given. So it often comes down to this: is the cost for that ounce of prevention one that you can live with? And then we're damned if we don't. And while it may seem like the cure is worse than the disease, WannaCry was a huge wake-up call, not just for those who didn't patch regularly, but for those who actually thought they were.

So there are some dangerous assumptions out there about attacks only being targeted. And why would an attacker bother with me? Well, it's not like that. Yes, they're looking for low-hanging fruit and easy pickings. It's not just about targeting. It's about systems that are vulnerable. And everybody has vulnerable systems. So you're not a target, but a victim. Don't be a victim. And based on what we've learned, how can we work with our organizations to make this process a less bitter pill to swallow?

OK. WannaCry, EternalBlue, Apache Struts. Not once, not twice, but three freaking times. And just ask Equifax. And then, of course, let's throw in Meltdown and Spectre, and Total Meltdown and Spectre-NG. And now, because who needs to sleep, we've got BlueKeep, right? Is that dumpster fire hot enough for you? But on a serious note, we've learned hard lessons from WannaCry. And it still persists, because, of course, not everything is patched. And now we see it being leveraged frequently as an exploit. And I can say that from working in threat intel and looking at what our adversaries are putting together as more complex, multi-staged attacks. And that raises the issue of how older, unpatched systems in municipal governments, libraries, universities, not just hospitals, are being targeted and shut down by ransomware at an alarming rate.
2019 has seen the return of ransomware at high impact. Municipalities are getting hit faster and harder. And if you haven't been following the news, there are far more virulent and damaging strains out there. City services get shut down for weeks. And when this happens, you don't know what you've got, quite literally, until it's gone. Try closing a house deal when you can't. And with that $17 million price tag attached to it for Atlanta, and Baltimore with an $18 million price tag, more and more municipalities are opting to pay the ransom rather than live with the downtime. And the sad fact is that municipal IT departments are underfunded, they're understaffed, the equipment is typically older, the software is outdated, and it is terribly unpatched and out of date. So what have you got? Low-hanging fruit.

And we know from painful experience that patches are not perfect. Things will go wrong, and that is why you need to have a process that is complete and tested in place. Then there are those patches that don't quite get it right. For example, last year was not a banner year for Cisco. Who here uses Cisco equipment? Because you probably know what I'm talking about. Yeah, they had to issue a second patch for their Adaptive Security Appliances to address, quote, further attack vectors and features, unquote. It was a hot mess. And that was after issuing an initial patch for what was a critical remote code execution vulnerability in their SSL VPN feature. And there were nation-state adversaries pretty much lined up around the corner helping themselves to this.

Well, this talk would not be complete if we didn't address Spectre and Meltdown and speculative side-channel vulnerabilities, because nobody saw this coming. And that is the whole point. Twenty years ago, this was not an issue because we had no idea that this could happen. We did not understand the technology we were making as fully as we thought we did. And so the generation that came after got to deal with the unexpected problem. And it is a doozy. OK, so it's still ongoing. And we're going to have to live with this fact every time we innovate and create something to put out there. For as much as we may want to take care in building security around it, we have to allow for the fact that there is the unknown. And thanks to Spectre and Meltdown, we actually know how severe that unknown can be. We also know it now with BlueKeep. And we've seen it, bless you, with various Linux libraries. It's going to haunt us. Every month brought new drama and breakage as we tried to address these. The patches broke more than they actually fixed. And we need to be prepared to expect more of the same to come. So the question is, what have we learned from these painful lessons that we can carry forward and use to build in resiliency?

So let's talk about the warnings regarding side-channel speculative execution attacks with this latest CPU doomsday attack. Yes, it is affecting Windows machines running on 64-bit Intel and AMD processors, Intel CPUs made between 2012 and today. That is a huge amount of vulnerability right there. So an attacker could access passwords, sensitive info in the operating system, and kernel memory. Stuff you're technically not supposed to be able to get to. Assumptions. So a SwapGS attack leverages the SwapGS instruction. It's an under-documented instruction that makes the switch between user-owned memory and kernel memory. That's pretty complex, but it's also highly severe. We can't expect everybody to know about this.
And yet, the impact is on everybody. And how do we rope everybody into something like this without creating unnecessary fear and chaos around it? Well, apparently Bitdefender was working on this for a year before the announcement came out, because it bypasses all known mitigations. I'm going to let that sink in. There's nothing you can do. Holy shit. But we really don't want to run around saying to the company, oh my god, the sky is falling, we're all going to burn down. That's not how we want to solve these kinds of problems, but we have to deal with them. And we have to get management buy-in to be able to properly and rapidly address this and secure the systems we have in place, rather than have it all thrown into the regular patching cycles. There's no guarantee, according to Bitdefender, that an attacker who knows about this, who found it out, hasn't already exploited it. That's a crux of the issue there, too.

So Jerry Bell is a connoisseur of dumpster fires. He's also a host of the Defensive Security Podcast. I get a lot of good stuff out of that one. He summed it up very well: the complexity of patching this thing correctly is going to provide years of quality post-exploitation privilege escalation. He was referring to Spectre and Meltdown, but I thought it lent itself very well to this as well.

So are we really ready for what can and will go wrong? Question: do you have rollback points? Do you have backups? And have you tested your backups? I'm going to tell you quite frankly, I have spoken to a lot of places and a lot of companies. I've done security audits. And they don't test their backups. And I'm not going to say anything else about that.

So the hard lessons that we have learned, thanks to Spectre and Meltdown, are about the price that we were willing to pay for speed and performance in a buy now, pay later arrangement. It's business. It's profit-driven. It's the market, driving to the finish line before anybody else. We didn't read the fine print at the time, and now we've been served the bill. And this is going to be a constant refrain that we're going to have to contend with, about uptime and innovation. Just saying, but I think we need to question the process when doing things right gets in the way.

All right, so how does this extend to the ever-expanding realm of BYOD and mobile policies at workplaces? Here are some statistics that you should mull over. What are your people bringing onto your network? And how do you trust your average users, your office workers, to maintain their technology and keep the patches up to date on what they're putting on your systems?

Oh my god, so many things. Why does everything we make have to connect? But it does, because that race to make the next new connectable consumable has left security in the dust. And passwords are default if we're lucky, hardwired if we aren't. And updating the glitches in the firmware? Well, it's pretty much impossible. So from industrial webcams, to hopelessly broken SOHO routers (botnets unite), to, I'm not kidding, the family crockpot, we are talking legions of doom. Sorry, but set it and forget it is definitely not a security best practice. But the fact remains, we keep building stuff we can't fix, especially consumer IoT. It is disposable. So try asking your neighbor to change the default password settings on their cable modem. I have. It did not go well. And tell me, how do you patch the firmware on your IoT stuff?

So I don't know if anybody's watched the Die Hard movies, but I rather love the fire sale scene.
Well, welcome to the realm of ICS and SCADA. These are utilities, water, the power grid, transportation. It's mission-critical stuff that just happens to run our day-to-day lives. And it's more than just that. Operational tech is proprietary. And it goes down hard. Does anybody here follow Dragos, or Chris Sistrunk? They share some really great insights into the realities of trying to maintain and secure this environment. And there really is a grim attitude about how this stuff doesn't get patched; instead, it gets run to failure. Thirty and forty-year-old equipment, you don't patch it. You leave it alone until it dies. Gone are the days, though, when these systems were kept separate and secluded from the internet. So are we aware of the operational technology that lives within our enterprise environments, or where we might be at risk from our trusted third parties? Culture change doesn't have to be a Band-Aid solution, but we need to talk about how to make it happen.

And it's really just this basic: OT is not the same as IT. And 20 years ago, they just didn't see this coming, because security was not something that was baked into these systems. They were pretty much kept segregated. They didn't have to then. But then things changed and migrated, and stuff got connected when that was never supposed to happen. They didn't plan for it. And that's what makes patching these crucial systems that much harder. We have to understand the differences and then work within those specialized requirements.

And that attack surface just keeps on growing. Enterprise technology, or ET, is about sensors working overtime. It monitors what we rely on to get business done. So it runs the gamut: think of refrigeration systems in shipping containers. There are braking and fuel systems in transportation. There are ships and ports. All of this stuff is automated and connected. It relies on technology. And the things that we take for granted have become shiny new targets through technological enhancements. So innovation and automation are emphasized to increase productivity and efficiency, but at the ongoing cost of security.

So how the heck do you patch a container? Well, according to an article I read on NullSweep, containers are immutable, in the sense that they're not really designed to be patched the way we conventionally think of it. Conventional does not apply here. Instead, you redeploy an updated container and you destroy the old one. Well, guess what? Old habits die hard. And so we're carrying the same bad attitudes and bad habits of "if it ain't broke, don't fix it" and "why mess with a good thing" over to the cloud. And is there any way to identify the containers that could be running vulnerable or out-of-date instances that do need to be patched? So the best solution is to try baking patching into a solid DevOps cycle with automated integration and deployment, which sounds lovely. I don't know if you can actually make it happen, but it's a starting point.
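To make that redeploy-instead-of-patch idea a little more concrete, here is a minimal sketch of what the deploy step could look like, assuming the standard Docker CLI is available; the image and container names are hypothetical placeholders, not anything from the talk or from the NullSweep article.

    # Minimal sketch: "redeploy, don't patch" for a single container.
    # Assumes the Docker CLI is on the PATH; the image and container names
    # below are hypothetical placeholders.
    import subprocess

    IMAGE = "registry.example.com/webapp:latest"   # hypothetical rebuilt image
    CONTAINER = "webapp"                           # hypothetical container name

    def docker(*args: str) -> None:
        """Run a Docker CLI command and stop if it fails."""
        subprocess.run(["docker", *args], check=True)

    def redeploy() -> None:
        # 1. Pull the rebuilt image that already contains the patched packages.
        docker("pull", IMAGE)
        # 2. Destroy the old, out-of-date container instead of patching it in place.
        docker("rm", "-f", CONTAINER)
        # 3. Start a fresh container from the updated image.
        docker("run", "-d", "--name", CONTAINER, IMAGE)

    if __name__ == "__main__":
        redeploy()

In practice, that would just be the deploy stage of a CI/CD pipeline, triggered whenever the base image is rebuilt with updated packages; that is the "baking patching into DevOps" part.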
All right, let's talk about something that is really a matter of life and death. And I know there are some great people, like Josh Corman and I Am The Cavalry, who have repeatedly brought this up, or my friend Yelena, who sees it from working as a pediatric nurse in the Netherlands. This is specialized critical infrastructure for critical care. Downtime for patching versus downtime because the patch broke something is a very real issue in medical.

And from pacemakers to insulin pumps to MRI machines, there are some very hard choices to be made around how you patch and maintain this equipment. And it's something that we do not understand in terms of the hospital cycles, how the equipment is used, and the people who are on site using it. So let's talk about learning to understand that perspective better and being able to work with the people involved to make some real headway. The prognosis is grim, but we know it. And when medical tech gets hit with ransomware because the operating system is unpatched and unsupported, well, then the existing policies in place need resuscitation.

This is an example of a pacemaker vulnerability. It was updated last year. But look at the vulnerabilities as they're circled. If this was a Windows box or an Active Directory issue, we'd be all over it, because there are authentication and encryption problems. And how about the fact that somebody just standing close to it could exploit it?

Eighty of England's 236 National Health Service trusts were infected by ransomware, as well as more than 600 of their National Health Service organizations, like general practitioners. The National Health Service assessed the cybersecurity level of 200 trusts, and every single trust failed. Some of them had failed to apply crucial patches, and WannaCry leveraged those unpatched, outdated systems. So why are there no patches? It's because the systems cannot take being patched. They're too far out of date, and there's no funding to be able to update the equipment and the software to the level it needs to be. There is, in fact, no easy solution here.

All right. Does anybody here work in the wonderful world of mainframes? This is the stuff you don't want to have to think about patching, because it's high availability. It runs super fast, like 1.1 million transactions per second, and it's used in things like global finance for that reason. There is no such thing as downtime. That's why these are so carefully guarded and relied upon. I started out in a mainframe shop, and I can tell you there are scheduled outages. Why? Because without those scheduled outages, you will have unscheduled outages, and then very, very bad things happen.

So not only can you, but you should, hack a mainframe. I don't know if anybody knows these two gentlemen, Bigendian Smalls and Soldier of Fortran. They're two really great guys. And they have made it their mission to bring this news to the world, for good reason, because these wonderful behemoths are vulnerable, too. But you don't hear about it. Why? Because IBM likes to keep those details very secret. And when they issue notifications about integrity vulnerabilities to their users, they don't give descriptions. We don't talk about that. Now, I respect these two guys, and they actually offer courses like Evil Mainframe to teach people and grow a new generation of mainframe pen testers and hackers.

So what do you do when you cannot patch? Patching is a best practice, but it's not the only one. Let's talk a little bit about misconfiguration. Misconfiguration has been increasingly at the root of some evil recently, as we know. But that is a human vulnerability and not a system one. So network setup, firewall rules, ports left open, those are human failings. And they're over and above patch management. But when they're done correctly, they can and will block attacks. We've had numerous out-of-band patches to deal with over recent years. Do they cause more problems than they fix?
Anybody have an opinion on that? If you have to do out-of-band patching, does it actually create more issues than it solves? And who here understands what we mean by sequential patching? Again, thank you. It's this terminology that trips us up when we try to have these talks with the C-suite or other departments. Who here has an actual test environment for patching in place to use? Oh, God bless you. Who here wishes that they did? Can't put my hand up. Thank you.

We need testing and rollbacks as countermeasures, right? Because stuff will happen, and it will go wrong. You do not test on the prod systems. And everybody who has lived through a bad Patch Tuesday and had to roll it back, oh my gosh, yeah. And the customers and the phone lines are on fire, yeah. People freak out if they can't access their email and their Outlook. And that's just the tip of the iceberg, because if that's gone, then there's worse stuff to follow. Now, Windows has had a succession of bad Patch Tuesdays. So you know you have to prepare for this eventuality. And when the stuff breaks, customers go down. And that is a direct impact to the bottom line. And then you have the C-suite calling you and breathing down your neck, and it is really, really a bad day. For business, however, this is the hard argument in dollars and risk against patching on anybody's schedule but their own.

Now, many security professionals think that delays in patching are largely due to the fact that we don't have a common view of applications and assets across security and IT teams. So this is about what we're seeing versus what we are not seeing. Because you cannot patch what you don't know about. What endpoints do you see? And what systems and services are catalogued? I can tell you, from experience, the nightmare it is to try and find vulnerable systems without a proper asset management program in place. And the C-suite wants answers now, because they got a notification of a CVE that was rated, like, a 10. And nobody wants to help you, because you are bothering them, saying, do we have this in the system? Because asset management isn't showing it, but I need to find out, like, stat. And you can't find the answers to the questions when you need them. Well, asset management, real asset management. I know, eh? Call me a dreamer, please. Thanks. Yes.
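That CVE fire drill really comes down to one question: do we run the affected thing anywhere? Here is a minimal sketch of that lookup, assuming a hypothetical asset inventory export with hostname, software, and version columns; the file name, the product, and the version list are made-up illustrations, not a specific advisory or tool.

    # Minimal sketch of the "do we have this in the system?" lookup.
    # Assumes a hypothetical inventory export (CSV with hostname, software,
    # and version columns); the product and versions below are illustrative only.
    import csv

    AFFECTED_SOFTWARE = "ExampleApp"                     # hypothetical product
    AFFECTED_VERSIONS = {"2.3.31", "2.3.32", "2.5.10"}   # hypothetical versions

    def find_exposed_hosts(inventory_path: str) -> list[str]:
        """Return hostnames running an affected software version."""
        exposed = []
        with open(inventory_path, newline="") as f:
            for row in csv.DictReader(f):
                if (row["software"] == AFFECTED_SOFTWARE
                        and row["version"] in AFFECTED_VERSIONS):
                    exposed.append(row["hostname"])
        return exposed

    if __name__ == "__main__":
        for host in find_exposed_hosts("inventory.csv"):
            print(host)

The point is not the script; it is that without a maintained inventory behind it, there is nothing to query when that call from the C-suite comes in.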
All right, mitigations. There are some good things you can do to get around the fact that you can't patch. For example, virtual patches can and will help you. But to tell you the honest truth, a proper asset management system is going to save your ass time and again. So security is not a band-aid solution. It's a process in line with the needs and the objectives of your organization, not somebody else's, not the organization represented by the Ponemon Institute, but your specific organization. And patching is an integral part of that security process. If we want to move past the pain points, then we need to be able to build our case to the business, to be given the resources, the time, and the acknowledgement required for us to do this well. What it comes down to is, if we want to help management hear that message, we have to write a new prescription for patching. And it's time for a second opinion. That's it. Thank you all so very much for coming. And if you've got any questions, I can take them. Thank you.

Any questions? Sure. Okay. So the question is, why didn't I talk about security software architecture review? Unfortunately, it's an area that I haven't worked in, so I don't have the familiarity. It's been kept really segregated where I have worked in the past. Yeah, I understand it, but only at a removed level. And I have heard it talked about when you're doing DevOps and software-defined life cycles, and being able to incorporate it at that stage, but I haven't had the practical experience to properly talk about it. They're not doing it. I like it. I agree. Yes. Okay. Yes, it is a big issue. Thank you very, very much. I appreciate the confidence and the support. Okay. That's my hope with the dumpster fires, too. Every slide. Thank you. Yes. Hi. Yes. Certainly. Absolutely. Thank you. Thank you. That's all yours. Thank you very much.

Are there any other questions, or does anybody need a slide to bring back to rally the team with? I have a fine selection of dumpster fires for you. Yes. The question is, how do you explain things to management to get buy-in, and is it just dollars and cents and risk? It is dollars and cents, and it is risk. If you cannot explain it to them in terms of business risk, and business risk to that specific organization, you will be sent away. Why? Because I got sent away. I know. Thank you. Yes. That is right. Yeah. So the point made is: show them how this will impact not just their bottom line, but their own security in their job role. Yeah. Any other questions or comments? All right. Thank you so much. Have a great rest of the con. Thank you. Okay.