All right. Well now we're going to go to the next presentation: Ideas Whose Time Has Come. CVD, SBOM, and SOTA. It's Katie Trimble and Art Manion in the house. All right, here we go. So let me go ahead and introduce our next speakers. Katie Trimble is Section Chief, Vulnerability Management and Coordination, for the U.S. Cybersecurity and Infrastructure Security Agency, CISA, Department of Homeland Security. As Section Chief, she leads the department's primary operations arm for coordination of the responsible disclosure and mitigation of identified cyber vulnerabilities in control systems and enterprise hardware and software used in the 16 critical infrastructure sectors and all levels of U.S. government organizations. We'll expect her to go through each and every level of those today here in the next 30 minutes, so it's almost like speed dating, I think. Art is the vulnerability analysis technical manager at the CERT Coordination Center, part of the Software Engineering Institute at Carnegie Mellon University. He has studied software security and coordinated responsible disclosure efforts since joining CERT in 2001, gaining mild notoriety for saying "don't use Internet Explorer." Hopefully we don't have anybody from Microsoft in the room here. That's okay. There we go. And "replace CPU hardware." So it's my pleasure to introduce you to both Katie and Art. We have to use the mics for the recording, so we'll be up here. All right, so after that wonderful intro, I don't feel like I need to actually intro myself, but I will anyway. So I am Katie Trimble. I am responsible for the vulnerability disclosure programs within the Department of Homeland Security, CISA. So there are several programs that fall under that you may be familiar with. I am the responsible party for the MITRE CVE program. I'm on the MITRE CVE board and I am the program manager for that.
Also the NIST NVD program, I am the program manager for that. The Carnegie Mellon CERT/CC program, I effectively am the sponsor for that, as well as the ICS-CERT vulnerability handlers. I do that work as well. So all things vulnerability within the Department of Homeland Security. All right. So we're here to talk about vulnerabilities in election systems and some of the ways forward that we see this going. So first, we want to kind of level set and say: what is an election system vulnerability? So election systems, I feel like we're very unique, but we're not unique. 16 critical infrastructure sectors. Every sector has vulnerabilities. Everyone has vulnerabilities. They exist in everything all the time. So we categorize three different kinds of vulnerabilities in election systems, which are the same that we categorize in every other sector as well. We're talking about the software, which would be voting software. We talk about the hardware, which would be voting machines. And then we talk about the digital services, and that's things like websites and voter databases. So when we look at those, a vulnerability is a flaw in the code that allows an adversary or an attacker to do things that were not necessarily intended by the developer. Okay, so what is CVD, coordinated vulnerability disclosure? So this is sort of our flow chart for what we do within the Department of Homeland Security. The Department of Homeland Security has been doing vulnerability management for technically about 30 years. Well before the department ever existed, we've been doing vulnerability management through our partners at Carnegie Mellon. Separate to that, MITRE CVE has existed for 20 years. We've been doing that. We celebrated our 20th anniversary at Black Hat this year; we had a big party, it was great. So how this actually works within the department: we typically look at product vulnerabilities.
Websites are new to us, digital infrastructure is new to us; software and hardware are not new to us. We've been doing this for many, many years. How this works: researchers bring vulnerabilities to us. We rely very heavily on the research community. We are not really able to go out and find vulnerabilities ourselves; there are a lot of legal and liability reasons why we don't do that. But if a researcher brings us a vulnerability, we accept that vulnerability. At no point does that researcher have to give us any PII or identifiable information at all. They don't have to give us their name or email address, contact information, anything. Just the description of the vulnerability and a proof of concept if you have it. We then take that information and contact the vendor. We go through and we work with the vendor and the researcher to validate the vulnerability and make sure that a schedule is created. We start our clock at that point; usually it's a 45-day clock. We work with both parties equally to make sure that all the needs are represented. We encourage vendors to create patches. And then once we have that kind of in place, we say, all right, we're all going to publish at the same time. And we do that down to the minute, usually. So the vendor will release their security bulletin and hopefully their patch. The researcher will release their public findings. And we will release a technical advisory or a vulnerability alert. And that's it. We close tickets. We don't hold on to any information, ever. If I had to summarize what I do: my job is to close tickets. So we're not hoarding vulnerabilities. We're not keeping anything close hold. The only time that we may hold on to anything longer than, say, a typical 45-day window is if the vendor needs more time to actually fix that vulnerability. And as long as the vendor is being responsive and it's a reasonable request, we will delay that public disclosure. We see it with industrial control systems a lot.
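The 45-day clock described here is simple enough to sketch. This is a hedged illustration of the scheduling logic only, not CISA's actual tooling; the function name and the extension parameter are made up to model the "vendor needs more time" case:

```python
from datetime import date, timedelta

# Default coordination window described in the talk; extensions are
# negotiated case by case (e.g., for safety-critical systems).
EMBARGO_DAYS = 45

def planned_disclosure(reported: date, vendor_extension_days: int = 0) -> date:
    """Planned public-disclosure date: report date + 45 days,
    plus any agreed vendor extension."""
    return reported + timedelta(days=EMBARGO_DAYS + vendor_extension_days)

print(planned_disclosure(date(2019, 1, 1)))       # 2019-02-15
print(planned_disclosure(date(2019, 1, 1), 30))   # 2019-03-17
```

Everyone, the vendor, the researcher, and the coordinator, works toward that same date, which is why publication can land within minutes across all three parties.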
So if you have a safety system in a nuclear power plant, for instance, you don't want to drop that at 45 days. You want to make sure it's going to be fixed before we release that information. Okay. So just to add a little bit to what Katie said. Two points. First, the part where there's a report to DHS or to the CERT Coordination Center. We have heard concerns like: is that report anonymous? Can you keep me anonymized when I report? The vendor will figure out who I am and yell at me, or the government will yell at me, or something like that. To add to that, are folks familiar with the vulnerability equities process, VEP? The public documentation for VEP says vulnerabilities reported through this route are excluded from the VEP process. So to the extent that you believe the public documentation, these are not VEP-able vulnerabilities. DHS is public safety. We are public safety. This is to get things fixed. We don't have red team capability or authority or anything like that whatsoever. Also, this is a great process; we fully endorse it and follow it as well. Second, the part where "pass to vendor" happens is a two-way street. The vendor has to be able to catch the report and be willing to engage in the rest of this process. And over many years, decades in fact, many vendors have built mature processes and can receive and process reports and fix their software. Other vendors are new to that game and are not doing a great job. In some cases, they can't even receive the report. You'll hear lots of stories of: I tried social media, I spammed a couple of email addresses, I made a phone call. We occasionally send certified U.S. mail to officers listed in SEC filings, which usually works. But it's incumbent upon vendors to be able to receive reports and do their part of the process. All right. So moving on to why we do this. We are of the perspective that vulnerabilities should be in the light of day. They should be publicly disclosed.
You can't have a conversation if you don't know what you're talking about. So we believe that all vulnerabilities should be tagged with a CVE ID. CVE tags, all they do is provide a unique identifier to say that vulnerability exists, so that you and I can have a conversation about what that vulnerability is, and we can move forward from there. It's not a fix database. It's an identification database. So we believe that vulnerabilities should be in the light of day. We understand that sometimes an individual vulnerability is a painful process. Each disclosure is painful. But for herd immunity, and because we're thinking about the long term of the ecosystem, these things get easier as you go forward. They get better. They get more routine. By keeping the elephant in the corner, what you're doing is making it more difficult for everybody. When you bring the elephant over, give them a name, give them a drink, have a nice dinner, then things get easier. But if we continue to resist, it's actually bad for everybody. So we strongly believe that. That's the ethos that we live by every single day. We're thinking about climate versus weather. We understand it's uncomfortable. It's super uncomfortable for everybody. With coordinated vulnerability disclosure, there's always the chance... you know, when everyone is 80% happy, that means that people are still 20% unhappy. We recognize that. But we think that for the good of the ecosystem, we need to be the driver that moves everything forward. Okay. So I kind of talked about what a vulnerability is earlier. This whole system exists for product vulnerabilities, and this is actually the flow chart that we use whenever we're working product vulnerabilities within the department. To date, from January 1st until June, we coordinated 7,708 vulnerabilities for disclosure. That's just this year alone. Last year we coordinated 14,000 vulnerabilities in IT systems and 800 vulnerabilities in ICS systems.
So we are working a lot of vulnerabilities all the time. I have about nine federal employees, three contractors, and about 50 consultants, across seven states. So we're rolling. This process is designed for product vulnerabilities. We're changing our scope a little bit. We typically don't accept website vulnerabilities; those are considered misconfigurations, and we can't be the internet police. So we provide best practices, and we try to push people to design and run their websites in a more effective way, but we don't run the same process. When it comes to election systems, we've designated that as critical infrastructure, so we've had to expand our scope, and we'll use the same process for vulnerabilities in election systems and election digital infrastructure. The difference is it won't get a CVE at the end. So how does this work? The researcher contacts us. It either goes to ICS-CERT or it goes to CERT/CC. We do the collection, the analysis, and the coordination at that time. We contact the vendor. We make sure DHS knows what we're doing and what's going on. We work with both sides through that first process I showed you to make sure that everything is agreed upon. The entire top swim lane happens in embargo. Nobody talks about anything until everyone talks about everything. It's like a standoff. Once we all agree and we're all ready to go, we all publish at the same time, usually within minutes of each other. At that point we reach the bottom swim lane. The public disclosure happens. We've already reserved a CVE and we're ready to go on that. The CVE will be populated. We'll publish an advisory or a technical alert. The vendor publishes their advisory and the researcher publishes their information.
That information then flows over, that CVE flows over, into the National Vulnerability Database, which is where it gets the severity score and the additional information. Just to clarify, people often ask what the difference is between NVD and CVE. Consider CVE to be the dictionary and NVD to be the encyclopedia: a bigger description of the same definition. Yeah. Next. All right. So, right. Coordinated disclosure, a somewhat known thing. We're heavily involved in this public safety issue. So say I'm an election system vendor, a voting machine vendor. How do I do this? I'm being advised I should do this, perhaps. The upshot is, it's been talked about for the last three decades, even though people don't agree on exactly how many days you should wait. There are ISO standards for this, ISO/IEC 29147 on vulnerability disclosure and ISO/IEC 30111 on vulnerability handling. You have to go buy them, unfortunately; you can get the old one for free, but we're working to make the new ones free anyway. There are international standards that governments around the world put into things like procurement and acquisition language in their countries and say: hey, you want to sell something to me, multinational vendor? Great. Follow this and let me know when you do. So the upshot is, this guidance is actually pretty decent. I know firsthand, so following it is not really a horrible idea. I can't say that for all ISO documents, but this set is pretty good. Furthermore, there is regulation in the U.S. starting. This is in the healthcare medical device sector. The FDA has postmarket guidance that actually points to the ISOs and says medical device vendors are going to do coordinated disclosure. So this idea is here. I mean, it's here. It's in writing. It's in law, more or less, at this point. We're going to move on to our other two acronyms. SBOM, software bill of materials. People have heard about this. It's all the rage in my circles recently, but I'm in a very small circle.
So, the way I look at this is very basic: do you know what software you're running? Anyone in this room claim to know what they're running or what they depend on? The answer seems to be no one really does have a clear picture. Yes, there are verticals of knowledge. Apple knows what's in the iPhone and can push an update to you, obviously. Linux distros know what's in their systems. Microsoft Update knows what's in their stuff. But if I have all of those vendors in my hospital or in my voting machine or in my county system, they don't talk to each other. Those verticals are all separate silos. Does your supplier know what's in their stuff? Let's start there. Here are a couple of recent examples. Is there Broadcom stuff in there? Do you have Broadcom Wi-Fi and Bluetooth stacks and chips? VxWorks, everyone know what VxWorks is? Popular commercial embedded operating system, been around for a long time. Armis researchers released URGENT/11 recently, a nicely named vulnerability: a bunch of bugs in VxWorks. Except if you read a little more carefully, it's in this component called IPnet, made by a vendor called Interpeak. Wind River Systems, now owned by Intel, bought Interpeak, who made IPnet, in 2006. Any IPnet code floating around pre-2006 could be in other stuff, not VxWorks: all sorts of embedded Linux. And we don't even know the extent of this thing. That is a supply chain, bill of materials problem. Katie went through the nice CVD process. Think about when you have 120 vendors, not just one, at the other end of that process. This is a supply chain problem. We don't know what we're running. Everyone always asks how many of these are there and where are they? The answer every time is guesswork and investigation. Every single case is horrible and we don't know what we're doing. You may already be tracking this.
If you're trying to do due diligence with intellectual property and licenses and make sure the GPL flows through your code, you're already tracking this to some extent. We want you to track it for security issues as well. You need high assurance components. Does the integrity of your voting machine matter? You want to be absolutely sure where the parts came from and where the software came from? Having the list of parts is going to help you. Even aside from cybersecurity: cleaner supply chain, fewer suppliers, higher quality; there's some Deming stuff about this. It saves you money, flat out. Do it for that reason alone, please, and get the secondary benefits. This is Broadcom. On the CVD topic, there's a very, very long disclosure timeline in this blog post; it's a great read if you want to know more about that. But the topic here is a kind of obvious statement: we have no idea who's affected by this. That's still the case today; this is about a year old. Broadcom sort of published fixes for one of their drivers but not the other one. We've asked multiple times. We've done a lot of public research. We cannot tell where the fixed Broadcom open source driver is. We do not know the answer. We would have to just retest this, go rebuild Quarkslab's proof of concept, and figure it out that way. Yeah, I covered this briefly, but here's Wind River. This is Wind River's very good advisory, and their documentation says: in 2006 we bought these guys. IPnet was available previously. We don't know where it is. Not our fault. They call out a couple of the CVEs that standalone Interpeak is vulnerable to. I don't know if these version strings are Interpeak's or Wind River's. And searching on the internet, like I'm able to do, I found some really old business press release that talks about INTEGRITY, OSE, Linux (I know what Linux is), VxWorks, and others. I don't know INTEGRITY and OSE, the embedded operating systems.
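The "what's in there" question is exactly what an SBOM answers. Here's a minimal sketch, with a toy nested parts list whose component names and versions are made up, and whose shape is only loosely inspired by real formats like SPDX and CycloneDX. The one query that matters during something like URGENT/11 is "do I ship this component anywhere, at any depth?":

```python
# Toy SBOM: a nested parts list. Names and versions are illustrative,
# not a real product inventory.
sbom = {
    "name": "voting-machine-firmware", "version": "2.1",
    "components": [
        {"name": "vxworks", "version": "6.6", "components": [
            {"name": "ipnet", "version": "unknown", "components": []},
        ]},
        {"name": "openssl", "version": "1.0.2k", "components": []},
    ],
}

def find_component(node, target):
    """Depth-first search for a component name; returns all matching entries."""
    hits = [node] if node.get("name") == target else []
    for child in node.get("components", []):
        hits += find_component(child, target)
    return hits

hits = find_component(sbom, "ipnet")
# One hit: IPnet is buried two levels down, inside VxWorks.
```

Without the nested list, the IPnet question is the guesswork-and-investigation answer; with it, it's one traversal.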
Anyone know, recognize those? They're probably old. This is from back in the day when you sold your networking stacks separately; they didn't come with the operating system. Yes, anyway. We just want this, right? We'll get to whether the list was correct and whether the relationships were correct and all that stuff later. We just start with what's in there, right? I want to see OpenSSL, Interpeak, whatever, the way an ingredients label tells you what's in Doritos. And how do I do this? There's an effort going on right now that is going to produce some help; it's not going to solve the problem. This is the Department of Commerce in the U.S. There's a small URL here. September 5th is a meeting. Those of us involved in this process are working toward it. We're trying to sort out the global answer to: how do we build a global software bill of materials system to answer the supply chain questions? Anything on supply chain? Okay. Other topic: secure over-the-air updates. We can kind of cut the over-the-air part. I don't really care if it's over-the-air or Ethernet or even if you plug it into something and update it over USB. You have to do updates because you have vulnerabilities and you want to fix the vulnerabilities. All software these days has to be able to be updated. I realize that's a little bit of a harder story with high integrity systems and embedded systems with long lifetimes, but we're going to need to re-engineer those things over time to get them faster and faster to update. We really want secure updates. There's a CWE, common weakness enumeration; these are the root causes of bugs. If I download code and run it without checking integrity, something bad can happen. Something bad like NotPetya: $10 billion in losses because the update system was compromised. So, updates. We want secure updates. That means you need to sign things. Key management can be a problem, but still do it. You're centralizing your risk. Still do it, but be aware of this.
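The weakness being described is CWE-494, download of code without integrity check. A deliberately simplified sketch of the check-before-install principle, using a pinned SHA-256 digest; real update systems (TUF, vendor code signing) use asymmetric signatures with key management and rotation, which this toy omits:

```python
import hashlib

def verify_update(blob: bytes, expected_sha256: str) -> bool:
    """Install gate: accept the update only if its digest matches a value
    obtained out of band (in a real system, from a signed manifest)."""
    return hashlib.sha256(blob).hexdigest() == expected_sha256

firmware = b"firmware image v2.2"
expected = hashlib.sha256(firmware).hexdigest()

assert verify_update(firmware, expected)         # intact image passes
assert not verify_update(b"tampered", expected)  # modified image is rejected
```

The NotPetya lesson is that the gate has to hold even when the update server itself is compromised, which is why signatures, and protecting the signing keys, matter more than the transport.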
If those keys go out, big, big, big trouble. We've seen code signing keys, Microsoft code signing keys, compromised here and there to sign malware so it passes lots of checks. ShadowHammer was ASUS, a much smaller set of attacks than NotPetya, but these are examples of the centralized risk of centralized updates. We still want them. Yes, we realize that changing the system means you've maybe even lost your certification and have to re-certify. I realize that's a barrier, but we need to work that out. And rollback, right? I don't want to update. I manage the IT for a very small local grocery store in Pittsburgh, and I have some pfSense firewalls and I'm always hesitant to upgrade. Now I have two firewalls, so I've got that solved, but you want rollback. Maybe the election system has two computers in it, or two compact flash cards, or whatever. Any problem with the update? Just reboot to the old one. There are ways to get around this, technically. We have to support faster and faster update cycles. How do I do this? These are NSF-funded things. The Update Framework, TUF, is a little bit dated; not technically dated, but the work sort of wound down. Uptane is an automobile-specific version of The Update Framework. These are public. They're on the internet. They're on GitHub. The details of how to do this securely are in there. They thought about it. They wrote down all the steps you have to think about. So there's public documentation for this. There are others; this is an easy example because it's free and open source. Okay, so we've covered a lot of stuff, thrown a lot of things at you, and it's been a road. We're conscious of time and also that there was a slip in the schedule, but I want to do some closing thoughts here. I want to focus on the positives. When we look at vulnerabilities, in the beginning we said that we cover 16 critical infrastructure sectors.
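The two-compact-flash-cards idea is the classic A/B (dual-slot) update scheme. A sketch under that assumption, with all names illustrative: write the new image to the inactive slot, and flip the boot pointer only after the new slot reports a good boot, so a bad update falls back to the old image automatically.

```python
class ABSlots:
    """Toy dual-slot updater: the active slot keeps running until the
    staged slot proves it can boot."""

    def __init__(self, initial_image: str):
        self.slots = {"A": initial_image, "B": None}
        self.active = "A"

    def stage(self, image: str) -> str:
        """Write the new image to whichever slot is not active."""
        inactive = "B" if self.active == "A" else "A"
        self.slots[inactive] = image
        return inactive

    def commit(self, slot: str, boot_ok: bool) -> None:
        """Flip the boot pointer only on a confirmed good boot;
        otherwise keep running the old image (automatic rollback)."""
        if boot_ok:
            self.active = slot

dev = ABSlots("v1")
slot = dev.stage("v2")
dev.commit(slot, boot_ok=False)   # failed boot: still running v1
dev.commit(slot, boot_ok=True)    # good boot: now running v2
```

TUF and Uptane specify the metadata and signing side of this in detail; the dual-slot part is what makes "just reboot to the old one" a one-step recovery.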
Some are more mature than others when it comes to coordinated vulnerability disclosure, and that doesn't mean mature like you're grown up; it means you've been doing it longer. So we have a lot of positive success stories. Elections infrastructure is so pivotal to every single day of every citizen's life, and we can't harp on that enough. We're all in this boat together. We have to do things together. There are things that can be done at every single level, from the private citizen to the vendor to the federal government to the state and local government to the municipality. Everyone can do things. There's no single answer to this. There are many answers, and we have to work together. Those answers have to be cross-compatible. We have to be close to each other. That's one of the biggest takeaways here. I always try to tell people I'm not on the side of the vendor. I'm not on the side of the researcher. I'm on the side of the taxpayer. I'm on the side of the public. My job is to make sure that we're an unbiased broker in this situation and that we're providing the best information that we can to the asset owner and to the private individual so they can make the best risk-based decision that they can make. We hold to that. So we understand that it's uncomfortable. We've had a lot of success. We'd like to point to the medical sector. Up until a couple of years ago, there was almost no medical security research done. Then a couple of laws were changed a little bit, and we started to see a lot more medical research done. Initially, it was very painful. People didn't want to see it. People stuck their fingers in their ears and didn't want to accept the information. But then afterwards, the medical sector really just started embracing it. They got involved. These vendors said: you know what? We're going to fix these things. We're going to be transparent. We're going to be honest. We're going to be upfront. And that builds trust.
And now it's a routine thing. It's so routine it doesn't even make the press. But it's so pivotal that we get to a place where it's normal. So I like this quote on the top; well, actually it's Brené Brown's, but this is the one that I choose: courage and truth are never weakness. So sitting and listening never makes you weak. We have to do this together. And if we don't, I can't predict the future. So we're here. We're ready. We're able. If you have a problem, come to us. We want to help. We can be found through the CISA website, cisa.gov. There are sections on there that talk about vulnerability disclosure. You can email us there. We have an intake form that takes you to Carnegie Mellon, so you can submit information there. We do always ask that the researcher contact the vendor directly first, but if there are problems with that and the researcher needs assistance, or the vendor needs assistance, or a public partner needs assistance, we're willing to step in. We typically take cases that are contentious or multi-vendor. So every single day for us is a new adventure. It's a new adventure in mediation. I don't come from a tech background; I come from a human behavior background. So I feel like I do a lot of moderation and mediation. But the point is that we have to get to a common solution, so that's what we're trying to get to here. The quote I chose, and the theme overall for the title, is that it's time for these ideas. First of all, they've been around for a while anyway, but it's definitely time in the election system sector. And I recognize that these three technical things, about fixing security bugs and giving information about security bugs, are not going to solve the entire electronic voting machine problem or the election system problem. This is a small piece with a lot of other stuff going on. But Katie mentioned this earlier, right? These are still computers, sometimes connected.
They have computer security problems. We know about those. Let's at least do the easy things for the computer security problems we have. We have lots of other problems, like the paper ballot stuff, and there are many, many other things for election systems, but we can handle these. We know how to do it. At least do this minimum, low-hanging-fruit stuff and take that piece of security off your plate. So thank you. And yeah, please, we can take a couple of questions possibly, and we'll be around offline too, and we exist: public, taxpayer, US-funded work, a service to get these things fixed. We don't take the answer no from a vendor. We just get them fixed. So, all right. Two minutes, so we can take a couple more questions. Who's got questions? [Audience question, partly inaudible, about whether updating over USB exposes the system to attack.] I don't have the immediate technical answer. There's a lot of detail here, obviously, right? I mean, a slightly higher level technical answer is: whatever code you're delivering is signed, and that part works. But to your point, if the USB device acts as a keyboard and injects something, right, exactly. So, I do not know the answer. There's work to be done there, but this can be done. And, you know, right, evil USB; I don't know, physically protect the USB stick, maybe better chain of custody there. I'm not an election system super expert. I realize that's probably foolish speak, but signed software, signed firmware at a minimum, right? Sure, you, sir? Yeah. You're not alone, sir. Well, I think, for your topic, you know, a single Linux distribution has, let's see, my favorite one is 37 implementations of gethostbyname, and it actually uses about 700. I mean, we sympathize. We do, we sympathize. But, you know, seatbelts were hopelessly naive too, and now we all do it.
So, we do sympathize, and we're not underestimating the amount of work and the complexity that exists here. I think we're all professionals in this room, so I think we fully understand and grasp the difficulty. The point is that we have to change. We can't keep doing the same thing we're doing, and we understand that it's going to be a hard road, but, you know, the Montreal Protocol wasn't signed in a day. You've got to take steps, little bits at a time. No, no, I absolutely agree with you, but there is progress. I believe it is possible. If you know this guy, Alan Friedman, whose email address I happily and proudly leave up here for everyone, he will talk to you forever about it. So, I'll talk to you afterwards too. Thank you, everyone. We can be found around, so please come ask us questions, talk to us. We're actually hanging out in the Aviation Village if you ever want to come talk to us over there. I've got four or five of my staff hanging out over there, so thank you, guys.