As I said, we are recording this on the record, but some folks in the audience may not want to be internet famous, so keep that in mind. And with that, let's start with Harley for an introduction. Hi, everybody. I am Harley Geiger, and I'm a cybersecurity attorney with the law firm Venable LLP. I've been working in cybersecurity law and policy for about 15 years. I've been a Hill staffer, as well as an attorney at the Center for Democracy and Technology, as well as the cybersecurity firm Rapid7. Thank you all so much for being here. Hi, everyone. My name is Mikayla Lee. I work at the White House at the Office of the National Cyber Director as a director on our strategy and research team. I previously worked at cybersecurity consulting firms, and my background is actually in human rights and looking at the human rights violations that can stem from the use of tech products and services. Hello. Good morning, everyone. My name is Suzanne Schwartz, and I am the director of the Office of Strategic Partnerships and Technology Innovation at FDA's Center for Devices and Radiological Health. It is our center and our office that has spearheaded the efforts around regulating medical devices from a cybersecurity perspective. You'll hear more about that. Hello. My name is Lindsay Forsen. I am the deputy executive director of the National Association of Secretaries of State, or NASS. I handle our cybersecurity and election security portfolio, among other things, at NASS. I came to NASS by way of academia, where I looked at cybersecurity for local election offices from an administrative capacity standpoint at Auburn University. Happy to be here. Hi, everyone. My name is Lauren Zabierek. I am a senior advisor at the Cybersecurity and Infrastructure Security Agency, or CISA, in the Cybersecurity Division. I came to this role from the Belfer Center at the Harvard Kennedy School, where I ran the cyber project for a couple of years. Prior to that, I was at Recorded Future.
Prior to that, I was an intelligence analyst in the government, where I did counterterrorism work. And then prior to that, I was in the military. Thanks. All right. So the way this is going to work, I've been asked to give a 15-minute or so overview, just a high-level look at US cyber policy. And then I'll turn it over to these amazing colleagues here to talk about their work and their agencies and how they fit into cyber policy and some of their priorities. So we'll get started. A few years ago, I think for people who have been paying attention to this space, we spent a lot of time as advocates trying to convince the government that cybersecurity was a problem and that we needed to do something about it. We are past that stage of advocacy. The federal government, and many of the states as well, are now aware of cybersecurity. It's front-page news every day. And there is a lot of action happening in the space. And so the question is more about what do we do, as opposed to should we do something. So I'm going to start out with a very rough estimate of the numbers to give you a sense of the volume of action on this. In the United States, in the legislative branch, the US House and the US Senate, right now in the current Congress, which is a two-year Congress, there are 195 bills with cybersecurity in them. This is, again, a very rough measure, but it gives you a sense of it. 13 have already passed one chamber, meaning they have passed either the House or the Senate, and two have passed both chambers. In the executive branch, you have 213 notices and proposed rules with cybersecurity in 2023. And here we're just looking at cybersecurity, so it's leaving out a number of other regulations that are adjacent or that matter to cybersecurity.
28 are proposed rules, meaning these are regulations that the US government is considering, and then 30 have become rules. The reason the numbers are different is because some of those came from before 2023. And I don't have great numbers for the states, credit to NCSL, but in 2022, 40 states considered about 250 cybersecurity bills. 24 enacted some form of cybersecurity legislation, and many others are considering it presently; I just don't have great full state coverage right now for 2023. So a fair amount of action, especially compared to, say, a decade ago, when this was not on the radar of most politicians. It is a high volume. And this doesn't even include non-regulatory documents like guidance, best practices, and standards. There's a high volume of those as well. So I'm going to talk a little bit now about some of the specific sectors where this is happening, starting with critical infrastructure. I think that critical infrastructure is a really important one, particularly this year, not only because it's critical infrastructure, the stuff that we depend on for life and safety, but because, starting at the end of 2022, there has been a turning point in the US government's approach to critical infrastructure cybersecurity. The previous model was sort of voluntary risk management. With the exception maybe of bulk electric or the financial sector, many sectors did not really have a cybersecurity rule requiring them specifically to secure their infrastructure. And so the approach was, we will work with the sector and come up with a sector-specific plan, and the oversight has been criticized in the past as being very loose for those sectors.
What is changing now, and this was announced very clearly in the National Cybersecurity Strategy that was released earlier this year, is that the government no longer views this voluntary model as sufficient. We have made progress, but we have gone about as far as we're going to go with entirely voluntary risk management. And so the emerging model for critical infrastructure cybersecurity regulation is for the government to use its existing authority in ways that are somewhat creative, and ideally in ways that are straightforward, and where there are gaps in cybersecurity regulation for the sectors, to seek out new authorities from Congress in order to make cybersecurity requirements explicit in regulation. In just the past year, year and a half, we've seen regulations for sectors that include water, rail, pipelines, aviation, and financial. I think we are all broadly anticipating the chemical sector. There's a lot of talk about something coming in the health sector. You'll hear soon about medical devices. So there's been a lot happening on critical infrastructure. This is, I believe, the first priority chronologically in the National Cybersecurity Strategy. So this is an area of cyber policy that I think we can expect to roll on for the next few years. Another big one: the IoT security trust mark. They're calling it a trust mark because people just lose it when they hear the word label. They think, oh, it's a sticker on a physical box. No, it is not just a sticker on a physical box. We call it the trust mark so people don't get it confused. What we are talking about here is IoT labeling, the nutrition label; this is a mark on connected devices. It can be on the physical packaging, or it could be in the online content around the device, that shows that the device has some sort of baseline level of security.
So it's been talked about for a long time, but what's changed now is that the White House has announced that it is going forward with this. It has White House backing, and the Federal Communications Commission recently issued a notice of proposed rulemaking to implement this concept. So it is going to march ahead. And again, this is not just a static label, just a symbol by itself. The vision for it is that it will be layered and dynamic, meaning that as a regular consumer, you can see top-line, very basic information. If you are a more sophisticated consumer, like the fine people in this room, you can go into a deeper layer to get more technical information, and then very, very deep layers for things like machine-readable information. There is talk of including a QR code on it, which I think is going to be very interesting. And the security content that you have to attest to in order to apply for and use the trust mark is going to be voluntary security standards issued by NIST. NIST has issued an IoT security baseline, and we anticipate that's going to be the criteria. So although you're not required to use the trust mark, if you use it but falsify your attestation, saying you've met this baseline when you haven't, then you could be liable for false marketing, false advertising, under the FTC Act. And so in that way, there is an enforcement mechanism. I mentioned that the FCC has this notice of proposed rulemaking out now. There is an opportunity for everyone to comment on that; they're soliciting public feedback if you want to get involved. Right now it is focused on consumer IoT, but there is discussion about also applying this concept, if not this exact label, to routers and industrial IoT next. So, vulnerability disclosure is a topic that has been discussed a lot at DEF CON. And we are seeing the good kind of vulnerability disclosure proliferate through regulations and policies in the United States and abroad.
So when I say the good kind, I mean: we have a channel that is open, and if you submit your vulnerabilities to us, then we will triage them and do something about it and then communicate back out with you, and you can tweet about it or something. What is changing, however, and we're seeing this as a regulatory trend, is that there are a growing number of regulations that contemplate requiring disclosure of vulnerabilities to government agencies, on timelines that don't necessarily mean that the vulnerability is going to be patched. A big one is the EU Cyber Resilience Act. If you have not heard of that law, you will. It will have a GDPR-like effect on security when it passes. I say when because it's going to pass. This is not law yet; they're discussing it at the legislative level, but it is at a very advanced stage. The EU Cyber Resilience Act includes provisions to require organizations, if they have an actively exploited vulnerability, to notify. It's unclear which government agency it'll be, possibly ENISA, possibly the national CSIRTs, but to notify them about that vulnerability within 24 hours. Within 24 hours of finding out that your vulnerability is exploited is most likely not enough time to patch that vulnerability. And this is for any software being sold in the EU. The effect of it will be a rolling list of software packages with vulnerabilities that were recently exploited and not likely patched being shared among EU government agencies. In France, there is a similar law, now enacted. This was in their military programming legislation. And it too required disclosure of vulnerabilities to the French government, to ANSSI actually, within a short period of time, less than three days. And there, I believe, there was not even a distinction of whether or not the vulnerability was exploited. At this time, there is no distinction between whether the vulnerability is exploited by a criminal or by good-faith security researchers.
And there's no distinction as it relates to severity. If you're wondering how organizations are even going to comply with this, you're not alone. China has a somewhat now-famous vulnerability disclosure requirement. This was passed, I believe, last year, and it came into effect. And this too requires every organization to have a CVD, or vulnerability disclosure, policy, and to pass on disclosures to the Chinese government. If you are an independent researcher, you have the great choice of disclosing the vulnerabilities that you have discovered either to the company that owns the software, which then must disclose them to the Chinese government, or directly to the Chinese government. That's it. You don't get to talk about it publicly. So this is a trend that we're seeing worldwide. There is, indeed, in the United States now legislation to update FISMA, the law that requires security for government agencies, that looks at doing something somewhat similar. I'm not going to spend a lot of time on that because it's still new and I don't expect it to pass, but it is out there. The reason why we think this is a problem, quickly, and probably many of you get this, is that if we're disclosing unmitigated vulnerabilities to a lot of government agencies, there are risks. You are depending on the security of that government agency, and in the EU, under the Cyber Resilience Act model, if it goes to all the agencies they contemplate, that's more than 50 agencies. You're depending on the security of all of them not to alert adversaries. You are also depending on trust that they will not use these vulnerabilities for surveillance purposes. And independent research, unless it's specifically carved out, is going to be kicking off this process where I found a vulnerability in your software and now you have to tell the government. That's going to make you less welcoming to my vulnerability disclosure. So keep an eye out for this. Incident reporting.
Here we're talking about something separate from vulnerability disclosure, and also separate from breach notification. When we talked about breach notification in the past, that was about loss of personal information. Here, there is, at this point, a fairly well-developed set of laws and requirements to disclose cyber incidents that are independent of whether or not personal information is involved. And so if you have been disrupted in some sort of way, you now, depending on what sector you sit in, may have an obligation to notify regulators. CIRCIA is a big one; that's for critical infrastructure owners and operators. It flows down to their contractors as well. So if they experience a significant incident, they will have to notify CISA. It is law, but not yet in effect; it's at a regulatory stage where the final regulation is not out. Many of you have probably heard in recent days of the Securities and Exchange Commission requirements, the SEC. This is a very big one. This applies to every publicly traded company. And the requirement is such that if you are a publicly traded company and you experience a material cybersecurity incident, you must disclose this, effectively publicly, within four days of determining that this is a material incident. So you disclose it through your 8-K form, which then goes on to the EDGAR system, and that is public. So that is, again, four days, and it is not clear whether or not your incident is going to be contained or mitigated at that time. Now there's a bunch of others, the financial sector, the FCC; the point is that every sectoral regulator out there wants a piece of this action, and they're all putting out incident reporting requirements. There's not a general one for the United States, but the patchwork does spread pretty broadly. And when I say patchwork, we recognize that there's a patchwork. The US government has talked openly about it.
CIRCIA itself has a provision in the law that requires an incident reporting harmonization council. But harmonization is also just a fancy word. What do we really mean by that? What is the outcome going to be? And especially, how will that process affect independent agencies like the SEC that don't necessarily have to listen? How will it affect things like the Cyber Resilience Act and non-US organizations? And then lastly, the talk about incident reporting is fine, transparency is good, but we had transparency with data breach notification, and it didn't fix privacy. Cyber incident reporting is not going to fix cybersecurity. It'll be a big compliance burden, and it will help transparency, it will help some things, but it is not the same thing as resilience. So beyond just incident reporting, there is right now a drive within the US government, driven by the White House, to harmonize cybersecurity regulations more broadly, not just about cyber incidents. It is almost like making cyber policy about cyber policy. The Office of the National Cyber Director has issued a regulatory harmonization initiative and is asking for public feedback right now. That is a request for information that is open now, and anyone here can comment on it. The idea is to try, as much as possible, to get our cybersecurity requirements to be more consistent, and ideally to be based on a baseline. That baseline right now looks like it will probably be the performance goals that CISA has issued, but it may differ by sector. Streamlining regulations that are redundant, so that you don't have to comply with the same thing twice or report the same incident twice, for example, and ideally cutting across not just the federal sectors but the states as well. Here again, however, you have the same problems as with incident reporting harmonization. How will it affect independent agencies? How will they work with states?
And many of the problems are outside of the United States as far as regulatory compliance. So it's a significant challenge, but this is one that is ongoing right now. We're gonna hear more about harmonization for the rest of the year. And then artificial intelligence. It's so hot right now. But if it's already illegal to hack, does it need to be more illegal to hack with artificial intelligence? There's a lot that's happening, and I wanna kick it briefly to my killer colleague, Heather West, who follows this space very closely, to talk about some of what's happening in artificial intelligence. I love that I get to jump up here for just a hot second. As Harley said, artificial intelligence is all over the place. Is it new? Is it not? Who knows? I mean, we kinda know. But it's worth walking through a little bit of what various governments are thinking about here, because it could be really, really impactful, particularly at the intersection of these artificial intelligence systems and cybersecurity. One of the things that I think is a little bit underappreciated by some policy makers is how important artificial intelligence is for cybersecurity. It's kind of a bedrock to a lot of these technologies. And yet, there's a lot of hand-waving happening. So you've probably heard of a number of administration actions, one of which is kind of being operationalized next door in the generative AI red teaming exercise in the AI Village. It's actually pretty cool. I had a chance to poke around a little bit. It's worth taking a look. They're letting people, at scale, go in and try to break these systems. That's really cool. And these companies have committed to working together and working with the government to try to make sure that AI systems are secure and that we know how to use AI for security, which is pretty cool. Earlier this week, the DARPA AI Cyber Challenge was announced, with funding over the next few years to think about security.
Like, it's really apparent to me that the government has realized that focusing on security for AI is one of the places that we know how to jump into right now. There are a lot of other pieces of this puzzle, and I have the sneaking suspicion that the other folks up here are gonna talk about this more than a little bit. So I won't go too far into it, but it's worth looking at and differentiating what's new and what's not. It's so hot right now. I think legislators have a rubber stamp that says, now, with AI. There are a ton of congressional and state proposals. They are all variously interesting. Some of them have to do with security. Some of them have to do with fairness and bias. For example, there's a bunch of state proposals around hiring bias. That's incredibly important. It's really interesting. There's a lot of action internationally. The AI Act in the EU is in trilogue right now, their final negotiations. One of the most notable things there is how it interacts with the Cyber Resilience Act that Harley mentioned, the Product Liability Directive, and the AI Liability Directive, thinking about what it means for an AI product to be defective and what the burden of proof is to establish that. And defective can include security. There are over 50 national strategies out there from various countries thinking about how they're gonna approach AI and how they're gonna approach security for AI. And there's a ton of private sector initiatives. I personally am spending a bunch of time with my colleagues at Venable thinking about what it means to be utilizing and developing AI well, responsibly, securely. So it is incredibly hot. I can't wait to hear what everybody else is thinking about on this front. Thanks, Harley. Thank you, Heather, that was amazing. Not now. Okay, that was the right choice.
So because this is DEF CON, I also wanna talk about some of the hacker law priorities, and then we're rounding out the end of it before I turn it over. In the past, we've talked at DEF CON quite a lot about the Computer Fraud and Abuse Act. And I know I'm skipping out of order here seemingly, but hear me out. The Computer Fraud and Abuse Act has had a lot of changes over the past couple of years, through the Supreme Court, the Van Buren case, as well as the Department of Justice charging policy on CFAA. The point is the CFAA is now more narrow than it used to be. It has taken some of the heat off of some forms of research. That is not the same for the states. State laws were not touched by the Department of Justice's charging policy, and they're not touched by the Supreme Court ruling on CFAA, because the CFAA is a federal law. The things that this community has historically feared and complained about as it relates to the overbreadth of the CFAA are actually more present in the states. So anyone here, from whatever state you are from, if you look up your state and then computer crime law, you'll be able to read it for yourself, but they have not evolved. And so I just wanna emphasize that as a hacker law priority; I think this is where the action ought to be for this community that has historically done so well at speaking up for itself. DMCA Section 1201. This is arguably the second most important hacking law at the federal level. That law requires you to get the authorization of the software copyright owner before circumventing a technological protection measure on software, even if you own the copy of the software, because it is a copyright law. That too has gotten a lot better. There are now exceptions for security researchers there. Where there is not an exception, though, is in what they call trafficking.
So this is making your tools available to the public in some form, whether you're doing it for pay or just making it available for free. Section 1201 forbids this, and there is no exception. Every security company is just sort of whistling past this regulation. It's not often enforced, but it has been enforced. It is not just a criminal law; it is also subject to private lawsuits. And we have seen a private lawsuit based on this in the past. CFAA, I mentioned some of the evolution of it. The DOJ's charging policy was very significant insofar as the Department of Justice says, if you are doing good-faith security research, we will by default decline to prosecute you, as long as you're not doing something worse. That's prosecution. The CFAA, like the DMCA, is not just a criminal statute, it's also a private statute, meaning that you can be sued privately under the CFAA. That has not changed. We're seeing a lot of bright spots in CVD and vulnerability disclosure policies, but like I said, I think we're getting further left of boom, to the point where we are now being required, in some circumstances, to disclose our vulnerabilities in a mandatory way to government agencies. And that is our chief concern as it relates to the way that this is evolving globally. This is my last slide, and then I'll turn it over. If you wanna advocate on these things, this is part of what we hope you'll come away with. It is easy to just appreciate the problem, to say, well, the CFAA is too broad, or the states haven't evolved. What is harder is finding the right officials to describe your problem to. Some of them are up here. Some of them are at DEF CON. But finding the right official to describe that problem to is really the next step. And then even more difficult, substantially more difficult, is actually coming up with a solution to the problem that you have identified.
And that is what we ask our government officials to do. That is what makes their job pretty hard. And coming up with a solution that actually works for the world is even more difficult, because it's not just security that the world cares about. There are other things, like business or public safety, that all must be balanced; it's not just about hacker rights. And then finally, making actual tangible progress on your solution, that's where it really begins. Thank you so much. I hope you appreciated that overview. And now I wanna turn it over to my colleagues at the White House here to talk about what they do. And that's it. Thank you. Thanks so much, Harley. That was an incredibly useful background for, I think, what the rest of us on the panel are gonna talk about, which is: what do each of our organizations do within this context, and how do we do it? The hope is that through this session you'll have a better understanding of what each of our offices or organizations does as it relates to cyber policy, the kinds of levers and tools that we have in our toolbox, and hopefully as well, the kinds of levers and tools that you have in your toolbox to engage in this policy process. The one thing that I wanna say up front is that the challenge with doing a Cyber Policy 101 session is that sometimes there are very, very different understandings of what the word cyber means. And that is, I think, understandable. There are people who think of the word cyber and take that to mean offensive tools and approaches to attack adversaries. They might think that it means enterprise IT networks and securing those, or the defense of critical infrastructure. To some people, cyber means a job classification that helps the federal government identify the types of technical skills that we need within our departments and agencies. There's a really wide range of topics here.
I think over the course of the rest of the session we might use the same terms to mean different things, but hopefully we can help parse the differences and the nuances within the way that we think about cyber policy. So that's just a little bit of framing up front. For those of you who came in a little bit late, I work at the Office of the National Cyber Director at the White House. And we are an incredibly new office, a new component in the Executive Office of the President, established only a few years ago through the recognition that historically it's been very difficult to establish some level of cohesion across the federal government to address cybersecurity issues. ONCD sits at the intersection of all those different definitions of cyber that I mentioned up top. And that makes it incredibly useful for bringing some of those elements together, but also challenging, because that world is very complex. And so one of the things that we hope to do is to engage deeply with the private sector and with civil society to better understand that complexity and apply that to the way that we think about policy making. So ONCD was established in 2021 by Congress. It took us a little while to get off the ground because we didn't have money at that point; the appropriation only came a little bit later. But we're now about 80 people strong and have been working on a number of different issues that I'll lay out for you all today. I'll talk a little bit about the National Cybersecurity Strategy, which was released in March, that Harley mentioned, and then all of the work that we've done since then to implement that strategy.
I think the challenge of a strategy document sometimes is that it's words on a page that could turn into words on a shelf unless there is sufficient buy-in within the right communities, both within the interagency, other departments and agencies across the federal government, and buy-in from other stakeholders in the private sector or in civil society. And our hope is to make sure that this is not a strategy that sits on the shelf, but that it is implemented and that it is funded, that we're able to align our budget to our aspirations over the course of this implementation. So the National Cybersecurity Strategy that was signed by the president and released in March of this year laid out two fundamental shifts. If you're familiar with this already, that's great. For those of you who haven't had a chance to take a look at it, I definitely encourage you to read it, because these two fundamental shifts are important to the way that we're thinking about the vision for cyberspace in this decisive decade, the things that we hope to accomplish in the next decade and in years to come as well. Harley talked a little bit about regulation. That's part of the first shift that we established in the National Cybersecurity Strategy. That shift is about shifting the burden, understanding that there needs to be a change in terms of who is responsible for managing cybersecurity risk. We wanna move that away from the end user, the consumer, and move it up towards those who are most capable, who are the biggest and potentially most responsible entities for that cybersecurity risk. Now that involves things like regulation and regulatory harmonization, and we also talk in the strategy about a software liability model as well. But this is something that we want to change, because as it stands now, we're asking elementary schools and rural hospitals to go toe-to-toe with transnational criminal organizations.
And not only is that not fair, it's also not effective. We want to change that model so that we can address some of these problems at the part of the chain where they are most able to be addressed. So that's the first big shift. The other big shift is thinking about how we incentivize long-term investments in cybersecurity. What we want is a world in which, when private sector or public sector entities are weighing trade-offs, the trade-offs that they have to make between the perhaps easy but temporary fix and the more difficult but enduring fix, the enduring solution is the one that is the right choice. We want to make that an easier choice. We want to help shape market forces so that that choice is the one that people automatically go to. And sometimes that's difficult. One example that I'll raise is the kinds of infrastructure investments that we're making right now. The Biden-Harris administration has signed legislation to pour millions and millions of dollars into infrastructure, into clean energy infrastructure as well. And one of the potential risks is that money rolls out and that infrastructure is built without sufficient understanding of the potential cybersecurity risks, or without building some of those cybersecurity mitigations in at the beginning. And if we look five years, 10 years, 20 years down the line, are we going to run the risk of vulnerabilities in that infrastructure? Are we going to have to rip and replace, which may cost exorbitantly more? How do we make sure that we are doing things in a way that is secure at the beginning, and that we're thinking about that kind of secure-by-design mentality early on? So incentivizing those long-term investments in cybersecurity is the other big shift that we talk about in the National Cybersecurity Strategy. The implementation of that strategy is something that we've been focused on for the last couple of months. And in fact, a month ago, the implementation plan was released publicly.
This doesn't happen all that often, but this is something that we wanted to make sure happened, both for transparency's sake and for accountability's sake. We think it's important to put the implementation plan out there not only for us to hold ourselves accountable, but for you all to hold us accountable as well. You can find that implementation plan, pull it apart, ask questions about it. There are things that are already underway and already being implemented. One of the pieces that Harley mentioned is the request for information on regulatory harmonization. We know that regulatory harmonization is a big, thorny topic that is going to take years to unpack and understand. And we know as well that engagement with the private sector is necessary for us to be able to do that effectively. So that's out and live now. There's also one that was just released yesterday on open source software: a request for information where we would love to hear from you all in this room, as well as many other people who are here at DEF CON, about the ways in which we can think about security of open source software and memory safe languages. So those are some of the things that we've been doing recently. I'll hit on two more and then pass it off to the other panelists here. One thing that we released last week is the National Cyber Workforce and Education Strategy. And I think that this is incredibly important because it addresses the kind of missing piece of the puzzle that's often under-emphasized or overlooked. And that is the people, the talent, the expertise that is needed to do all of the things that we hope to do, to meet our aspirations for cyberspace. And that people part is incredibly important.
Finding training, getting people into jobs where they can help defend critical infrastructure, where they can think about the issues that arise with IT/OT convergence, where they can be in positions to help make decisions on how we think about securing our nation's digital ecosystem. Those are things that require talented people, and the National Cyber Workforce and Education Strategy is the US government's plan to address those shortfalls and to train up the next generation of our digital workforce. And then finally, the last thing that I'll mention is the kind of values alignment piece. So the national cybersecurity strategy talks about defensibility, it talks about resilience, but it also talks about values alignment. And that's something that is often difficult to define and shape. One of the things that we did around the Summit for Democracy back in March is work with the State Department to put out a call to action on the development and implementation of censorship circumvention tools. So thinking about the ways in which technologies can be helpful in circumventing the authoritarian censorship that happens in other countries, in order to make the internet more accessible, more open, and more resilient. Those are the types of things that we've been working on in the last couple of months. Some of my colleagues that are here at DEF CON have sessions on those topics, including on secure by design and open source software and anti-censorship, and I definitely encourage you to find any one of us. We tend to be wearing these shiny badges. Ask us questions, give us feedback. We definitely want to hear from you all in this room. And I'll pass it over. All right, thank you. And I want to start off by saying, Harley, a lot of your overview provided a lot of touch points for us at FDA, reflective of the journey that we've had over the past decade and a little bit longer.
And let me first also mention that I'm going to have to leave a little bit early with my team. My team members are actually here in the audience as well, but we're going to have to hop over to the Biohacking Village for some activities over there. I think the best place for me to start, especially since we're here at DEF CON right now, is in reflecting back on what some of you here in the room, and possibly those of you who are listening, recall with respect to demos that have taken place over the years on the stage at Black Hat and DEF CON, showing the potential for exploiting vulnerabilities in medical devices that can result in significant, I would say sometimes catastrophic, patient harm. And those types of demonstrations that have been done by white hat hackers, by independent security researchers, were very much a call to action for the agency. If we dial back all those years, 10 years plus, when those demos occurred, the state of our community, the health care ecosystem, was such that manufacturers were issuing cease and desist letters to those security researchers and were obviously coming to the FDA as well, complaining that researchers shouldn't be doing any of those activities: they're putting patients in harm's way, they're creating a crisis of confidence in the public with respect to the use of these devices. And if you consider that's where we were 10 plus years ago, what we at the agency needed to do to bring us to the present was a very, very hard hill, or mountain, to climb, I would say. And certainly we did not do that alone. Certainly we did that with many partners, many collaborators across the entirety of the medical device ecosystem.
And one of the places where we started was with this community, with the hacker community, with security researchers, really trying to get a better understanding of what was happening with respect to vulnerabilities as they were identified, and what type of potential risks they expose with respect to the use of these devices. I want to say very clearly, our stance at FDA is that we cannot have a safe device unless that device is cyber secure. I'll say that again. Our position is that for a device to be safe for patient use, a device that has capabilities of connection or has software within it, that device has to have basic security by design and the capability of being protected from a security perspective. And moving the ecosystem from a culture of "this can never happen, these vulnerabilities are hypothetical, they are theoretical; just because they're being shown on the stage doesn't mean that in a real use environment anything bad could ever happen." Moving people away from that mindset, from that culture, has been a really substantial journey over the past decade, essentially. But we've gotten there through a lot of the work that we have done, again, with groups such as I Am The Cavalry and others who have been huge proponents, and, as Harley talked about in his slides, the advocacy aspect of what an audience, what a coalition, can bring to a particular policy area where change is very much needed. So for openers, I want to make that very clear. So FDA, what is our mission? Again, we are there to protect and promote the public health. And we do that through regulatory activities, including oversight of medical products in total. Our office, the office that I represent within the Center for Devices and Radiological Health, has responsibility for medical devices.
And if you were to go back and look at the statutory language that we abide by, that we operate under, it has always been reasonable assurance of safety and effectiveness. We never had, up until recently, so I'll get to the really great news, explicit authorities around cybersecurity. If you were to go through all of our regulations, all of the food and drug law, the word cybersecurity, up until very recently, had never appeared. So you might say it was sort of a conundrum back in 2013, when we really started to deal with these vulnerabilities being brought forward to us by security researchers, as to how we were going to be able to appropriately deal with this within our regulatory schema, our framework. And we did have to be pretty creative about that. Harley talked also about, if you don't have explicit authorities, how one might use existing authorities in order to issue policy and, again, drive towards advancing or strengthening cybersecurity, which is what we did through a lot of our initial policy documents, starting from the pre-market side. And what we mean by pre-market is that manufacturers within the medical device industry need to come to the agency with a submission, with a package of material in which they are providing data or evidence that supports that reasonable assurance of safety and effectiveness. And in order for that device to be allowed, or what we call authorized, to go onto the market, to be used, to be sold, to be enabled by clinicians and by patients, it has to go through that FDA process. So back in 2013-14, we issued that very first pre-market guidance laying out basic tenets of what we expect manufacturers to do with respect to security by design.
Again, we don't have the explicit authority, so what we used were other existing regulations, the quality system regulation for one, as a hook, if you will: manufacturers needing to abide by that existing regulation in order to meet design controls and other aspects of security and software that would allow those devices to go on the market. We continued along in this journey, a lot of it through, again, meeting with and engaging with the ecosystem. A lot of what we've done has been multi-stakeholder engagement: with patients, with health care delivery organizations and clinicians, with the independent security researcher community, and obviously with industry, medical device manufacturers themselves, as well as, now, an entire stakeholder group of third party security vendors. And in this period of time we were really trying to change the culture within the health care sector in terms of recognizing that these are emerging threats, that the health care sector has a very significant exposed attack surface that is not being addressed. And certainly from where we sit in our role on the medical device side, this is an area where we need to have manufacturers step up to the plate and deal with devices. Much of what we learned over this period of time as we did this engagement, as we developed these collaborations, included interactions with other government agencies, and included a lot of, yes, voluntary efforts through prior executive orders and prior administrations. But these were important in terms of really starting to, again, move the needle towards what our expectations were. In 2016, we released a post-market guidance, which recognized that, yes, while devices need to have security built into them as they go onto the market, we know that vulnerabilities are going to emerge over the lifecycle of that device. They're going to be identified. And it's not one-and-done once the device goes on the market.
But what is the manufacturer's responsibility in maintaining that device, from a risk management, from a cybersecurity perspective, throughout that lifecycle? Because of a lot of the work that we did with the community, you had things included within our guidance such as coordinated vulnerability disclosure: an expectation that manufacturers would need to have, first of all, processes and policy in place for ingesting vulnerability information that came to them, regardless of who provided that information, and that they needed to do the necessary assessment and analysis of that vulnerability to determine what type of risk was involved. And if that risk was of a critical nature, what we called uncontrolled vulnerabilities, the manufacturers absolutely needed to take action to reduce that risk through various efforts of remediation, and to disclose it to the public in a coordinated manner with FDA and with what's now CISA as well, so that there was a very aligned effort towards providing information and reducing the potential for a crisis of confidence among patients and family members who rely on these devices, be it pacemakers, implantable devices, insulin pumps, infusion pumps, all of those types of devices, for which we've also issued a lot of safety communications over the years as part of this disclosure process. So you had the pre-market guidance. We've had the post-market guidance. We've also promised all along that this is an evolution within the health care space. We're going to need to iterate. What we put out in 2014 or 2016 is not going to be sufficient to meet the mark of the present and the future, and we would continue to raise the bar where there's a need, based upon what we were learning. And so in 2018, we issued an updated pre-market guidance.
And that, in comparison to the 2014 pre-market guidance, which was barely nine pages, just basic foundations, was closer to 20 pages. That's a lot more meat, a lot more substance within that guidance. Here's the thing. By 2018, from all we were learning and what we were seeing, it became very clear to us that, yes, there's a lot that we can do through guidance. However, there are certain areas where, to really advance cybersecurity in medical devices, we were going to need additional authorities, very specific cybersecurity authorities that we really did not have. And we put out what was called a Medical Device Safety Action Plan. One of the sections in there was focused on cybersecurity, where we signaled to the public that we were considering seeking additional authorities for medical device cybersecurity. Included within that was requiring or mandating coordinated vulnerability disclosure, mandating a software bill of materials, and mandating that, as part of the pre-market submission that manufacturers need to submit to the agency, they provide the appropriate evidence that patching or updating a device that's out there is going to be safe and not going to affect the performance of that device. And while we signaled that in 2018, we also provided legislative proposals, what we call A-19s in the common lingo, where we made the case and justification for why we've got to close that gap. It wasn't about taking a regulatory hammer and bluntly putting down a request for regulations. It was really more like carving out with a scalpel those specific areas where we're going to need to drive further and where we need very specific regulatory authority.
And over the past several years, we've been persistent in pushing this particular area, working with many partners on the health care provider side, with the security researcher community, with patients, and with others in order to make that case. I'm going to speed up to where we are at present. Finally, on December 29th of 2022, to be specific, the omnibus passed, and included within it was the PATCH Act, which provides multiple provisions requiring manufacturers to abide by certain key elements in submitting their pre-market applications to the agency, as well as the expectation for what they're going to need to do for the lifetime of that device, including coordinated vulnerability disclosure policies and processes, with a time frame expected for not merely disclosing but patching that vulnerability, depending upon its criticality. In addition to that, it mandates an SBOM: manufacturers need to submit the software bill of materials to the agency. That statute went into effect 90 days later, which was March 29th of this year. I would also say to you that we issued what's called a refuse-to-accept policy guidance on March 29th, which essentially states that as of October 1st of this year, if a submission comes in to the agency that does not have the necessary elements for cybersecurity required for pre-market review, then that submission would be immediately rejected from going into the process of review. It returns back to the sponsor, the manufacturer, and the manufacturer is going to lose time as a result in being able to get that product to the market, until they provide the information so that the submission is accepted.
We've been working very interactively with manufacturers over the years, and even over this period of time since the statute has gone into effect, so that they can better understand what it is that they need to provide to the agency. But this is a real inflection point for us, and we're really excited to be at this stage right now. It doesn't solve all of the problems with regard to medical devices. It doesn't directly address legacy, and we could have a whole separate discussion around those devices out there that are legacy equipment, for which it's really more of a whole-of-community approach in addition to FDA. But certainly with respect to new medical devices that are going to be going on the market, this is a game changer. And we're very much leaning into this in terms of really making sure that we capitalize in full upon these new provisions, these new authorities that we were given. I'm gonna stop right there and turn it over to you. Great, thank you. I know we've had a lot of new folks come in the room, so just reintroducing myself: my name is Lindsay Forsen. I'm with the National Association of Secretaries of State. I know this is a lot of policy talk for Friday morning in Vegas. And I know the secretaries have a very specific role, and we're in a cyber policy 101 session, so I'm not gonna get too far into the weeds. But I do wanna touch on who is NAS, who are our members, what do we do, how they fit within the broader state cyber policy environment, and some priorities of secretaries and what they've been working on in the cyberspace. But I'll get through those things relatively quickly. So NAS is a membership association. We represent the secretaries of state across the country. Their profile has increased in recent years, as you might know. We've gone from, when I tell people what I do, they think I work for the US State Department, to, when I tell people what I do, they wanna talk election security.
And so that's been a bit of a transition. So secretaries of state: they're in almost every state, and in the states where they aren't, the closest equivalent position, often the Lieutenant Governor, may be our member of NAS. They do a lot more than just elections, actually, as you may or may not know. Every single secretary of state role is a little bit different from the other states'. 40 of them serve as their state's chief election official. Almost all of them run what we call business services at NAS for their states: registering and renewing businesses, providing resources to small businesses, those sorts of things. A lot of them are the main record keeper, and the archivist is within the secretary's office. A lot of them do international relations; that's our one tie to the US State Department. But their roles and responsibilities vary drastically. NAS, our role in the cyberspace and in all the spaces, is to be a liaison between the states and the federal government or other partners. We work very closely with CISA; we work with them almost every day on whatever cybersecurity issues the members are facing, whether looking for support from the federal government or looking for less support from the federal government. We also work with FBI Cyber and the intelligence community and the Election Assistance Commission, and we're kind of that conduit of information sharing back and forth. Our main role, and the work we are the most proud of, is sharing the best ideas and the most serious lessons learned across the states with each other. And I think that that information sharing role has really increased the collective resilience of secretary of state offices and the security of our elections in this country: just the sharing from secretary of state office to secretary of state office, the building of relationships across their CIOs and CISOs. That's really the role that is most important to us at NAS.
So, moving on to point number two: how secretaries fit into that state cyber policy environment. First and most importantly, probably, for this discussion, they are primarily administrators, more so than they are policy makers, right? They work with the state legislature and they sometimes advocate for policy positions, but ultimately their role is administering the law, and so that's an important point. But in most states the secretary of state is an independently elected constitutional officer, which means they don't fall under the governor; they are a separate statewide elected official. What that means is that they have to do a lot of coordination in the state with those who do work under the governor, right? The Homeland Security Advisor, the office of the state chief information officer, the emergency management department, whatever it may be called in the state, the national guard, the governor's office, the state legislature, right? So there's a lot of intrastate coordination that secretaries of state do, and that role has increased dramatically for them since the 2016 election and the designation of elections as critical infrastructure in 2017. And so I'll just make that point, and can address any questions on that one later, and I'll move on to the third point, which is, in the cyberspace, what have been the major trends and major priorities for my members.
So as you may know, election administration gets much more complicated than "the 50 states run their own elections," in that most of the administration happens at the local government level. We're talking 8,000 to 10,000 local government election jurisdictions. And so in the cyber arena, a big part of the role of the secretary of state is providing support to those local government administrators, in some places municipalities and townships, in most places counties. And so a massive push among secretary of state offices has been establishing what most of them call cyber navigator programs, where they're employing folks at the state level to go out and be boots on the ground in the counties and the municipalities, to work with folks to raise their resilience. Part of that effort has been maturing the risk assessment model for states, using actual frameworks that are out there to assess and prioritize risk management for the local governments. Again, just like our relationship with the federal government is still largely voluntary, the information and support structure from states to locals is generally the same, right? Most secretaries of state do not have much authority over their local election officials or local governments. And through the cyber navigator programs, they've really learned that most local election offices get their IT and cybersecurity support from somewhere else in the county or municipal government, or even from a managed service provider, so it's a very complex environment in which they work. There are some secretaries who can impose requirements; a lot of others use federal grant funding to incentivize certain requirements. And that's been a big move in recent years: to tie specific, very basic foundational cyber requirements to federal grant funds for election security as they pass them down to local governments.
Besides working with their locals, other major areas have been cyber incident reporting, for elections specifically but often within broader state laws, requiring some cyber incident reporting from locals to the state and/or from vendors, and either imposing requirements on their vendors through contracts or working with their vendors to come to agreements on things like vendors sharing vulnerability assessments, vendors establishing coordinated vulnerability disclosure policies, and things like that. And then the other point that I would say has been a major priority, and I could go on but I'll stick with this one, is maturing the cybersecurity teams in the secretary of state offices: realizing that the IT director handling cybersecurity is not working anymore, the role is bigger. And so a lot of secretary of state offices are hiring CISOs, having them be a part of the senior leadership team, and really building the team under that. And I said that was the last one, but one more, and a plug for our later session: establishing vulnerability disclosure policies and working with the hacker community. We've come a long way. We're not quite 10 years into the relationship like on the medical side; elections isn't there yet. But a lot of secretaries have established vulnerability disclosure policies in the last few years, not just for their election systems but for internet-connected systems that are under the control of the secretary of state office. And there are more definitely thinking about it, and working with some people in this room and a lot of people in this building to get to that point.
So we've done a lot of work at NAS, through our members in cyber leadership positions, to really build bridges with the hacker community and improve the relationship. And I really think that my members in many, many states are leading the way in state government in terms of establishing VDPs, and even encouraging the CIO to follow suit, or just building those relationships with the security research community. And we're gonna be talking a little bit more about that this afternoon at five. Thank you. I'm the last one. All right, first I'm just gonna ask you three questions. Okay, how many of you are familiar with CISA? Okay, awesome. How many of you are familiar with our director Jen Easterly and her Rubik's Cube skills? Okay, how many of you think you can do a Rubik's Cube in, like, I don't know, just a couple minutes? Yeah, neither. I can't do one to save my life, so when I watch her do it, I'm amazed. And I also wanna just acknowledge everyone here on the panel. It's been really fascinating to hear you, and I love this idea of being in government, because you see these issues and you want to try to fix them, and not necessarily in government but in other aspects of law and policy. So I just wanna give my appreciation and my admiration for you all, and especially for everyone here in the community too, for your interest. I just wanna thank you. So I'm at CISA. I've been there for about six or seven months. It's awesome. So if you're ever thinking about joining government, please do. We are always hiring, and if you are interested, I can talk to you after about some open positions. So what do we do? We work with our partners to drive down systemic risk and to defend our critical infrastructure. So let's unpack that. There are three main things in there. What's the who? The who is mostly the federal civilian executive branch, so we're the primary people responsible for protecting that.
We work with our state, local, tribal, and territorial partners, the private sector as well, and of course with the general public, with you all. How do we do it? Well, there are a number of ways that we organize how we're doing it, where we kind of get our strategy and authorities, and I'll go over that in a second. But right now our cyber strategic plan really is focused on three main things, okay? That's addressing the immediate threats, right? So it's kind of like that blocking and tackling, making sure that the most immediate threats are stopped, mitigated, prevented, et cetera. Hardening the terrain, right? So making it so that attackers really can't get to our critical infrastructure or things like that. And finally, driving security at scale. So, to Harley's point earlier, remember, reporting doesn't equal resilience. We know that, and so as part of the national cybersecurity strategy and that shift, and then the resulting strategies, something that we're really invested in is making sure that our products, our software, our hardware are secure, because that is what's really going to drive resilience and security. Things like reporting and all the other things are important, they're great, but ultimately it is the security of our products that will help us. I'll also put in a plug, on the state side especially, for emergency management, right? And I'll just give a plug to my former colleague at the Belfer Center, Juliette Kayyem, who really talks a lot about resilience and sort of right of boom, right? I think the left of boom and the right of boom are so critically important, and I don't think that we address it enough. I think we're kind of in the boom, right? A lot. We need to go left and we need to go very right, okay? All right, so how do we work with our interagency partners? How do we fit in? So think of us as the doers and the advisors.
So we get our authorities and our money from Congress, but then we get our strategic direction from the Office of the National Cyber Director; we look to the National Security Council and, of course, to the national cybersecurity strategy for our big strategy; and then of course within our division we have our cyber strategic plan. We work hand in hand with the other federal civilian executive branch agencies, so we're actually responsible for protecting them. We work together with state and local, especially through our cybersecurity advisors. So if you know that CISA actually has 10 regional offices, which is awesome, then you may know the cybersecurity advisors there, the state advisors in each of the different states. I believe we're in region nine right now, so I don't know if we have any of our region nine colleagues from this area. And then we do a lot of collaboration with the interagency, so the State Department, FBI, NSA, the intelligence community, the Department of Defense, to help defend and of course collaborate. So what are we working on in the next year? I mentioned these three major things that we're doing. This is really from our cybersecurity strategic plan, which of course flows from the national cybersecurity strategy as well as CISA's strategic plan. But let's talk about what addressing those immediate threats means. We wanna increase our visibility and we wanna increase our ability to mitigate the threats through data, collaboration, excuse me, and the cyber response and recovery fund. I apologize, I'm getting over a cold, so I don't know if I can continue. But I will just say: hardening the terrain, understanding how the attacks occur. And then driving security at scale. This is a really important one for me. I'm working on secure by design. So Mikayla mentioned this a lot, as well as the open source software security ecosystem. We are giving a talk tomorrow on secure by design. Please come, please come look at our draft and mark it up.
We brought CISA-branded red pens for you to do so, so we're really excited for you to come and take a look at it. And we're looking at emerging tech and workforce as well. So those are the major things that we're doing, and I'm gonna pause there because I don't wanna go on forever, but thank you. Thank you so much. Can we get a round of applause for our panel? And we still have a bit of time, I believe, Maurice. So if you have any questions: we covered a lot of ground on US policy, and that was the point, right? It was a US policy 101. And so you got to see a broad range of things. What do you think? What is left out? What do you wanna know more about? Please, now is a good time for your questions. And as a reminder, go ahead and step up to the mic so that everyone can hear the question. So my question. Oh, no. Oh, no. Do I just need to talk really loud? Does that one work? Check, check, check. Check, check, check. Anything? I can go over there. Okay, let's start. Go ahead while we're waiting. Oh, well. All right, so my question is on harmonizing some of the regulations and things like that in higher education specifically. There's what I would call a disconnect between the research side of things and the education side of things. On the research side, we functionally work as a federal contractor for the purposes of DOD grants and things like that. And the side that deals with educational data and student data is entirely separate. And I know there are talks of changes in regulation in that area. I was wondering if you could talk a little bit about those two areas and the possibility of bringing them together a little more, and sort of allowing us to utilize our efforts in both areas rather than having the research security people and the other security people waving at each other across the way. Great, I can jump in, and then if anyone else wants to chime in as well, feel free. Really appreciate your question.
I think it illustrates why regulatory harmonization is so challenging: it is sector specific in many cases. Academia may be very different from the types of regulatory harmonization that's needed in the water sector or utilities. One of the things that we are hoping to do with the request for information, the RFI, is to pull out some of those sector-specific nuances so that we can engage both with independent regulators and with the sector, with private sector entities or universities, on those particular questions of how we ensure that regulations are not conflicting with each other, or overlapping, or hopefully not discordant at all, and where the gaps are. I think you mentioned that as well; sometimes there are gaps, and that causes some of the problems. That's very much a part of what we're hoping to do. In the next couple of months, the goal is to solicit information from the general public, solicit information from the departments and agencies that we work with, and come up with plans to harmonize regulation in each of those sectors. So I'm happy to talk with you more afterwards as well, but I definitely encourage you to contribute to that process of informing sector-specific regulatory harmonization. I thought that was a great point that you raised. I had not actually heard about that particular regulatory quirk, but I think it makes absolute sense. One of the things that we're seeing is that more organizations are taking on cybersecurity departments, in governments and nonprofits and companies. We want to avoid having those cybersecurity departments siloed; really, it should be a horizontal, across-the-organization sort of function. And it sounds like what you're describing is a separation between that function and the folks who are working with student data, which is not really sustainable, right?
Student data is some of the most valuable assets that our schools have. So yes, I think that's an excellent point. To Mikayla's point, I would encourage you to talk with folks about submitting comments on that very issue to ONCD. I think we take for granted a lot of times the willingness, the openness of US government agencies to feedback from the population. But there are formal processes where you can submit comments and make a difference, and I do believe that ONCD will read it and will do what they can. Thank you. Did you have a follow-up? No, that was it. You know, we're doing CMMC for our research things, but we're not necessarily doing that for our student data at the moment. So, yeah, that was pretty much it. Thank you, this is very... Thank you. All right, and we had at least one other question. Yeah, I'll try to keep it brief. I kind of wanted to do a concern, then a question, then a compliment, if you don't mind. Coming from a technical IR leadership role, I feel like my job is going to be increasingly about how I can align facts to avoid a material incident. So I'd like to ask what your honest definition would be of a material impact in a generic enterprise, and also just to offer an appreciation for kind of a thankless job that I think you do, because it's not easy. I'll start, if you don't mind. Your initial question, for those of you that didn't hear, was that part of your job would be to align facts with a FOIAable record of the incident, right? Did I get that right? I guess, yeah, if I could use street terms. I mean, I feel like executives are going to want us as incident responders to find anything we can to avoid calling something a material incident, because the SEC says you've got four days now. As soon as an incident pops off, you've got four days before you've got to report it.
So the SEC's regulation is unique insofar as it will go public. To be clear, the other incident reporting regulations that are out there, almost all of them are confidential. For CIRCIA, which is a major one for critical infrastructure owners and operators, and when we say critical infrastructure, that's a big, big chunk of the economy, those reports actually are exempt from FOIA, and there's a fair amount of anonymization that is supposed to take place when you submit an incident under CIRCIA. The SEC's rule is completely different, as you know, and is public. And this is something that the security community raised repeatedly with the SEC as being potentially problematic, but they went ahead with it anyway, because their mandate is actually not to enhance security; it is to provide information to investors. That is what it is. And you raise a good point: it is not actually even just about incidents, it's about your entire security program, your entire security posture, because those are things that have to be disclosed under that same regulation. I think that where it will move is not to deny facts. Because you're going to be releasing it publicly, there will be pressure to dress up the facts in a way that is better for public consumption, right? But that doesn't mean being false about it either. I think what it means is looping in your legal team and your corporate communications team earlier in the process, so that when you do make that disclosure, you're accurate, but you're also clearing the message that you want to get out to the public, because now you're playing to a different audience than if you were just submitting to, say, CISA under CIRCIA. So I don't think it's about being false; I think it'll be about more cross-collaboration with other arms of the business. And if I could just jump in on that, I don't know if you guys can hear me.
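For reference, the "four days" the questioner mentions is the SEC's cybersecurity disclosure rule (Form 8-K, Item 1.05): a public company must disclose within four business days of determining that an incident is material, not of discovering the incident itself. As a rough illustration of how that window runs, here is a minimal sketch; the function name is my own, and it ignores federal holidays for simplicity:

```python
from datetime import date, timedelta

def sec_disclosure_deadline(materiality_date: date, business_days: int = 4) -> date:
    """Return the last day to file, counting weekdays after the
    materiality determination. Simplification: federal holidays ignored."""
    d = materiality_date
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4; skip Sat/Sun
            remaining -= 1
    return d

# Materiality determined Wednesday 2024-01-03: Thu, Fri, (skip weekend), Mon, Tue
print(sec_disclosure_deadline(date(2024, 1, 3)))  # → 2024-01-09
```

Note the clock starts at the materiality determination, which is why the panel's point about looping in legal and communications teams early matters: the determination itself is a judgment call made before the filing window opens.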
This idea of secure by design is super important, right? You can advocate for that, because if you're the ones who are kind of getting, you know, not in trouble, but you're concerned about that, again, let's shift that burden way, way left, right? To the manufacturers of that software, who can ensure, or at least work to make, these products secure so that there are fewer attacks, right? It makes your job easier. So, you know, let's generate that demand from our vendors as well. All right, thank you. If we have time, let's do one more question and then end it. We have like three people that want to talk. We can take more questions. Let's do, oh my gosh, we have a whole bunch. We're gonna do this forever. Go for it. You know, different device types, network devices, medical devices, election devices, IoT devices, have a lot of similar challenges in securing them. It's wonderful to see the progress that's been made in the medical device field over the last 10 years. How are agencies working together to learn from overcoming some of the challenges there? What should we expect in the next two, three years as far as collaboration? On medical devices specifically? Across all device families, really. I mean, we see medical devices and election devices, for instance; the risks are high in both, obviously. How are your teams collaborating? Yeah, I think that, similarly between the two communities, when security research starts, when the attention shifts to your industry, everybody gets a little scared, right? Nobody knows what it is before the collaboration starts between the security research community and the manufacturers and the policy folks and the implementers. And so we've come a long way in terms of that education and building those bridges. What I will say is we do want to learn from what works.
So when I started at NASS, and that was their first time bringing on someone with a cyber focus, I engaged very quickly with I Am The Cavalry, because we knew their history with the medical community, and Beau Woods is a good friend of mine. We work very closely with them; we've brought them in to talk about success stories, and we have some NASS folks here whose first stop was the Biohacking Village, to learn how it looks pretty different from the Voting Village, right? And so there's still a ways to go, and obviously the manufacturers play a big role here, and I don't speak on behalf of the manufacturers, but there is a lot of work being done with them too. So I think we've come a long way, but we recognize that there's still a long way to go. To your point, though, we absolutely want to look at those success models, like the medical success model, and learn from it and try to implement what is similar, right? Because there are some similarities, and there are also some key differences. I think an important difference with medical devices, and you heard Suzanne Schwartz talking about this earlier, is that they have a specific regulatory authority on cybersecurity, and we don't really have that for general IoT devices in the United States. That is changing overseas; we talked a bit about the Cyber Resilience Act, and there are a few other things happening in Europe that will, I think, require secure by design as well as post-market cybersecurity actions, which will potentially lift all boats outside of Europe. But that is one key difference. Another thing to point out is the trust mark that we talked about earlier, the IoT trust mark. There was movement in and out of the room, so I'm not sure if you were here, but this is one way that I think the U.S.
government is laudably trying to extend this coverage to other IoT devices beyond just medical devices, to have at least a demonstrated baseline of security. And then, to some extent, it is going to be up to consumers. It'll be up to consumers to vote with their wallets, show that security matters to them, and buy the device that has the trust mark. Yeah, and just to jump on that too: we know that there are challenges, especially in, say, OT and those devices, as far as backwards compatibility and of course the massive investment that has to last several years. We know that through a lot of engagement with that community, and so as part of our secure by design work, we are engaging with a lot of different manufacturers on both the software and hardware side, because we want to understand that, and of course we also want to understand the consumer side of things too, understand those investments and those needs. So it's sort of a long journey for this understanding, but it's something that we are prioritizing. Thank you, great. We had a couple of other folks that wanted to ask questions. Let's go ahead and get two more questions. Hey there, thank you guys for holding this panel. I came a little bit late, so I apologize if this was already covered, but I have a question. For the medical devices, and also the MSPs that you mentioned for election security: are there plans or current measures in place for when you have an MSP who uses a subcontractor or another vendor, or a vendor who uses a vendor who uses a vendor? What kind of measures are in place to make sure that the vendor inside the vendor inside the vendor is compliant or upholding these regulations? Because what I would be particularly concerned about is kind of a game of telephone, where the vendor says yes, we're compliant, the manufacturer just takes that at face value, and there's not really any investigation. So can you speak to that?
Yeah, so one of the things that we're thinking about on the software side is a software liability model that can help address some of those challenges, the kick-the-can dynamic, whichever the analogy is. We know it is a very challenging topic to address, both from a practical standpoint and from a legal standpoint, so that's one of the things that's in the National Cybersecurity Strategy and that we anticipate working on for multiple years. That's not something that's going to be addressed within the next year or so, but the intermediary steps that we're taking to determine what that model should look like include a lot of engagement with folks who are in this room and in other spaces as well, to understand what some of the potential downstream effects are if we make certain liability shifts. Just very quickly on the MSP side for state and local government: there are a lot of state efforts happening related to what the requirements should be for MSPs who work with state or local government. I'm not an expert on those by any means. The one thing I'll say is that the Louisiana Secretary of State's office was very involved in that legislative process, and now in the implementation process, for what they did after the 2019 ransomware incidents in the state of Louisiana. The CIO for the Secretary of State's office will be on the panel later today at five, and he's going to talk a little bit about that. In most states, the Secretary of State's office wouldn't necessarily be leading the way on that. But what I will add on the local government side: I mentioned that a lot of local townships, municipalities, and counties will work with an MSP, and it's incredibly complex. But one thing being done on the election side is lowering the risk by asking: what is that MSP actually doing, right? And what actual risk does that pose to the elections, right?
Often it comes down to just IT operations for the county clerk's office, or whatever it may be, and certainly there's some risk there, but maybe it's to public-facing websites and things like that. They're putting things in place to isolate the IT systems on which the election really relies, right? Absolutely the actual voting systems, the vote casting and tabulation systems, but additionally the voter registration systems and those other IT systems that really are integral to the process. Thank you. Thank you, and I think we have time for one more question, ma'am in the back? I know you've been coming up to the mic, do you wanna do it? Yeah. Okay, all righty, thank you. So as more regulation gets added, usually the cost of implementation goes up, and a lot of companies can't afford it, so it pushes a lot of players out of the market, and it stagnates the velocity at which products can get released to the market. And that causes a lot of repetitive work to be done among companies, where everybody needs to have a SOC team, a security team. So I was wondering if there's any initiative for a security framework that would allow companies to specialize in each one of those aspects, and standardize security controls, in a way that lowers the barrier to entry for companies. I'll take that just to say: this idea of baking security in right from the design phase. We believe, and this is part of our principles of secure by design, that it needs to be a business decision, right? And we know that there are a lot of different methods, like reuse, for instance, having certain components that are already done, right? And approved, and things like that, for use in other parts of the product. But truly the business leaders, the CEO and that level, are the ones that need to really push that in their company. It can't just be the security team, or the chief technical officer, or the CISO; it has to come from the top.
And we know that this is going to be a shift, because that decision might be relatively new, somewhat novel, but we think, too, it's very much like quality, right? And again, we know that that is a business decision that comes from the top as well. Just the one last thing that I'll add is that the point of regulatory harmonization is to actually streamline things and hopefully reduce costs. That goal is part of the RFI that we've put out on regulatory harmonization: so that we don't have conflicting regulators, where you have to comply with one over here, and then do the exact same process, spend a lot of money, and do the exact same thing over here for a different regulator. Reciprocity is a big part of that, so hopefully we get to a point where one regulator will accept the results of an assessment, and the other regulator will be able to accept those results as well. The goal of that is to reduce costs instead of increasing them. And with that, let's go ahead and please thank our panelists. I think it's been very informative. And please do also scan the QR code and leave your feedback. I promise it is not a Rickroll, but you know, it's also DEF CON, so you gotta be a little bit careful with the QR codes you're scanning. Thank you.