Thanks, Sonia. Now, Robert, I want to come to you to talk a little bit about the flip side of that, thinking about some of the perils, perhaps, of introducing new technology systems and platforms, and how those technologies can introduce new privacy and security concerns that could actually negatively impact patient health in some ways. Could you speak to that a bit?

Absolutely. I'm happy to be the tech boogeyman today, a little bit. I was actually just at a talk that was right after lunch, so that audience was in a food coma; this one is before lunch, so you're just hangry. It's a nice switch for me personally. But I think you can best illustrate this point with a story of mine. For a little bit of context, before my co-founder Nick and I started Protenus, I was a medical student, also in East Baltimore. One of my real clinical interests was working with HIV-positive patients, and I worked in an HIV clinic for a lot of my clerkship time. One of the things I noticed there was that a lot of patients were very, very hesitant to be forthcoming. They would ask questions like: Where are you putting this data? Who can see this information? When you're putting that into the computer, where is it going? I heard it a few times, and then I heard it over and over again. I also saw this with psychiatric patients, so people who had sensitive diagnoses of one sort or another. And I really began to dig into that question and just ask: How are we protecting this data? What's the current state of this information? You start to realize two really terrifying things when you just scratch the surface of that problem. The first is that the basic cybersecurity hygiene and protections we have in healthcare are probably roughly five to ten years behind industries that handle similarly sensitive data.
I mean, really, I would say terrifying gaps with regard to how we protect that data from what you might think of as a traditional external network-based attack. This is improving in some ways, but for a little bit of context: while comparable industries probably spend about 8% of their budgets on this type of problem, most places in healthcare budget about half a percent. So it's a pretty bad situation. Those numbers vary and they're difficult to get, but that gives you a little bit of context.

The other thing I began to realize, which will be intuitive if any of you happen to be clinicians or people who work in a medical record, is that the real problem for a lot of health privacy is insiders: people who already have access to the electronic health record. And in healthcare, we've got two really big trends that collide. On one hand, we're opening up the data to all these different sources. We're creating all these new linkages between individuals and between institutions. We've got health information exchanges. We've got increasingly interoperable systems in our communities, and that's really great. As a student, I saw all the time how important it was to be able to get the complete medical record. But simultaneously, there are almost no controls over who can access that information. When I was a medical student, if, let's just say, a high-ranking DC VIP came into my institution, there would be essentially nothing stopping me from taking a look at that person's record, and likely no way for anyone to know that I had even been in that record. You can imagine that as a first-year, as a medical student, even in many cases as a volunteer. And this is essentially every hospital in the United States; there's no exception. In fact, the place I was at was really advanced in this regard, and this was still the case.
And so what ended up happening was that my co-founder and I started to realize, hey, we think there's a much better way to do this, and that's how we embarked down that path. I used to be what I guess is called a quant at a hedge fund, Bridgewater Associates. He used to be in the intelligence community and is a former Green Beret. And we said, look, in finance and in national security this can be done very differently, and that's how we ended up tackling the problem. But just to give a little context of what you see on the ground: things are getting better, but we've still got a long way to go. That's some of the work that Dylan, Ian, and I are doing right now, along with many others at New America: trying to move beyond where we're focused right now, which is stemming the gaps and putting band-aids on things, and instead actively articulating a more proactive vision for where our industry should be in five years, a more constructive look at the future.

One of the questions that strikes me, thinking about how much access everyone in the ecosystem has, is: why? Why would a first-year medical student have access to, say, a high-ranking DC VIP's record?

Yeah, it's a great question, and it comes down to two things; one is a really good reason, and one is not so good. The first is that there's a real culture of open collaboration in healthcare, and that stems from two areas. One, it's a very collaborative, very exploratory, very academic community, especially at many medical centers, and so people want that openness and sharing and teaching.
But then two, you've got the emergency-situation problem. If someone comes into the ED and you've got to push a particular drug, and you don't have access to their allergy list, you don't know whether that drug will kill them. Blocking access in that moment with traditional role-based access controls is a lethal cybersecurity decision. So essentially, healthcare systems decided: look, we'd rather accept the insider threat than have that person die right there while we were helpless to do anything, when all we needed was to look up a piece of data. Certainly the technology to restrict access exists, no one would deny that, but the structures to do it well are really hard. And that gets to the second piece, which is that we frankly just don't understand healthcare workflows well enough to permission people appropriately. What does a nurse really do? They could be inpatient, outpatient, research; they could be in an oncology ward; they could be in the OR. All of those involve completely different contexts, different types of patients, and different ways of using the electronic health record. If you think about it, you probably have the equivalent of millions of different roles in a health system, even if there are only tens of thousands of people. So a lot of your basic security paradigms for protecting data inside an institution just completely fall apart, which is why more advanced, behavior-based analytics are emerging as an alternative to that approach.

It's clear to me, hearing from both of you, that there's a balance to be struck in leveraging these technologies to improve patient health outcomes while being mindful of the risks that they introduce.
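[Editor's note: as a rough sketch of the behavior-based approach Robert describes, the toy code below never blocks an access (the break-glass principle for emergencies) but flags accesses that deviate from a user's own history for after-the-fact review. All names, the log format, and the threshold are hypothetical, not Protenus's actual method.]

```python
from collections import Counter, defaultdict

# Hypothetical access log: (user, department of the record accessed).
ACCESS_LOG = [
    ("nurse_a", "oncology"), ("nurse_a", "oncology"),
    ("nurse_a", "oncology"), ("nurse_a", "oncology"),
    ("nurse_a", "vip_ward"),  # unusual for this user
    ("resident_b", "emergency"), ("resident_b", "emergency"),
]

def build_baselines(log):
    """Learn, per user, how often they touch each record category."""
    baselines = defaultdict(Counter)
    for user, dept in log:
        baselines[user][dept] += 1
    return baselines

def flag_anomalies(log, baselines, min_share=0.25):
    """Flag accesses to categories that make up under `min_share` of a
    user's own history. Nothing is ever blocked: every access succeeds,
    but outliers are surfaced for review, rather than pre-defining
    millions of static roles."""
    flags = []
    for user, dept in log:
        total = sum(baselines[user].values())
        if baselines[user][dept] / total < min_share:
            flags.append((user, dept))
    return flags

baselines = build_baselines(ACCESS_LOG)
print(flag_anomalies(ACCESS_LOG, baselines))  # nurse_a's vip_ward access is flagged
```

The design choice mirrors the trade-off in the panel: because denying access can be lethal in an emergency, the control point moves from prevention to detection.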
So rather than have one of you play the cheerleader and one of you play the boogeyman, can you think through critically where that balance is, and how we can be mindful of the risks while also leveraging the technologies at the same time?

I would really echo that patient-involvement side of things. I would maybe even take it a step further and say that we need to start having mechanisms of transparency, where consumers of healthcare can really understand the cybersecurity and privacy posture of the institutions they're going to entrust their data with. Trust is a two-way street, and right now we don't have that transparency. Everyone here has gone into some form of doctor's office, and basically they hand you a stack of ten papers to sign, essentially giving away the rights to some of the most sensitive pieces of data in your life. Unless you've undergone something like a top-secret SCI clearance investigation, that's basically the only thing I can think of that's more invasive than a medical record for many individuals. So when you think about that, the question becomes: okay, institution, what are you going to give me back? What technologies are you using? What cultural elements are you using to protect my information? I think that's a huge piece we can implement.

A second thing, and I see this a lot, having been a clinical researcher as well: we always talk about AI and all these other technologies and buzzwords that have lost all meaning to me at this point, having been in the field for a decade now. We always talk about these technologies in terms of how we use the data to improve outcomes and do more clinically focused analytics, but a lot of those same tools can also be used to protect data in a variety of ways.
And I think we just need a parallel track of investment and thoughtfulness about using these sophisticated techniques to defend our institutions as well as we use them to improve patient care, which should come first, I agree, but the defense really does have to happen in parallel.

The final question that I have relates back to Emma Coleman and Dipayan Ghosh's panel about bringing technologists to the table. I'm wondering if you can speak a little more concretely about bringing technologists into the healthcare space and encouraging those collaborative dialogues between patients, technologists, and policymakers, and how we leverage public-private conversations to really deliver the best outcomes. And maybe, Sonia or Robert, if you've seen an example of that in your own work, can you speak to what it looks like in helping deliver the best patient health outcomes?

Yeah. Having been in health IT specifically for about the last five years, I always think about it as a two-part problem: it requires bridging the gap between two mindsets. On the healthcare side, a culture of no has emerged around technology. It's all: let's protect patients, let's protect patients, so we can't do something new; we're going to do it the way we've always done it. This is beginning to shift in some ways, but if we're really going to protect our populations and the long-term health of our nation, we can't keep doing it the way we've always done it. We need to move from no all the time to yes, but let's be thoughtful. That's a really important cultural shift inside of medicine. And simultaneously, I have a lot of friends on a certain coast who, when they go into healthcare, are often just focused on: let me disrupt whatever I can. I'm here to disrupt.
I'm just disruption as a service today, right now. I was talking to a healthcare CIO at a conference, and he said: the last thing I want to hear is that someone is going to disrupt my hospital. Okay? That is not a word you want in a clinical workflow. Does a surgeon want disruption in their OR? No. So I think we have to start thinking about the nomenclature we use, the thoughtfulness, and the ways we enter that space as technologists, being respectful of both the cultural norms and the unique challenges, because healthcare is definitely playing the entrepreneurship game, or the policy game, on expert mode. We both need to come to that understanding and concordance, and I think it's events like this that help build those bridges, which are just so, so important.

Well, I hope you'll join me in thanking Sonia Sarkar and Robert Lord for being here.

Thank you, Dylan. Thanks, everyone.