Welcome everyone joining us today for this critical conversation around the intersection of cloud computing and healthcare and life sciences, specifically focused on the secure and sensitive nature of much of the information flow that happens in those types of missions. This panel will discuss recent use cases highlighting best security practices for cloud computing in healthcare and life sciences. Today we're going to focus on three main takeaways, our four panelists will provide insights on those areas, and we'll also be doing live Q&A, so roll up your sleeves and get engaged with us. The first area we're covering is the benefits of using cloud for healthcare and life science data and analysis, including scalable resources, ease of collaboration, amalgamation of data, the use of AI/ML applications, and replicability of research. The second is building security and privacy needs into platforms as well as every layer of cloud-based projects. And the final one is the implications of security for the data and IP of individual biomedical projects and organizations, not just specific products and capabilities, but also the broader bioeconomy and biosecurity. We are so lucky today because we have four of my favorite people. Matt Hazlitt from FDA's CDRH is our FDA regulatory guru. Welcome, Matt. Andrea Matwyshyn, our law, policy, tech, and cyber guru from Penn State, is really going to be talking to us today about legal and policy issues. Dan Prieto from Google is our cybersecurity guru, and finally Michelle Holko, an HCLS SME from Google, is our biomedical research guru. So with that, welcome to our session today, and as you can tell, we're excited, so we're just going to jump right in. So Michelle, let's level set. Help me understand, without going too much into the basics: what is cloud computing, and what are some of the benefits of cloud computing?
What are some of the trends around use and adoption in the healthcare and life sciences sector? Yeah, absolutely. Thank you so much, Alexis, and thank you to all of the panelists for being here and to everybody who's joining us today for this discussion. This topic is near and dear to my heart because I recently joined Google Cloud, but I have been in the healthcare and life sciences space for a long time and have actually used cloud-based platforms to do data and analytics in the biomedical space. So basically, cloud computing is just leveraging on-demand computing resources, storage as well as compute, so that distributed teams can share resources and build in economies of scale. And there are a lot of reasons why cloud computing is being adopted in the healthcare and life sciences space. I'm going to talk through a few of the use cases that we'll go into in a little more detail throughout this discussion, but one of the driving factors is cost. It's actually a lot easier for young organizations, as they're getting started, to spin up some cloud environments instead of investing in a lot of overhead and equipment. Another really big one is scalability, especially when you're looking at data types that take a lot of space to store as well as to compute across, including images and genomics data. And a third really important reason is ease of collaboration. One of the enablers for machine learning and artificial intelligence-based studies is that you have to have a critical mass of data in order to make those types of studies possible. So being able to bring all of those types of data together across different organizations and different laboratories is really an enabling factor. And going along with the collaboration piece is reproducibility of research.
There's some crazy number, I think it's something like 30 to 50 percent, of research that's actually not reproducible. Cloud actually helps remove that barrier: if your data is in a cloud-based system that researchers and reviewers can access with the tools that you used, then they can go in and look at it, tweak some things, and see if your research results still make sense. So those are just some of the reasons why we're seeing an increased transition to the cloud. Just to give you a few examples of what's in scope for this discussion, because it feels like a very broad space to a lot of us, it's comforting to think about which areas we're going to consider today. One is certainly the biomedical research space, which I've already alluded to. This can be in the clinical, epidemiological, as well as biomedical or molecular space, including omics data. Again, part of the reason this is important is the data types: genomics and image data can be very compute-heavy as well as storage-heavy. Another is real-world data or person-generated data types. These are oftentimes coming off of devices that are being used either in the healthcare space as medical devices or in the health-adjacent space, and fitness trackers come to mind. Instead of generating a single data point, these devices are generating continuous streams of data. So it's actually a very interesting space, because there's a lot of discovery to happen in terms of how those continuous data streams relate back to the physiological state of the individual.
So there's a lot of interesting promise there, but it also means that it's a lot more important to make sure that those types of devices are secure, and that those data streams, as well as the analytic algorithms that are doing the translating of the continuous data into real-world insight, are also secure and protected. Another area where we're seeing a lot of cloud adoption, as I think was previously alluded to, is biotech startups: new organizations coming out of academia or other companies that, again, don't have a lot of overhead yet. And certainly we're seeing a lot of this in the time of COVID, where people are working from home. Maybe they have a work space at home where they're doing a lot of their data analytics, and the cloud makes that really easy, so that you don't have to have a lot of hardware on site. And then finally, hospital systems and EHR (electronic health record) data: moving from paper-based systems into a fully digital system, and then all of the areas of interoperability. This feeds back into the research question, because oftentimes different data types have to be combined in order to really yield insight. So it's about being able to build interoperable systems for electronic health record data that can then be utilized with the biomedical and molecular data. And that's a basic summary of where we're focusing our discussion on cloud security in this space. Now, I appreciate that. And I think one of the things I'm really hearing is that we have the potential to be sharing more information than ever, really to move at light speed. But we've got to be careful as we do that, for all of the reasons that you mentioned. I want to push this to all of you, so anyone jump in that wants to.
But what are some of the specific security challenges at the intersection of cloud computing and healthcare and life sciences? Are there examples? What can we learn from these in that idea of moving at light speed, but carefully? Dan here. I think a couple of things. When you think about security as it relates to healthcare and life sciences, a couple of regimes come into play, and I think this will be a good setup for Matt and Andrea. When you think about plain old information technology, let's not think health for a minute, a lot of times you're thinking about computers and data. But that body of cybersecurity best practice can often be treated as separate from a privacy corpus of rules and regulations, particularly if you're not dealing with customer data. In the health and life sciences space, those two regimes automatically overlap, and overlap significantly. So you need to think about security and privacy at the same time. That also brings to the fore how you think about the regulatory regimes: there's HIPAA, there's SOC 2, there's HITRUST, and all the best practices that are out there as a general matter for anything healthcare related. I'll leave further discussion of that to Matt and Andrea after my second comment, which is that Michelle did a really good job of pointing out the different business models in which cloud, and therefore cloud security, is implicated. There are traditional, long-standing models, health records for example, where what is afoot on the technology side is still a long-term attempt to transform away from paper records.
So there's a huge piece that's related to digitization: consuming this mound of paper that you existed on previously and moving to a new model where you both have the old paper, which is now digitized, and also have structured digital data from inception through more user interfaces, patient self-scheduling, and more electronic health records. You're inserting the cloud into the middle of an old business model. So you need to ask the security question relative to what is going to be a decades-long-standing set of security practices related to the old stuff, and that is going to be an enormous culture change. In other models, for example the Internet of Things, it's new medical devices, it's an adjacent thing like a Fitbit, or it's a glucose monitor that is constantly sending telemetry off so that your doctor, your patient care provider, and your nurse can see the data and help you manage, for example, your diabetes or some other chronic disease. Cloud is native in that business model. The device itself can't store the data, so automatically that data needs to be securely sent to a repository in the cloud and read by a healthcare provider from that repository in the cloud. So you're dealing with fewer legacy security architectures, but now you're not only dealing with the security of the cloud where the data exists. There's a whole host of other questions people have focused on around the security of the device itself. Where was it made? What countries was it made in? Where is the software inside of it coming from? How do you do updates? And things like how you do updates are a particular concern, given the breaches we've seen on the supply chain side: software updates end up being the vector for malicious code to enter. And when you get to devices like that, it's no longer just a discussion about whether the data is secure.
It's a question as to whether the device is actually doing the right thing. Because there's a risk, if data is tampered with or incorrect, that all of a sudden a device that's doing dosing for you gives you the wrong dose, or, if it's a pacemaker, sets your heart rate at the wrong rate. So it's actually the equivalent, for the physical body, of the kind of control systems that we've seen be of concern as relates to utilities: water utilities, electric utilities, the shutdown of the meat processing plant, which started on the IT side. You're starting to see concern about control systems. And then in the next model that Michelle brought up, the security aspects are slightly different again. If you have researchers collaborating on a shared data set, how do you ensure that the people who are contributing to the data set, using the data set, and copying portions of it are the right people with the right access? It really raises the potential there to implement things that have been at the forefront of IT security of late, like zero trust. Do I know who you are? Do I know what device you're using? Do I trust those things when I make an access decision for you to actually access the data? Do I authenticate properly? And is all the data properly encrypted, so it can't be tampered with, and so that if it's exfiltrated, it does not create a data spill that might be a privacy issue or jeopardize the overall research agenda? So I think the security pieces you look at will very much ebb and flow, or at least be distinct and different, depending on which business model and which value chain you're talking about, because in each of those instances the cloud sits in a relatively different place. Thank you for that. Matt, do you want to follow on that? Sure. So definitely from FDA's perspective, when we're looking at medical devices, we're looking at cybersecurity through the lens of safety and effectiveness.
That's our regulatory mandate. That's the way we evaluate medical devices. So when we're looking at medical devices that are incorporating a cloud environment, we're looking at how the cloud environment can impact the safety and effectiveness of the end device. Some of that is software that lives in the cloud; there's software as a medical device, and some of those are fully hosted in a cloud environment. Some are implanted devices that are communicating with a mobile application and then transmitting that data into the cloud for further processing, potentially figuring out treatment parameters and sending those back. So there's a broad diversity in how cloud implementations can be utilized by medical devices. And you see a lot of medical device manufacturers adding cloud components into already-fielded devices to increase their abilities for remote monitoring of the device itself and of the patient information that's being transmitted and logged through the medical device. You see it being used for remote support of medical devices, so troubleshooting when there's an issue. You're starting to see it in terms of delivering software updates or firmware updates to the medical devices themselves. And you also see it in terms of leveraging the added computational capability: being able to implement more complex AI and machine learning, and also more computationally heavy things where you want to leverage the power of the cloud environment instead of having a larger system bedside. So there are a lot of different factors, and the cloud brings a very diverse set of risks based on how it's implemented in the medical device, including whether it was something added to the medical device after the fact. A lot of people think of these cloud environments as essentially just someone else hosting a server for them that exists in some ether.
In reality, when you look at some of these cloud environments, they're incredibly complex in terms of the different services cloud service providers offer, both security services and additional computation. Look at the key management services that can be part of authentication schemes or provide signed software updates, or the load balancing on the front end to make sure that your environment's protected. All of these things have a lot of complexity and a lot of weight that needs to be considered. So as you see this advent of new startups migrating into the cloud, it's a very complex environment to actually integrate into a medical device platform. I love the fact that you really brought it home for me on a personal level. I had two family members who were both dependent on medical device implants. It was really interesting, because this was unfortunately at a time when I think we didn't have the advancement or the computational power or even the security that we have now. But their lives were very dependent on those. And I think everyone's really brought up why this is an exciting time to be moving at warp speed with that level of carefulness. Andrea, over to you. Let's talk about law and IP. What are the legal interests that exist in databases with PII sets? What are the types of concerns people building and caring for these systems should think about? And what might keep them up at night? Well, that's a really interesting and complicated question that will take decades to resolve, for better or worse, in minutiae. But the short version is that there is established law around what constitutes, for example, a copyrightable interest in some of these kinds of situations. So let me give a quick intro through the lens of medical devices. Increasingly we're moving toward the world that you've already heard about.
I call it the Internet of Bodies: the idea of devices not only being attached to human bodies but also implanted in human bodies, or, in the third generation, talking to the cloud, right on point with this panel, with potentially live feeds. That's the mental model, no pun intended, that some companies in the Valley are using in building their recreational augmentation brain implants, or that you see already in use in medical contexts in the treatment of certain diseases or conditions such as Parkinson's, or in interesting experiments underway to alleviate other kinds of brain-connected challenges that patients have, with live data streams that are, of course, footnote, reliant on internet access reliability, among other things. But that's a panel for another day. So when we look at these databases, whether they are local or remote, and the actual information contained in these databases, the way that a court would probably analyze these things in the first instance, and this is where the law is going to develop, definitely starts with the question of exactly what is protectable in these aggregations in the first place. To answer that question: the idea of copyright is that there is an idea embedded in a tangible medium. Is code a tangible medium? Yes, it is. But the actual information starts to get a little dicey. Facts themselves are not copyrightable. What you can copyright is potentially a creative arrangement. But what does it mean to be creative enough? Well, the leading case on point is Feist Publications v. Rural Telephone. And what it told us is that the white pages, for example, even though it took a lot of effort to aggregate all of that information, that extreme sweat of the brow, that's not enough by default, out of the box, to give you a creative arrangement.
So when we look at, say, dropping all this information into a spreadsheet or into a searchable database, that in itself, I would guess, is probably not going to be enough for courts to find that there's a protectable interest just in the information. Now, as we start to talk about derived information, with each subsequent layer of derivation I think you arguably have a slightly stronger case for copyrightability of those derived databases that are then creatively arranged and used in those ways. But the other piece of this that I still see done wrong all the time in legal contexts, even by otherwise sophisticated lawyers, is that the transfer of use and the licenses of these copyrights need to be written in a very particular way. There was a case called New York Times v. Tasini that talked about the fact that the bundle of rights comprised within the notion of copyright has a special digital component. So, for example, the digital rights to use need to be transferred separately in any kind of copyright assignment. It's a little drafting point, but people still get it wrong all the time. So you'll have this battle over what is copyrightable, and then you'll have a battle over whether the right to use has been appropriately granted. And when we look at whether that right to use has been appropriately granted, we will increasingly see battles over dignitary interests, particularly in a world where end user license agreements are longer than War and Peace. See a fun project called EULAs of Despair, which my Penn State PILOT Lab is currently finishing up, that creates lovely visual models of many of the most, shall we say, expansive EULAs on the internet.
So when you get into a world where the licensing is predicated on this fiction that people understand what they're agreeing to, and then you have maximum repurposing of this information in ways that are not necessarily foreseeable to the person clicking yes, and you have no proof that anyone's actually read it, and it's longer than War and Peace, and so on, we start to get into a situation where the law is going to say, okay, brass tacks: nobody's reading this, nobody understands what's going on, nobody's meaningfully consenting to this. Particularly in a medical device context, in a world where there's health-ish data being processed in the cloud, you start to have real concerns that in those instances we're potentially moving to a world where we border on maybe doing less to help people and sometimes doing more to put them at risk, depending on what the terms are and how things play out. And I should caveat all of this, as I did at the beginning: all of these opinions are mine and do not reflect any federal agencies with which I work. There are multiple agencies; none of them have approved these comments. This is all me being a law professor, spouting off. Okay, so that's the lay of the land, particularly as we get into a situation where there are multifunction devices, where there are opportunities for companies to create devices that are medical in one use but not medical in another use, or where they claim it's not medical. So let's imagine a world, and please don't let this device exist, where your artificial pancreas also streams music to your ears. One is clearly a medical use. The other one is not a medical use, and one that I really hope nobody has thought of. So please do not make this device. Please don't. But anyway, that's a non-medical use.
So we have there a really dumb example of a multifunction device. But in that world, you have multiple kinds of data streams being aggregated together, potentially by a single provider, and repurposed in various different ways. And then, when you have database mergers across fields of activity, you start to see health-ish data showing up in credit-report-ish information. And this notion of a social-credit-monitored society that's connected to devices embedded inside our bodies, that's not the best world we can build. I'll leave it there, and we'll come back to this hopefully later on. I appreciate that. And actually, to your point, my family member was suffering with Parkinson's, and with the implant, I remember very explicitly a long list of things for us to sign as they were going under that surgery. And to your point, we had no idea what we signed. We just had hope, right? So I do think it's really critical. Dan, over to you. I think you've kind of whetted our appetite around a lot of the different elements of security. But since we're on devices, let's talk about the device as an element of the value chain that includes data, source, storage, and components. What do we need to be intentional about when we think about the value chain and the role of devices? I think you need to be intentional about the aspects of the security life cycle. We talked about zero trust, and that really is about how you protect the device and protect the data. That occurs through all the things we talked about in passing at the beginning, which is basically strong identity, multi-factor authentication, and role-based access, not just role-based access for the person, but for their machine. Is the device in a good state? What's the device posture? Then there's the encryption of the data itself. We also layer things on top, like default data loss prevention.
So if, for example, a device or a cloud environment is leaking sensitive data, that's something we can notice quickly and automate the blockage or masking of it. In addition, for stuff that sits in Google Cloud, I would want people to have confidence in the infrastructure we've built: it's a proprietary global network, we're very attuned to how we manage our own hardware and software supply chain, we build a lot of our own infrastructure, and we lay our own undersea cables. So all those things I mentioned, strong identity, encryption, proper authorization and access, sit on top of a strong global infrastructure. That's the protect piece. I think security in the cloud also gives you other capabilities that are distinct from traditional, decades-old approaches to IT security around a traditional data center. And one of them is really scale and analytics. If you think about the proliferation of medical devices and other IoT devices that are throwing off massive amounts of data, for the most part, a lot of organizations historically have not done a good job capturing, storing, and analyzing all the machine data their organizations are throwing off, and getting insights and making decisions on it. But cloud allows you really massive-scale analytics to get end-to-end visibility into machine data. So, for example, if you're a healthcare provider that's managing hundreds of thousands of deployed glucose meters, heart monitors, whatever it is, you want to be able to see patterns in that data, to interrogate that data, identify anomalies, look for trends, and also proactively hunt in that data to see if there are bad actors or malicious code. And a lot of that is beyond the normal capability of a traditional security workforce, which tends to have an overproliferation of cybersecurity tools and not enough people.
So in those cases there's a flood of data, in fact a tsunami of data. But more data doesn't actually make you better at doing security. It actually makes you worse at doing security, because it obfuscates things. So to counter that, you have to have that massive-scale analytics to proactively look for bad things, to baseline activities, to look for things that are abnormal. And then the data actually becomes your friend. I think it's hard to do that with in-house data analytics platforms. Cloud capabilities that scale up and scale down, and that can make storage and analytics much more cost-effective and democratize that kind of analytics, are one of the things the cloud does for you on the security side from a visibility standpoint. So that's the detect piece; we've gone over protect, and I've just walked through detect. Then, on the recovery side, whether it's electronic health records, a couple of years of data getting thrown off by these medical devices, or a massive research database, we've seen a massive uptick in ransomware. So there's the general risk that people are afraid that data is going to get forcibly encrypted and they can't access it, or can't go back to a valid and known-accurate restore point. And I think that's another area where the cloud can help you: really doing enterprise-wide data backup, so that if you end up in a ransomware attack, you don't have to pay the ransom, because you're confident that you have an overarching data strategy that is tuned with an overarching security strategy, and you know that you can go back to secure points in time, earlier versions of your data that are clean of malware and haven't been tampered with. It's basically strong and confident secure restore points. So again, I've hit protect, detect, and that's the recover piece.
And I think the cloud helps on all those fronts. Dan, I love that you're really introducing the concept that we have to be ready for more things to happen in the world, and that we have to have some of those break-glass or, in military parlance, go-bag strategies, and not just strategies but tactics, to be able to respond and to not be caught in those positions. I'm actually going to jump, Matt, to you, because I really want to continue this idea that there have been recent malware and ransomware attacks and things like that. Talk to me a little bit about what that looks like. What do you think about that from an FDA standpoint? And then, Michelle, I'm going to come to you, and Andrea, I'm going to come to you, to prod a little more into the idea of the personal impact of a lot of this. But Matt, talk to me a little bit about what Dan said, and also maybe a little bit about how people make decisions around which cloud, and how they go about that. Sure. So definitely from the device standpoint, as we start to see more reliance on cloud-based solutions, and also just networking for that matter, you have a lot more considerations around device availability. If there is a part of the device software that's operating solely in the cloud, you need to start thinking about what happens with the device when that becomes unavailable. The different cloud service providers indicate that they have, like, 99-point-whatever percent uptime. But as we talked about before, with the complexity of those environments, that's more the face-value uptime, not necessarily every nook and cranny that the device may be reliant upon in order to perform its intended use and deliver the therapy or perform the function it's intended to do.
So when we look at situations where ransomware in a cloud instance can impact the availability of a treatment device, you start getting into real patient implications, delays of care, when that reliance on cloud-based data and cloud-based computation is interrupted, whether that comes from something in the cloud itself or from the hospital being impacted by ransomware and severing its outward connections. There are a lot of different considerations for these complex systems of systems, medical devices with all of these reliances, where you need to start thinking about how the end device that's bedside with the patient, or implanted in the patient, is going to respond to those scenarios. So incident response needs to be closely considered: both how the manufacturer is going to respond and address those situations, and what those manufacturers are telling their users to do in that event, and what backup or fail-safe capabilities the device will have when that extended function is no longer available. To your other question around how cloud service providers are selected: one common thing that we seem to be seeing a lot is that the decision around which cloud service provider to use is being made at the C-suite level. These decisions are being made as: we want to use the technology that's trendy, that everyone's using; we want to use a cloud instance with our devices; or we're going to use this one based on some factor XYZ. So it's being decided by the C-suite, whereas in traditional medical device development, you make your set of requirements and you identify how to address those requirements with your device design, whether that's using software or an architecture that the manufacturer themselves provides, or whether they go to a third party.
So much like using commercial off-the-shelf software in a medical device, using a cloud service provider is really no different from FDA's perspective in terms of the response and responsibility resting on the medical device manufacturer for what risks those are imposing on the system. So when you have this backwards-driven, ends-justifying-the-means approach, where you're being told which cloud service provider to use, you're then trying to back-architect the device to address whichever cloud instance you have, because there are differences among all of them: different capabilities, different services, different security services provided by all the different providers out there. So really, you're trying to make the solution that was selected work for you, and then you're having to deal with the validation of that cloud environment on a much larger scale. So there's a lot of different factors at play, but from the regulatory perspective, it's really no different than using off-the-shelf software; it's just a much more complex supplier agreement that you're entering, and a much more complex set of risks that you're needing to manage on the medical device manufacturer side. Well, I love what you're introducing too, this idea that as more things become cloud native, to Dan's point, not all clouds are the same, and that we might be seeing much more collaboration in this area, faster than others, around things like multi-cloud. Because in some ways, if I can be so bold, why would we dumb down the device, or make it worse, just because we have a cloud that in essence only allows us to do so much, versus being able to have a multi-cloud approach, with the complexities maybe that that adds, to allow that device to be as secure or sustainable, or the information or its uptime to be as robust, as you want it?
I think it's a really great point, and I hadn't thought about this kind of subject matter really being at the forefront of that multi-cloud or that cloud-selection process, being quite critical to the performance of what someone actually experiences every day. I think that's spot on. Well, Andrea, take us there, about what someone experiences every day. So what about those of us whose data is in this database? What do we need to be thinking about? When I went back in time and was signing away medical device paperwork and agreements on behalf of one of my family members, help me juxtapose what I should be thinking about, what I should be holding an organization accountable for. So the question of who is in the best position to bear certain risks, that question is certainly at the core of many of the decision-making processes that we face, whether you are a patient, someone deciding on behalf of a patient, or a corporate decision maker thinking through your own risks in terms of operating a business entity. So just to connect quickly with something that Matt pointed out, the question of whether you have the C-suite fully informed of the totality of risks that could exist, that's a key thing that not only security professionals but the general counsel and individual constituents, patients, and other users of products can take time to connect with the organization and inform the organization about. So let me just run through a few key points. One of the common mistakes or stories that I hear from C-suite folks is, well, you know, we have to pick the most flexible cloud system so we can maximally exploit the data in the future in ways we have not determined yet, because we are bound by fiduciary duties to maximally exploit short-term revenue quarter to quarter. Nope, not what fiduciary duties say.
Fiduciary duties say that you have to think about the long-term best interests of the enterprise. The long-term best interests of the enterprise may not be aligned with maximal short-term profit in any given quarter, and they certainly aren't aligned with skimping on, say, cloud security and losing control of your entire set of sensitive information connected to humans who may suffer physical harm in some cases as a result of your choice of cloud provider. So usability is another key piece of this. What we see in cloud situations is that security mistakes often happen because of weak usability testing, and folks making good-faith mistakes because the design doesn't set them up to succeed. So when companies are thinking about cloud providers, and when users are thinking about which companies to do business with, that issue of whether companies are setting folks up to succeed should be part of the calculation. And it's also a risk-limitation factor in terms of potential legal exposure down the road. If you can explain to a court that you went through a careful, thoughtful process trying to set people up to succeed, and you identified key mistakes that people interacting with the system make, and you took affirmative mitigation steps, that's a compelling story about a careful entity trying to make sure that harm does not happen. It's a very different story from one where the usability testing is slim to none, and the documents, the legal documents, haven't been usability tested. There aren't regular audits, not only in terms of the nuts and bolts of technical security controls, but also in terms of the way that humans interact with the system. And when you can tell those good stories, it helps. Now, insurance is one more point that I'll flag, because companies frequently think, oh, we'll just insure our way out of this.
And that short-circuits the threat modeling process, and they sometimes overly optimistically think that that resolves the challenges of cloud security provision. It does not. And in fact, one of the common mistakes that's happening more frequently is that people don't necessarily read their insurance contracts well, and the defined terms in some of these agreements leave intentional ambiguity for the insurance provider to subsequently refuse coverage after an incident and just pull it into litigation. So a key strategy that exists in some insurance companies, and in companies across sectors, this is not limited to the insurance industry, is that when you have deep pockets, your goal, as an aggressive plaintiff or an aggressive defendant, is partially to bankrupt the other side into settling. So if you are not that deep-pocketed provider, it's very important not to over-trust that your insurance will cover things, and the carve-outs really matter. And so that's just a cautionary note that I'll flag. The last point, which goes, I think, most directly to your question, is that there's an inherent tension in data curation that sometimes exists between traditional social science and medical methodology, thinking through what kinds of information will lead to the most significant breakthroughs in terms of research, and the norms of some machine learning systems and other development processes. Sitting in tension with this is the approach where you just kind of throw it all into the soup and stir it up and see what happens. And so when you have two different models, one that's driven by high-quality, reliable, slower research, and one that's kind of more on the fly, a little more anything-goes, with outliers being included, you start to see new kinds of risks emerge that are not necessarily fully considered, and that the prior models of differently curated data don't adequately encompass, because those models had been consciously risk-mitigated.
So legal liability will take that kind of analysis into account at the end of the day. So those are just some cautionary notes that connect with this idea of the balancing act between individuals and the companies. One quick note on that. In a lot of cases, since Andrea brought up insurance, you're starting to see insurance companies balk at the idea of actually being responsible for certain types of cyber attacks, particularly the ones where the malicious actor is a nation state. And you see cases coming to court as to whether those are sort of normal-course-of-business cyber things that they need to pay for, or whether those are basically aspects of war between nation states that are exempt. And so it raises a lot of larger issues about how risk is addressed in this space, or not. One of the things that you're all making me think, because each of you has brought a really interesting angle to this, is that I have a lot of empathy now for our executives out there, who really are having to become intentionally digitally savvy in an entirely new age. Whether those are executives in the public sector, whether those are device manufacturers, whether those are other types of providers, it's really interesting if you think about, just in this session alone, the different insights and the complexity of the things that have to be thought about and navigated: huge amounts of promise, but also just huge amounts of making this work and being savvy in this way. Quickly, I'll go to you, Michelle, and Dan, maybe a little bit of a two-for here. What are some of the things that Google or other groups might provide in terms of healthcare and life sciences?
What does this look like practically, and how do you make sure that they're secure, so that that senior executive can sleep at night, hopefully after they've gotten all their insurance and other complexities taken care of? What are the things that are out there that help and that work, and how do you give someone that ability to sleep at night? Yeah, absolutely. So I can start, and then Dan, if you want to chime in. Certainly, having built-in healthcare security and compliance tools is critical, not only for the healthcare side of things, but also for the research data side of things. And we definitely have tools around privacy of research subjects, but also around the healthcare data. And certainly another really interesting and fun capability is our tools to aid with de-identification. So not only the de-identification of records, but also of things like radiology images. And that's really important because certainly that's an area that is ripe for research. But again, there's also the risk of patient re-identification there. And we want to make sure to protect individuals. And this goes back to what Andrea was saying: the consent process is also a really critical piece of this equation. And certainly from a research perspective, I've seen a lot of research studies do a great job of building in new ways of doing consent, where people actually understand what they're consenting to, and they're also able to change their consent choices over time. So I think that's a really great innovation. And then going back to what Dan said earlier, the fact of the matter is, because the technology is evolving so quickly, and because we're integrating the technology into this space so quickly, we have to be really thoughtful, not just to be compliant with current regulations, but also to be forward-thinking about what the actual security risks are and how we can start to protect against them before we have to.
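To give a feel for what record de-identification involves, here is a deliberately minimal sketch. It is not Google Cloud's actual de-identification service (whose API is far richer and also covers free text and images); the field names and the year-only date rule are illustrative assumptions, loosely in the spirit of Safe Harbor-style identifier removal.

```python
import re

# Illustrative only: real de-identification tools handle many more
# identifier types, free-text fields, and imaging data.
DIRECT_IDENTIFIERS = {"name", "mrn", "phone", "address"}


def deidentify(record: dict) -> dict:
    """Drop direct identifiers and reduce date precision to year only."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "birth_date" in clean:
        # Keep only the year, a Safe Harbor-style precision reduction.
        clean["birth_date"] = re.match(r"\d{4}", clean["birth_date"]).group(0)
    return clean


record = {"name": "Jane Doe", "mrn": "12345",
          "birth_date": "1980-06-02", "diagnosis": "I10"}
print(deidentify(record))  # {'birth_date': '1980', 'diagnosis': 'I10'}
```

Even a toy like this shows why re-identification risk remains a concern: the retained fields (year of birth, diagnosis) can still be linking keys when combined with outside data, which is why production tools also assess quasi-identifiers.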
So really taking that to heart, taking that protection piece to heart, I think is really critical. I think another thing, Alexis, is to make sure, and I hope this goes without saying, that non-technology and non-security executives really need to get comfortable and understand a couple of things. Number one is how security affects the mission and your overall business and reputation, and to take that to heart. That itself should be enough of a motivator to not treat security as a black box. But even going beyond that, I think it's important, as they think about their business models and how critical technology, and in many cases cloud technology, is in terms of enabling their business model, that they be very intentional and very regularly aware of what their value chain and life cycle of data looks like. Again, whether it's electronic health records or device or research, they should understand how data is born or acquired, how it is handled, how it is protected, how it is augmented, what metadata gets created off of it, how it is passed on to other people, how people interact with it, how it flows back and forth between them and the customer, how it flows back and forth between employees and collaborators. Simply being intentional about that, I think, takes security out of what is too often a black box: oh, it's the IT guys over there. Instead, it puts it in terms of, I'm going to ask basic questions about how data changes hands and where it sits, and when it sits in certain places, how it is protected. You can ask very lay questions, but everyone needs an end-to-end mental model of how data flows through their enterprise. And if they do that, again, it really collapses what is often too much of an arm's-length relationship, or a chasm, between tech and non-tech.
And if they take that kind of ownership of it, they should be able to sleep at night, because they're constantly asking tough questions about how all this stuff is handled and protected, number one. Number two, they have a mental model for how things flow and how it relates to their business success. And therefore, by taking ownership of it, it just arms them much more readily to not be surprised, to not be overwhelmed by a technical security update that is often filled with jargon and particular expertise. They can ask questions and be confident that they're getting the right answers, because they have their own language to deal with these security things. I don't know if you could hear it, Dan, but I think the whole cadre of public-servant Chief Data Officers was cheering in the background. This idea of really getting those C-level executives to really care about the data, these stewards of data. And really, in some ways, what I'm hearing from all of you is that almost every organization is a data company now. No matter what you're doing, no matter what you're delivering, this idea of being a data company. And it was actually funny, someone said to me the other day, what is it that a public servant does? And I thought, well, what I did was I took in the best information I could. I had the great privilege of trying to decide where those resources would go based on that information, to try to have great impact in the world. And then ultimately, I looked at more information to decide whether or not we had done it right and how we might tweak it. And so what I love that we brought together at the end of this is all of you really helping the folks out there understand that this is about information stewardship. This is about data stewardship.
This is about that intentionality, that curiosity that we really all have to have, to be not only up to speed with all of the things that you raised today, each of you, but more importantly, to be champions and to be pushing the envelope with that curiosity. Are we doing it as safely as we can, as effectively as we can, with the best stewardship that we can? So I'd love to ask each of you, though... We've talked about a lot of things people have to think about, a lot of concerns out there. Hopefully people are going away with a lot of notes and hopefully a lot of questions for our Q&A that will be coming up soon. But what I'd like to ask each of you to leave with, and it can just be a quick response, is grounded in the fact that this is still amazing technology that's letting us do amazing things. So for each of you, what does getting this right look like? What might be different about health and biomedicine? What might those breakthroughs be in five years if we're really harnessing cloud in a way that is secure and appropriate, and we're really leveraging the power of cloud? Each of you, maybe give me a quick summary of what's that thing you're hoping will be different because we've leveraged the power of cloud securely and appropriately. So I'm going to throw that to Michelle first. Absolutely. Yeah, so for me, success really came very clearly over the last year, when, before coming on board to Google, I was working with CISA and the National Risk Management Center to protect the assets around COVID and developing the vaccine. And really the goal there was to save lives, because we knew that any delay in creating the vaccine was going to equal loss of life. And so it was a very clear mission set, where, you know, the more secure we are in the healthcare and life sciences space, the more lives we will save. I have three, so that's number one. Number two is precision medicine.
I think that there is tremendous opportunity there to really crack that code, and instead of doing one-size-fits-all clinical practice, to be able to use genomics, bioinformatics, metabolomics, et cetera, to really tailor clinical practice to each individual and to make people healthier, so that they not only live longer lives, but they're healthier and have better quality of life throughout their life. And then the third is really protecting our nation's bioeconomy. This is a critical sector, I think, of growth, and really of national security significance. So making sure that we're able to protect that and create really amazing technology for our country's security would be amazing. Well, Michelle, I know you're going to do a lot to contribute to all three of those things. Matt, I'll go over to you. What's your five-year hope for us? Oh, I'd say definitely, in terms of using these capabilities, being able to more closely monitor and respond to the evolving healthcare challenges that we have. The pandemic definitely drove a massive push into more remote monitoring and more remote care, and the cloud is definitely an important piece of enabling that moving forward. And I think, from the larger FDA perspective, in terms of reframing the cybersecurity considerations of the cloud, it extends beyond just data security, potential PHI or PII releases, and data breaches: patient safety and effectiveness is on the line. Leveraging these systems is central to long-term healthcare delivery in this country and across the world, and doing so thoughtfully, putting that careful thought and consideration in, matters because it's more than just data on the line; it's actual patient safety and the effectiveness of the medical devices.
Well, I know I speak for everyone here when I say that I am so glad that there are public servants like you and your colleagues at FDA who are thinking about these things, and who are making them bigger and helping us think and be more curious about these larger issues. So thank you for the way you're going to advance us in the next five years as well. Dan, over to you. I think I'd echo what everyone said. I think what's really on offer with the cloud in new healthcare business models, whether it's devices or research and collaboration or transforming medical records and the like, is really the fact that all of the patient outcomes that we think about rely inherently on data: the quality of the data, the ability to take lessons off that data, the ability to treat better, to make better decisions. And so when you think about data, again, what cloud offers is velocity and scale. And with velocity and scale, again, you can get better patient outcomes. And as I said before, the cloud enables that, but it is a different model for a lot of enterprises to be managing data this way. I mean, we've been in this transformation for a while, but it's still relatively new. It's still in the early innings, if you're a baseball fan, right? And so for patients and caregivers to have confidence in that, and to be confident that we can live up to the promise of better data-driven healthcare decisions in patient care, we need to make sure we are stewards. And not just technology people need to be stewards; all the other executives who are not technical also need to be stewards of that. In order for people to be confident, we have to secure things properly, right? We need to make sure that privacy is protected, that private data isn't leaking, and that the risk of someone tampering with data, or tampering with the instructions to an implanted medical device, is addressed, that those things are bulletproof, that they're reliable, right?
And that everyone in that value chain is not just thinking about the business outcome or business risk. Everyone that's a business person in that value chain is also a healthcare provider, right? And so in my view, all those people also carry around with them, indirectly, the Hippocratic Oath, right? Be stewards of the data, create confidence in the new models, so that we can all move forward and get the better velocity, the better scale, the better decision-making, the better insights that come with that. Dan, I think one of the things I have to thank you for is that I know how much time you spend every day actually helping a lot of those senior executives be curious about these topics, really investigate, and really understand what those options are, what those trade-offs are. So thank you for getting us to where we're going to be in five years. And Andrea, bring us home. You have been so interesting in this whole panel, so I had to give you the last word. So please, five years: tell me what to expect. So in five years, I hope that we will have been more thoughtful, in some ways, than we have been to this point about the question of metrics. So we talk about innovation and progress, we throw them around as buzzwords, but actually the question of whether we have a shared view of what progress is becomes relevant. Legally, the word progress shows up in the Constitution. But as a technology sector, broadly speaking, the question of what that next-generation better world looks like is one that we don't spend enough time talking about. I don't think we're all on the same page. I think we're on very different pages, in some cases, about what that world looks like. So progress, the word progress, if you look at the philosophy literature, has three components. The first is a normative claim about what that better life is. And so we start there. And then we get to the social science claims. That's the second piece.
So what do we need to figure out in order to build that world? And the third piece is the implementation specifics. And we need to focus on each of those sets of questions in order to achieve some sort of next-generation successful technology society. So that's the first thing. The secondary concern that's included is the question of whether we're building technology for people, to help them achieve that good life, or whether we're building technologies that use people toward other goals that are not necessarily about building that better world. And as a kind of quick shared homework assignment, I think we might all have fun, with some beverages of our choice, rewatching episodes of the Jetsons, which was one of my favorite shows growing up. As I watch that show now through the eyes of security, I realize that what I remembered as a fun, utopian, next-gen tech world is actually a little bit of a dystopian hellscape, where George Jetson gets sucked in by his treadmill, and robots malfunction, and people get ejected out of their flying cars all the time. So going back and looking at the lessons of history, thinking through which pieces of that model were good and which pieces of our historical models have been not so good, and analyzing technologies with that critical eye, I think is really important. Super quick story: the 1933 World's Fair. On one side of the fairway, there were incubators that were privately funded, and the story is fascinating; I go through it in a forthcoming article, but just the TL;DR: they were privately funded incubators that charged admission to let people see the babies, but they did a fantastic job anonymizing the identities of the babies, to the point that some of the babies, when they grew up, didn't even know that they had been in those incubators. Meanwhile, hospitals at the time did not want to save those babies, because they were deemed not worth saving.
So you had this doctor, question mark, whose background was TBD, saving kids on a race-blind, needs-blind basis. On the other side of the fairway were the eugenicists, who were subsequently cited as inspiration by Nazi scientists, quantifying human bodies in very particular ways and leading to better-baby contests and other things that history looks back on with reproach and scorn. So I'll leave the social commentary there, but just on a very concrete legal point in terms of evolving law, which is the question companies have always asked: how do we anticipate where the law is going? Build with an eye on that better society, that better world, and try not to skirt into the regulatory cracks, but instead just build in line with the paradigm that any regulator could potentially cover you. If you think about it in terms of not causing harm, the medical ethic of do no harm, and you live by those words, you won't have regulatory problems in terms of the general direction. And decades later, we have a very different approach to, say, incubators, which are state of the art now and incorporated in regular settings, and they're highly regulated by the FDA. But nevertheless, the goal of saving those babies, and doing it with attention to detail and care, remains, even though the regulatory system is very different today than it was back in 1933. This was amazing. Thank you all. I think one of the things that I'm walking away with, Andrea, from your talk is the idea that we have to be curious, we have to be proactive, but we have to do no harm, right? We've got to be as good as we can be together in all of this. So thank you all again. For those of you watching, I hope you are just about to jump in with a million and one questions for this panel, because they are awesome and they are fearless, as you've seen, and happy to take on this subject, because we've all got to be curious, and we've all got to figure this out together, and we all have to do good.
So thank you all so much for coming today. Thank you to the panelists and we will see you soon for Q&A. Bye.