Hi everyone. Today we're going to be talking about redefining patient safety in the digital era. My name is Dena Mendelsohn. I'm Director of Health Policy and Data Governance at Elektra Labs. And I'm Jen Goldsack, the Executive Director of the Digital Medicine Society. When Diane Diller, a woman featured in a 2019 Washington Post article, downloaded the pregnancy tracking app Ovia, she used it for much the same reason as the app's 10 million other users: to track her pregnancy and the health of her baby. She did not intend for that information to be shared with her employer, but that's exactly what happened. Think about what's involved in making a baby. Think about how you'd feel if your boss got the details. It's not just embarrassing for women; it puts them at risk of pregnancy discrimination and adverse treatment in the workplace. But first, some level setting. Jen will walk you through exactly what we're talking about when we talk about biometric monitoring technologies. She'll also explain why, despite a whole DEF CON talk designed to point out the safety risks of these products, we still believe that these technologies are critical for patient care and clinical research. Jen will also cover our biomedical ethics portion before handing off to me, and I'll walk you through the shaky ground for data rights in the United States. After that, we'll get to the fun stuff, if you consider serious safety violations to be fun.

Fantastic. Thanks, Dena. Let's dive in and think about why healthcare is different in the digital age, and why digital health at all. If we look at the financial landscape, we see huge possibilities, huge promise, and a lot of noise in Silicon Valley about the financial value of this new and emerging industry. In fact, even throughout the course of the pandemic, the first half of 2020 yielded some of the greatest investments in digital health yet. Is it all about the money? Absolutely not. I think the financial investments we've seen are a product of, and perhaps an indicator of, but certainly not the reason that we're passionately pursuing the safe, effective, ethical, and equitable use of digital health technologies. Let's think about the healthcare environment we've been in for the last few decades, and about some of the persistent and pervasive problems that we face as a community, not just as an industry but as a whole population, as a result of the challenges in healthcare. In 2018, U.S. household spending on healthcare topped one trillion dollars. Now, we know that healthcare spending writ large here in the U.S. is over three trillion dollars, but in kitchen table conversations, what folks were individually being asked to write a check for to preserve their own health was over one trillion dollars. Let's put that into context: three trillion dollars in total U.S. healthcare spending is more than the GDP of France. This is an enormous industry, and we are paying a price for it that we cannot afford. And despite all of this investment, our workforce here in the U.S. is dying at higher rates than in any other developed country. We're paying through the nose for healthcare and we are not getting high quality in return. In fact, the number one reason Americans file for bankruptcy is healthcare costs. And the other week I learned it's now the number two reason cited in divorce proceedings. It's the stress and the strain and the financial implications of a healthcare system that is absolutely unsustainable.
And despite all of this, we still have catastrophic diseases and illnesses that we are in no place to treat. There is not a single disease-modifying treatment available for Alzheimer's, and there's certainly no cure. Interestingly, for folks who don't know, the gold standard we use in evaluating Alzheimer's, whether it's a loved one you're helping get treatment for or whether it's in a clinical trial setting, is a document called the ADAS-Cog. It's a thirty-something-page paper document that physicians fill out. It's a blunt instrument that sort of preserves the status quo, and in the absence of technology we're really going to struggle to make progress. And the challenge is, even when we do have terrific interventions and treatments for chronic diseases such as diabetes, one in four diabetics in the U.S. rations their insulin because they can't afford to pay for it. This is an unsustainable status quo that we're in today. We have our warfighters coming home from service with PTSD that we cannot treat, because when they're in rural areas there's a really terrible maldistribution between the folks who can help them and the patients in need. Our American heroes are dying for preventable reasons. And finally, even the clinicians at the front lines have suicide rates among the highest of any profession. This is a system that is broken for absolutely everyone involved. And so I would say to you that we're not here to talk about technology today because it's cool. We're not here to talk about technology because there's a ton of money in it. We are passionate about the use of these technologies to try and solve these problems. They're not going to be a silver bullet, but we believe that they can make a difference. So what we want to discuss today, with this idea of a new definition of patient safety, is that for all of the promise these technologies have, there are new risks that we need to be eyes wide open to, to make sure that we aren't creating a new laundry list of difficulties in the digital era of healthcare.

So let's actually talk for a second about what we mean when we talk about digital health. As I said, the industry is enormous. At some point we envisage digital health being a bit like computers in business: ubiquitous. So if we're talking about a three-trillion-dollar industry, where do the different digital components fit? We would offer up a definition that the use of digital technologies in the service of improved lives and improved health all fits into the bucket of digital health. That includes things like communications technologies, record-keeping systems, and so on and so forth. Within that, there's a more specific category of digital medicine products, which both measure and intervene in the service of health. That may be sensor technologies, that may be algorithms, that may be different tools and solutions that support the administration of active molecules and traditional devices. And then there's a very specific subset of the field that we would describe as digital therapeutics, where the software itself is the intervention. That's a new and rapidly growing field, and I think an important component to keep in mind. For the purposes of my and Dena's talk today, we are going to focus on biometric monitoring technologies, or BioMeTs. We define those specifically as connected digital medicine tools that rely on a sensor for data capture and a single algorithm or a suite of algorithms for processing that information to give us measures of a person's health status.
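As a thought experiment, here is a minimal sketch of what that sensor-plus-algorithm pattern looks like in code: raw accelerometer samples go in, an algorithm processes them, and a health measure, in this case a step count, comes out. Everything here, the sample format, the threshold, the naive detector, is a hypothetical illustration; no real BioMeT is this simple.

```python
# Toy BioMeT pipeline: sensor samples -> algorithm -> health measure.
# All names and thresholds here are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class AccelSample:
    t: float          # seconds since recording started
    magnitude: float  # acceleration magnitude, in g

def count_steps(samples: list[AccelSample], threshold: float = 1.2) -> int:
    """Naive step detector: count upward crossings of a magnitude threshold."""
    steps = 0
    above = False
    for s in samples:
        if s.magnitude > threshold and not above:
            steps += 1
            above = True
        elif s.magnitude <= threshold:
            above = False
    return steps

# Simulate 5 seconds of walking: magnitude alternates every half second.
samples = [AccelSample(t=i * 0.05, magnitude=1.0 + 0.4 * ((i // 10) % 2))
           for i in range(100)]
print("estimated steps:", count_steps(samples))  # the derived "health measure"
```

The point of the sketch is the shape of the pipeline: the safety and data rights questions we discuss later attach to every stage of it, from the raw samples to the derived measure.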
And why are we so interested in these BioMeTs? This is a great visual that I like to show. If we think about how we deliver healthcare in its current state, and how we do clinical research, we're relying on the really isolated data snapshots that you see at the top of the graphic. The possibility created by these biometric monitoring technologies is that we can have a much greater, much more holistic look at a patient's lived experience, with and without disease, with and without a given treatment. And that's why we're so keen to focus on this: we see it as the tip of the spear in really addressing some of those pressing healthcare challenges I shared with you earlier.

And this is a bit of a silly cartoon, but I make no apologies, because it's important to have some levity. I wanted to talk about how decision making actually works in healthcare today. Oftentimes we think about a risk-benefit analysis. It's wrong to say that we don't tolerate risk; it's wrong to say that we don't tolerate safety issues. I think chemotherapy might be a really good example for anyone who has battled cancer themselves or has been on the journey with a loved one. We all know that the side effects of chemotherapy are often awful, from hair loss to devastating sickness to weight loss, and so on. There is a risk-benefit calculation that goes into the approval and prescription of those drugs. If we are in a fight to save your life, we are going to tolerate some safety issues, some quality-of-life issues, because there's a lot at stake. Now, we would never apply that same risk-benefit calculus to a treatment for a common cold. We can't cure a common cold (if we could, we might be having more luck with coronavirus), but there are certainly some drugs out there that can help you with the symptoms of a snotty nose or a sore throat. We would not tolerate any safety issues from those sorts of drugs. Why? Because the risk of a cold is low. Few people are going to have anything more serious than some pretty unpleasant symptoms for a few days. So it all comes down to that risk-benefit balance. The benefit of chemotherapy is that we can save your life, so we tolerate higher risks. The benefit of a common cold drug is a little more comfort for a few days, so we aren't going to expose you to any additional safety risks. I thought that might be a really good way to round out our conversation about decision making in healthcare and why and how we might tolerate safety risks to an individual.

And so Dena and I are going to be handing off to each other. Before I hand back to Dena, I'm going to talk briefly about biomedical ethics. This is a huge topic, an emerging and important topic, and I'm certainly not an expert, but there are some themes we should all be aware of, and some groundwork we should have a common understanding of before we think about how digital affects healthcare. For a healthcare practice to be considered ethical, it has to reflect four principles. The first is autonomy: individuals have autonomy of thought and intention when they're making decisions. And a really good example of this is the informed consent process.
If you're entering into research, you need to know all of the risks and potential benefits, you need to truly understand them, and then you're asked to provide written consent. The same goes for anyone who's had surgery, for example: you always give informed consent before you go into surgery. You need to know the risks and you need to knowingly opt in to participating in healthcare. Justice, I think, is really important. It asks us to think about burdens and benefits being distributed equally. Let's think about that in the context of digital. If we start to implement these biometric monitoring technologies, are we ensuring justice for folks who don't have access to broadband? Are we ensuring justice for folks who can't purchase them out of their own pocket? And when we start to do research using these things, if we find great outcomes using digital technologies, can those be broadly applied to the full population if we aren't able to get everyone access to these tools? Beneficence, which is a word I often stumble over (so I'm glad I nailed it today), is about net good: the intent of doing good for the individuals and populations involved. Wearing your mask during COVID, for example, is a pretty sensible no-brainer, in my opinion at least. And the last one, crikey, another one I often stumble over: non-maleficence, which means not harming the individual or others in society. This goes back to that really important principle of weighing harm in the context of benefit. It is never a black-and-white decision, and we always have to think about context. Again, I'm sure some examples from COVID come quickly to mind for folks. The way this is codified in the training of physicians, at least, is the Hippocratic Oath. All graduating physicians swear the Hippocratic Oath when they finally declare victory on their passage through medical school. What's important to know about the Hippocratic Oath is that it never actually says "first, do no harm." That phrase is not present in the Hippocratic Oath. And I'm probably boring everyone, but it's really important: the Oath comes back to this idea of risk and benefit. What's codified in the Hippocratic Oath is much closer to the ethical principles than it is to "first, do no harm." And something we follow at the Digital Medicine Society, and something I know Dena and I talk about often, is the great work done by I Am The Cavalry. Members of this community have really thought carefully about the new risks associated with security when we start to use connected devices, either for monitoring or for healthcare interventions. What we'll talk a little bit about today is: if this community has extrapolated a vision of harm and a risk-benefit analysis to include security considerations, what are the other considerations we need to be thinking about as we move into this digital era of healthcare? And with that, Dena, I'll hand it back to you.

Thanks, Jen. I'm going to dive right into Health Policy 101, starting with the difference between privacy and data rights, so we don't lose anybody from the top. The way we've always thought about privacy tended to be a yes-or-no question: can you have this information? Yes, you can have it; no, you can't have it. But as we get further into data rights and data governance, we start to realize that the question isn't so much yes or no; it's more about context and who. When we talk about data rights, as opposed to privacy, the choice to share or not to share our data is context-specific and varies by the individual. A woman may feel comfortable having her reproductive health app share information with her partner or her healthcare provider. She probably won't feel comfortable having that information shared with her boss. Or maybe she will; maybe that's something that should be up to her.
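To make that "context and who" framing concrete, here is a minimal sketch of consent modeled per data type and recipient, rather than as a single yes-or-no privacy switch. The data types, roles, and default-deny rule are our hypothetical choices for illustration, not any real app's policy.

```python
# Toy data-rights model: consent is recorded per (data type, recipient role),
# not as one global privacy flag. All names here are hypothetical.

consent = {
    ("pregnancy_status", "partner"): True,
    ("pregnancy_status", "clinician"): True,
    ("pregnancy_status", "employer"): False,  # her choice, not the app's
}

def may_share(data_type: str, recipient: str) -> bool:
    # Default deny: anything the user hasn't opted into stays private.
    return consent.get((data_type, recipient), False)

print(may_share("pregnancy_status", "clinician"))  # True
print(may_share("pregnancy_status", "employer"))   # False
print(may_share("sleep_data", "data_broker"))      # False: never consented
```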
And so that's why we're moving the conversation from one that uses the word privacy to one about data rights. As it is, our healthcare system has strong protections for patients' biospecimens, like blood or glucose samples, but protections are murky when it comes to digital specimens: the data created by health apps, fitness trackers, and all the other kinds of BioMeTs we're talking about today. Now, as Jen has said a bunch of times, wearables, health apps, and in-home sensors offer great promise for affordable, accessible, equitable, high-quality care. But in the modern era, data rights have become a safety issue that can't be ignored. The digital health information that folks generate may threaten both their health and their financial welfare, and that's why we think it's so important that we're talking about it today. In the United States, legal protections for data rights are limited. U.S. laws have no explicit regulations that give consumers full control over how their data is collected, used, or shared. There also really aren't rules about when that information has to be deleted and what deletion even means. Instead, data rights are limited to a patchwork of protections: a combination of the HIPAA Privacy Rule, which was never intended to be a comprehensive health privacy law; Section 5 of the FTC Act, which was drafted long before BioMeTs came around; and state laws, which are the ultimate patchwork protection for consumers, because you really don't know what protections you have unless you're a legal expert or an extremely well-informed consumer.

So now we're moving on to redefining safety in the digital era, and first I'm going to give a broad description of the different safety issues that Jen and I are concerned about. We've seen enough headlines to know that there's a problem with how our data is being collected, used, and shared. When we see headlines like these, for many of us it feels wrong; it feels gross. Jen and I want to explore that feeling and get more technologists, clinicians, lawmakers, and members of the public talking about why it feels wrong. The answer: it feels wrong because it's not safe. Women's jobs are not safe when their employer has access to their pregnancy information or their plans to become pregnant. We're not safe when our insurer makes life-or-death decisions about our access to care based on data that may not even be right. In this section of the talk, Jen and I will walk you through a handful of data risks and related harms, not to say the technology is bad or should be avoided, but to start the conversation around data rights as an issue of patient safety. Connected health technologies have the power to do more than embarrass. The data they create can impact how individuals experience day-to-day living and the world around them.
When you share information, either passively or actively, you create information about yourself. One key way that data is used is by data brokers, or data aggregators. Certainly not all data manipulation done by data aggregators is harmful. However, data packaged into a sort of health score can affect an individual's access to insurance, while the algorithms behind those scores are hidden and could include inaccurate information. Now, some of you may be thinking that the Affordable Care Act means these scores can't affect our access to insurance, but that really only covers health insurance. Think about all the different ways a person might want to financially protect themselves or their family: life insurance, disability insurance, and long-term care insurance, to name a few. All of those are allowed to use information created by these data aggregators. For decades we've allowed insurers to determine our access to insurance based on genuine medical information. Now assumptions based on unconsented data from opaque algorithms are changing how traditional underwriting is done, making it challenging for some people to purchase insurance at a price they can afford, or leaving them denied insurance altogether. This kind of data can also impact lending and housing decisions. Studies from the Center for Digital Democracy at American University and the National Bureau of Economic Research both report that consumer health data can be combined by data aggregators and used to profile and discriminate in the context of employment, education, insurance, social services, criminal justice, and finance. Clinicians and researchers may not be aware of how digital data can result in discrimination in these areas. Neither may you. This type of data manipulation can be done as a proxy to circumvent the U.S. Equal Credit Opportunity Act, and it's not something that's going to be done transparently. Nevertheless, it happens, and that's a serious safety risk that we should all be aware of and consider.
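To illustrate what "opaque" means here, consider this deliberately crude sketch of a health score assembled from aggregated consumer data. Every feature, weight, and record is invented; real aggregator models are proprietary and far more complex, which is exactly the problem: the person being scored can inspect none of it.

```python
# Toy "health score": hidden weights over aggregated consumer data.
# All features, weights, and records are invented for illustration.

WEIGHTS = {
    "gym_app_active_days": -0.5,   # proxies for fitness, accurate or not
    "fast_food_purchases":  0.8,   # proxies for diet, accurate or not
    "zip_code_risk_index":  1.5,   # can quietly proxy race or income
}

def health_score(record: dict[str, float]) -> float:
    """Sum weighted features; missing data silently counts as zero."""
    return sum(WEIGHTS[k] * record.get(k, 0.0) for k in WEIGHTS)

applicant = {"gym_app_active_days": 2, "fast_food_purchases": 9,
             "zip_code_risk_index": 4}
print(round(health_score(applicant), 1))  # 12.2, with no way to appeal it
```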
Additionally, geolocation information can be used for surveillance. Wearables that measure physical activity use technology that identifies an individual's precise movements. While that data is rich information for legitimate uses, it can also be used for surveillance, and this is particularly problematic for people of color, who are disproportionately the subject of undisclosed surveillance, data collection, and monitoring. There are other ways, of course, that even well-intended technology can cause tangible harm to users. Have you ever had trouble falling asleep, trying to relax into rest, and all you can think about is the fact that you can't fall asleep? You keep checking the clock and you're still awake? Now imagine an insomniac using a sleep sensor to address their sleep issues who, instead of getting helpful data and perhaps better rest, spends the night anxious and losing sleep over their sleep data, and ends up worse than they started. There's actually a term for this kind of harm: it's called orthosomnia, and I'm pretty excited I just pronounced it right. Technology can also cause clinical harm. Known as iatrogenic harm, this type of injury can occur when errors in the data collected by BioMeTs go unnoticed. And finally, I'll point out that the use of technology in a trial can inadvertently skew results. How participants are recruited via technology can be deeply problematic, as can technology used as an incentive to participate in a trial. Similarly, deploying technology that cannot measure all participants with equal accuracy will skew results, even if it appears there's an appropriate sample mix.

And finally here, some people, including former presidential candidate Andrew Yang, think that the solution to tech companies profiting off individuals' data, collecting and sharing it freely, is to make that data cost something and give individuals a cut of the money. While that does play into the idea of data rights and giving individuals control, this practice, commonly called a data dividend, amounts, for data rights advocates like me, to pay-for-privacy. Survey after survey shows that consumers want more control over their private information and don't want their information monetized. Yet ideas like this keep coming up. When data dividends were on the docket in Oregon in 2019 and I was serving as Senior Policy Counsel for Consumer Reports, I wrote to lawmakers on behalf of Consumer Reports to explain why we opposed such a bill. In essence, a data dividend turns the basic right of privacy into a luxury out of reach for lower-income individuals. Further, those who sell their data would receive compensation for something that can't realistically be appraised, or, once lost, fully reimbursed. Now I'm going to turn it over to Jen, who can think more broadly about safety risks beyond data rights.

Fantastic. Thanks, Dena. In terms of recognizing the broad swath of risks associated with data rights and safety, I always turn to you as the expert, so I appreciate you taking the lead there. I want to talk about some of the other principles we touched on, too. I think a lot about equity and access. I've got two graphs up here on the screen. They're a little small, so let me explain them. The one on the left is from the Dartmouth Atlas of Health Care. This is an example of a specific health outcome. Red is good: in this case, it means that individuals with diabetes are actually getting the necessary vision screening to make sure that their diabetes doesn't lead to vision impairment and ultimately blindness. We can see that up in New England they're doing a great job, and in certain areas of the Midwest. Dark green is where that screening is not happening at all; individuals with diabetes are not getting high-quality care. On the right-hand side, you can see a map of households without internet. There, it's the red color showing where there is not great internet access, and blue where there is. Now, there's a really interesting new product on the market, the IDx-DR, a tool that doesn't require a physician. It's a simple camera: it takes a picture of your retina and looks to see whether a patient with diabetes is at imminent risk to their vision. If we're relying on the internet to administer that, we can see that New England already has great outcomes, and it's the place with some of the best internet. Yet if you look at Texas, they're suffering some really poor health outcomes, and most of those households don't have internet.
I come back to this idea: are we going to use these digital tools to ameliorate some of the disparities in access to care and in health outcomes, or are we going to, unfortunately, widen the gap between the healthy and the sick, between the rich and the poor? That's something we have to keep front of mind, and I would argue it is a safety issue. We also intentionally focused on BioMeTs, because otherwise I think it gets overwhelming very quickly, but I do want to draw attention to the fact that this digital era of health goes well beyond those remote monitoring technologies that rely on fixed-point or wearable sensors. There have been instances of care decision tools actually incorporating bias into care outreach. There have been other instances where we're defining whole new types of data as being important to health, and I'll give you an example. There was a phenomenal study done by some researchers at Microsoft: 15% of pancreatic cancer patients can be identified by nothing more than their web searches. Dena has summarized the pretty loose and pretty narrow privacy regulations; certainly nothing extends to your browser history. Let's think about how this plays out. This is phenomenal if we consider that your only hope of beating pancreatic cancer is an early diagnosis. If we can harness that information, this is a huge win. It's great for saving lives. But let's think back to some of the examples that Dena shared with us. What if, instead of this information going to your doctor, or to someone who would reach out to you to say, hey, I think you're at risk and I think we can save your life if we act now, it goes to an employment agency? What if it goes to a life insurance company? What if it goes to a mortgage lender? There is the possibility here of phenomenal improvements in the way we care for humans and the cost at which we do it. There is also a really quite terrifying risk around how we use this information and the harms individuals could suffer.
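To be clear, the Microsoft researchers' actual methods were far more sophisticated than anything we could show here; but as a toy illustration of how a query stream can become a health signal, a sketch might look like this, with all terms and thresholds invented for the example.

```python
# Toy illustration of search queries as a health signal. This is NOT the
# Microsoft study's method; the terms and threshold are invented.

SYMPTOM_TERMS = {"yellowing skin", "dark urine", "unexplained weight loss"}

def flag_user(queries: list[str], min_hits: int = 2) -> bool:
    """Flag a query stream that contains several distinct symptom searches."""
    hits = {t for q in queries for t in SYMPTOM_TERMS if t in q.lower()}
    return len(hits) >= min_hits

history = ["best hiking boots", "yellowing skin causes",
           "dark urine in the morning", "flight deals"]
print(flag_user(history))  # True. Who receives this flag is the question.
```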
I have a bit of a flair for the dramatic, and I'm very willing to cop to that. But I don't think it's an overstatement to see these BioMeTs, and in fact digital health writ large, as standing at a crossroads. Are we going to use them as a tool, as a way to do low-cost outreach, to deliver personalized medicine, to be as inclusive as we ever have been in the history of healthcare, and hopefully better? Or are we going to use them to double down on the surveillance economy? Are we going to use them to increase the disparities in care across different races, across different socioeconomic statuses? I think that means that in order to address this, we have to, one, be very intentional, and two, as we've discussed, really think about a new definition of patient safety in the digital era. The definition of patient safety that the World Health Organization has proposed is up here on your screen. It's a lot of text, but what you can see is that it's mostly about the absence of preventable harm, reducing risk wherever possible, and collective notions given current knowledge, the resources available, and the context of care. So in fact, when we talk about redefining patient safety, I think this definition holds up quite well.

What we really need to do, and what was done very well by I Am The Cavalry's Hippocratic Oath for Connected Medical Devices, is think about what is given current knowledge, what resources are available, and what the context of care is. As we move from data in a manila folder in a file cabinet, delivered in a healthcare environment, to the use of these BioMeTs, the use of search histories, and the dependence on the internet to get information and to administer cures, how do we use our current knowledge about the risks? How do we use the resources available? And how do we think about the context of care now being 24/7, because it's tracking your Google searches, because it's tracking your smartwatch? It's not necessarily that this definition needs to change, but we need to think about how we advance what we know in this community about harms due to connectivity to the internet. And we'd love to learn from you during the Q&A: your thoughts, your ideas, and any feedback on anything we've missed, and how we can advance this agenda of saving lives and reducing preventable harms as we move into this digital era of health. Thank you so much. And Dena, thank you for being such a wonderful co-presenter. I look forward to some great discussion.

Thank you. It's been great.