Hello, DEF CON. My name is Andrea Downing, and I'm really excited and honored to be back here at the Biohacking Village for my third year. Today, I'm going to be talking to you about predictive algorithms and how they affect your health and your future. The name of my talk is No Aggregation Without Representation. So let's get started. My talk has five parts: why me, and why you should listen to me; the predictive power of health data; harm and threat models; hacking algorithms; and how we think and act differently to protect patients. I want to start by dedicating my talk to a friend and fellow disability rights activist, Erin Gilmer, who took her own life not long ago. While she can't be here today to raise her voice, I thought I would start with her words. She said this in 2014: "I come from the privilege of being an educated white woman, and a background and a culture where it's okay to question and fight for what I want. I know the correct jargon, and after so much time in hospitals, I have enough knowledge to ask for and fight for my care. Some patients die trying to get food, medicine, housing and medical care. If you don't die along the way, you honestly wish you could, because it's also exhausting, frustrating and degrading." I may not speak for all patient communities when I talk about predicting your health and your future, and I might not represent the entire patient population or the diversity of it. But my own origin story, and my path to becoming a hacker and starting on this road that got me to DEF CON, began when I was very young, predicting my own future around cancer. I have a lot to say about one gene, the history of that gene, and what it tells us about predicting your health. I never wanted to be on this path. I first learned about the power of data to predict my own future when I was 25, and I share this in every talk: I have a bug in my code. I have a BRCA1 mutation.
What that means is my genetic counselors sat me down when I was 25 years old and told me that I had up to an 87% chance of developing breast cancer and up to a 60% chance of developing ovarian cancer in my lifetime. That's me getting surgery, one of seven surgeries, up at the top left. And my first foray into data sharing and disability rights starts at the bottom right, here at the steps of the Supreme Court, when I served as a media spokesperson for a plaintiff in a case that went to the Supreme Court on whether human genes could be patented. The case and the genes in question were BRCA1 and BRCA2, where I had a genetic mutation, and the company that had done my genetic testing had a patent on those genes. Before that case, over 40% of the human genome had been patented. After that case, access to genetic testing and genomic technologies exploded, and we really are still only at the tip of the iceberg when it comes to understanding the power of genomics. My path took an interesting turn in 2018, after Cambridge Analytica, when I became an accidental hacker asking myself a simple question about the privacy implications of having a support group of genetic mutation carriers on Facebook. That's a whole other talk, my first talk at DEF CON in 2019; if you want to go check it out, it's there. All I want to say here is that from that experience I found other patients and this incredible community, for which I'm very thankful: the other patients around me who have sought a path forward and are working to advance our rights and fairer representation in the systems, technology and data that affect our lives, our health and our future. We are a nonprofit called the Light Collective, and I'll talk to you a little bit more about our work at the end. But first, let's go into what my talk is about. Predictive algorithms are everywhere, not only in healthcare but increasingly beyond it.
They exist in ways that impact our health that we may not understand or realize, because the choices aren't apparent to us. We don't truly understand, as we walk through our day and around the internet, some of the ways in which we can be targeted, or how our identities and our futures can be affected by predictive algorithms. Increasingly, the choices that we have and the way we're treated, both inside and outside the walls of a clinic, depend on the data that we generate and the choices that predictive algorithms make about us. Clinicians develop predictive algorithms by studying health outcomes of patient populations over time. That's the five-second explanation of predictive algorithms, and it's very basic; it's obviously a lot more complex than that when we think about informed medical decision making, clinical-grade diagnostics, and making sure those things are good and trustworthy for care. When we're seeking knowledge to make informed decisions, how are those choices really our own? My view on this from the cheap seats, after 15 years of lived experience navigating my own future and my destiny based on genetic information, is that your choices should be your own, and what you don't know may kill you. I'm going to talk about black boxes. Black-box predictions about our health are killing us in ways we don't know, and the examples here emerge around four key themes: patient safety, inaccuracy and bias, privacy and security, and lack of digital rights. Let me start with EMRs. Epic has about 20 proprietary algorithms designed to predict things like how long a patient might stay in the hospital and how you might get care, and one specific example of a predictive algorithm is your chances of developing a serious condition called sepsis. While I'm calling this example out, I want to just say that Epic isn't the only EMR with predictive algorithms. This is an industry-wide practice, and this just happens to be an example that just hit the news.
So here it is: an external validation cohort study looked at Epic's proprietary algorithm for sepsis, and they found that 67% of patients with sepsis were not identified by this predictive algorithm when looking at the wider population. That's very interesting. When we have a black box making decisions about, you know, whether you're going to stay in the hospital longer or not, or what your risk of readmission or developing sepsis is, you might not be given the same care as somebody else based on the data that is generated about you. What are the sources of that data? How is it accurate, and how is it worthy of decision making that is informed by you and your doctor? Another example, on the right here, is something many may be familiar with: the opioid epidemic. What a lot of people don't realize is that the opioid epidemic of the last two decades has cost 600,000 lives here in the United States. How did it start? Aggressive marketing while underplaying the risk of overdose and addiction. In the middle of the opioid crisis, a Silicon Valley startup had a really fantastic business model idea. They were called Practice Fusion. The company developed an electronic medical record system for doctors, and instead of charging for the software like their competitors, the company generated the bulk of its revenue by advertising to doctors. Specifically, Practice Fusion solicited a $1 million payment from opioid companies to create alert systems that would encourage doctors to prescribe extended-release opioids to patients. This has since been busted and is in a huge criminal case; the DOJ is involved, and it's really messy and ugly. Maybe the lesson here is that it's a bad idea to have a business model where drug companies shape what doctors see, choose or predict about a patient. And that's all I have to say about that. How are we solving the opioid crisis? Well, here's a shining example. You may be surprised, or you may not be: predictive algorithms.
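That 67% figure maps directly onto a metric called sensitivity. Here is a minimal sketch, using made-up confusion-matrix counts chosen only to illustrate how a 67% miss rate translates into the standard metrics (none of these numbers come from the actual study):

```python
# Hypothetical confusion-matrix counts for a sepsis alert model.
# Illustrative numbers only, not from the validation study itself.
tp = 330   # septic patients the alert flagged
fn = 670   # septic patients the alert missed (the "67%")
fp = 1800  # non-septic patients falsely flagged
tn = 7200  # non-septic patients correctly left alone

sensitivity = tp / (tp + fn)   # share of true sepsis cases caught
specificity = tn / (tn + fp)   # share of non-sepsis cases not flagged
ppv = tp / (tp + fp)           # chance a given alert is a real case

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, ppv={ppv:.2f}")
# → sensitivity=0.33, specificity=0.80, ppv=0.15
```

Note how a model can look fine on specificity while still missing two out of three real cases; that is exactly why external validation against outcomes matters.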
This is a fantastic piece by a former Politico reporter. The interesting thing here is that it's not based on just clinical data, EMR data, or data from an IRB-approved study. Without your consent, data aggregators are buying up data about the food you eat, the time you spend watching TV, and the kinds of medications you've been prescribed before, and they are mixing that up into a soup of prediction and then selling it to providers in order to predict your risk. This is unregulated, and it is used without patient knowledge or consent. This is Tammy Dobbs. She lives in the state of Arkansas and has relied on at-home care for years. The state of Arkansas gave Tammy 56 hours of care per week to do basic tasks like getting out of bed, going to the bathroom, making meals and cleaning. Then in 2016 the state implemented an algorithm that decided Tammy could only get 36 hours of care. She had to ration that care and schedule when she was going to go to the bathroom and eat. In this specific case, the algorithm was so wildly irrational that not even the developer understood how and why it made the recommendations that it did. And where does that leave patients when something like this happens? It leaves us with loss of care, loss of benefits, inability to get food, back to the words of Erin at the beginning of my talk, and with systems that can be used to charge you the most amount of money in order to give you the least amount of care. Now let me take it back to my little domain that I know a lot about, which is BRCA, which happens to be one of the most studied genes in the human genome. If you remember, I told you those super high, scary numbers, 87% and 60%, back in the beginning when I was told my genetic risk. You have to keep in mind, when we talk about informed medical decisions, that was like a death sentence to me at 25 years old, and a lot of my path and my identity has been shaped by that sentence.
And what I've wanted are better options, not only for myself and my future, but for my family and my community. Since the patents were overturned in 2013, as I mentioned, we've expanded access to genomic testing and all kinds of technologies, and we're learning about new predictive algorithms that are very interesting but still need to be validated in order to be worthy of clinical-grade decision making. So these are in early stages. Science now knows there are over 66,000 variants on BRCA1 and BRCA2 alone. The way we classify those variants: some are harmful, meaning you might develop cancer, and I don't want to scare anybody; some are totally benign; and in the middle, there are variants of uncertain significance. Some of those variants of uncertain significance are so rare that we will never have enough data in the population to make accurate predictions, but they can still be harmful. The ways we are predicting these things include, for example, in silico predictions, to be considered with other evidence in partnership with your clinician, or functional scores. These look at things like where on the protein the mutation happens, or whether model organisms carrying the mutation all develop cancer. Those are ways you can look at prediction of a health outcome without actually looking at the population. This is Dr. Lynette Hammond Gerido and Valencia Robinson, Lynette on the left, Valencia on the right. Not pictured is Tia Tomlin, founder of My Style Matters in Atlanta. I want to share this as a good example of patients representing themselves in partnership with a researcher to tackle the problem of genomic disparities in BRCA and data sharing. What Lynette has done in this initial study is look at the many layers of the cancer disparities problem.
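To give a feel for how multiple lines of computational evidence get combined into those three tiers, here is a deliberately toy sketch. The function, weights and thresholds are all invented for illustration; real variant classification follows formal evidence guidelines with many more inputs, and is never reduced to two scores:

```python
# Toy sketch of tiering a variant from two invented evidence scores.
# Weights and cutoffs are made up; real classification uses formal
# multi-evidence guidelines interpreted by clinicians.
def classify_variant(in_silico_score: float, functional_score: float) -> str:
    """Combine two lines of computational evidence into a rough tier.

    Both scores run 0.0 (benign-looking) to 1.0 (damaging-looking).
    """
    combined = 0.6 * in_silico_score + 0.4 * functional_score
    if combined >= 0.8:
        return "likely pathogenic"
    if combined <= 0.2:
        return "likely benign"
    return "variant of uncertain significance"

print(classify_variant(0.95, 0.90))  # both lines of evidence point to harm
print(classify_variant(0.10, 0.05))  # both lines of evidence look benign
print(classify_variant(0.50, 0.60))  # conflicting or weak evidence: VUS
```

The middle band is the point: when evidence is thin or conflicting, an honest system says "uncertain" rather than forcing a prediction, which is exactly where rare variants in under-studied populations end up.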
We know from prior studies that African American women are 39 to 44% more likely to die from breast cancer than white women. There are many layers to that problem, including lack of access to care and systemic racism in the healthcare system, but I want to talk about how genetic testing doesn't work for people of color. Because those genes were patented until 2013, what clinicians and the guidelines at the time said was that BRCA mainly affects Ashkenazi Jewish women, with breast and ovarian cancer at the very extreme end of cancer risk in the population. Therefore, only people with those certain markers were able to get tested. Back in 2005, there were so many hoops you had to jump through to get insurance coverage for a $4,000 test, and getting in to see a genetic counselor meant a six-week wait. Over time, what happened was that people with less access to care were not part of that original data set or the development of those algorithms. And now we have a problem where genetic testing for BRCA literally does not work for Black people, because the way we have developed genetic testing is on reference genomes for white people. So we need data, and we need algorithms, that represent diverse populations. My thesis here, looking at this example, is that this work needs to be led by, and there needs to be institutional support for, researchers and patient community advocates like Lynette, like Valencia and like Tia, because that's the only way we are going to earn and build trust. This is their time to lead, and it's very exciting. I promised I'd talk about hacking, hacking predictive algorithms. So here you go. Maybe it's not that hard. You think of it like, oh my gosh, you've got to do the thing and get into the system and breach it.
I don't even know if I said the right thing there, but here's a beautiful example: Simon Weckert, a German artist who took a wagon full of cell phones down the street in order to affect Google's algorithm on whether or not there was traffic. I think this is a fantastic example; while it's not in healthcare, it applies. The barriers to entry for hacking a predictive algorithm are pretty low: you just feed it bad data, and there you go. So up until now I've really focused on algorithmic bias, where testing works and where it doesn't. When we lose care and we lose trust and we fall through the cracks of the healthcare system, where do patients and the public go? They go to social media. When patients fall through the cracks of the healthcare system, they turn to social media for help, support and resources. I'm telling you this because I did it. In 2013, when I was going through my surgeries, I found a beautiful, brave support group, before the days of disinformation, and it was a lifeline for me. Finding people with a shared identity and a health condition is one of the most important parts of the healing process; it has been for me, and many studies show that. The question becomes: how do we guide that, and do it in a way that isn't exploiting people or harming and abusing their privacy? And what happens outside the walls of the healthcare system that researchers and clinicians often don't think about when they don't have a TikTok account? Well, I will start with this fantastic study that was done in 2019, evaluating the predictability of medical conditions from social media posts. This was in PLOS ONE; it was roughly an n of 1,000, and the posts analyzed through natural language processing numbered roughly a million, 949,000.
And so what they did was take all the posts, look at different types of medical conditions, and correlate them with ways to predict risk of diabetes, pregnancy, anxiety, psychosis, everything down to really personal things like whether or not you're likely to abuse alcohol or whether or not you have a digestive disorder. You have to keep in mind, this was an IRB-approved study with consent from the people involved, and the power of these predictive models is very early. But Facebook has a lot more data than that. We know that when predictive analytics are used in ad targeting on social media, pharma companies are able to target you based on your health conditions. One might think: great, more effective marketing, you can get to the right patient at the right time and really get effective about micro-targeting these populations. But you have to keep in mind that back in 2018 we warned Facebook about the huge problems with their group architecture and the way it was feeding into group recommendations and interests. Those warnings were basically swept under the rug, and then a pandemic happened. I want you to think back to that Practice Fusion example, where they had a business model to target ads to physicians. What I'm showing you now is outside the walls of a clinic, where we just go around the physicians and target directly to the patient. And the key here is to know that there is no digital governance protecting vulnerable populations from being micro-targeted. These three ads are just three examples of what my community sees when we go on social media, three examples out of thousands. You can target bourbon ads to alcoholics. You can target snake oil and fake treatments to us. You can send out a scam of DNA-based life insurance
to people with genetic mutations, so that you can deny them care, and that is actually a real thing that happened. Between July and November of 2020, the healthcare and pharmaceutical industry in the United States spent $198 million on Facebook advertising; Instagram followed with $151 million, and ad investments have just exploded when it comes to healthcare and direct-to-consumer pharma ads on Facebook and elsewhere on social media. We need to stop shifting risk down the road and saying to people, oh, well, they should have known better than to put their health information out there on the internet. Communities rely on old and outdated tactics when responding to misinformation. I have witnessed it time and again, from the World Health Organization to the CDC to hospitals: putting up FAQs on their websites, download yet another app so you can learn about medical misinformation and get educated, and yay, we're going to be the experts teaching you. That's not working. Obviously, we're in a place where that's pretty much played out, and we need to try some different tactics. Towards the end of this presentation, I'm going to give you a few examples of what I think may work. What we don't see is also a lot more dangerous. I'm going to give you this very important example of clinical suicide risk versus social suicide risk, and I'll point you to this fantastic piece from Mason Marks on artificial intelligence based suicide prediction, in the Yale Journal of Health Policy, Law, and Ethics. Think about a clinical setting and traditional risk models for suicide: something as intimate as predicting your risk, and making informed decisions about how to navigate care, has got to be a really hard thing, right? Well, unfortunately, in the traditional clinical setting, the risk scores and algorithms that have been developed for suicide risk are notoriously inaccurate.
This study breaks down the sensitivity, specificity and level of evidence for a few dozen different predictive risk scores that have been developed with IRB approval. The gist of it is that a lot more needs to be done before these are worthy of informed clinical decision making in practice with your doctor. And at the very least, something like this should be in your own hands, a risk score that you can make decisions about on your own and be informed about, rather than something that is a decision made about you. Well, it's starting to be possible to predict risk of suicide based on what you post, and it doesn't have to be that you intend self-harm. This is a study on a machine learning approach for predicting suicidal ideation from social media data. There are a few studies like this right now, in their early stages; this one had a participant group of 283. Because the study was on Twitter data, they felt it was important to outline that there was no need for consent, because the tweets were public. And that's an important thing: we know when we go on Twitter, any user knows, there's no expectation of privacy, and what you broadcast is what you broadcast. But what you may not know is that your words could be predictive even if you're talking about something totally unrelated. Those words could be used to train machine learning algorithms on your risk of suicide, and then who knows if they're accurate or not. Those proprietary black boxes can be sold off to make decisions about your employment, your ability to get a mortgage, a loan, a student loan. Who knows? It's a black box, and that's a problem. And it begs the question: should your risk of suicide be a trade secret at all? Should your risk around any health information be a trade secret? I get that companies have to build business models on the software they develop, but there are different ways to do this.
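To make the mechanics of these post-prediction studies concrete: most of them start from bag-of-words features extracted from post text. Here is a deliberately tiny, pure-Python sketch with invented posts, labels and keywords; real models use far larger corpora, proper tokenization, probabilistic smoothing, and clinically validated labels:

```python
from collections import Counter

# Invented training posts labeled with a condition, for illustration only.
posts = [
    ("so thirsty all the time and always tired", "diabetes"),
    ("checked my sugar again levels are high", "diabetes"),
    ("cant stop worrying heart racing all night", "anxiety"),
    ("panic again before the meeting couldnt breathe", "anxiety"),
]

# Count how often each word appears under each label.
word_counts: dict[str, Counter] = {}
for text, label in posts:
    word_counts.setdefault(label, Counter()).update(text.split())

def predict(text: str) -> str:
    """Score each label by word overlap with its training counts (naive)."""
    scores = {
        label: sum(counts[w] for w in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(predict("tired and thirsty lately"))    # leans diabetes
print(predict("worrying and panic all day"))  # leans anxiety
```

Even this toy shows the problem the talk raises: ordinary, unrelated words carry signal, so anyone with enough of your public posts can score you against conditions you never disclosed.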
And as we think about paths forward and ethical AI, this is a really important one. And it scares me, because when we are aggregating the pain of people, or making decisions about somebody without their knowledge or consent, that is, hopefully I've said it over and over again, a complete breach of our human rights and our privacy, and it shouldn't happen. You should feel free to be as vulnerable as you want to be on social media without thinking about the implications of what you post. And by the way, that goes for any jerk on the internet who wants to share things that are misinformed; people need the freedom to learn, to disagree and to be wrong. And trust is completely broken. Maybe I have shared enough at this point to convince you that trust is broken in institutions and in technologies, and patients and the public know it. I personally have an interesting definition of trust that comes from Stanford professor Lindred Greer, who studies teams and how teams behave. One of the things she says is that trust is not a feeling; it is a measure of how accurately you can predict benefit or harm. And often I think we look at patients as an abstraction. We think of us as a market, or as consumers, or as users with no rights, and therefore our voices don't matter. I think of patients very differently, as needing to be represented in the policies, the technology and the field of cybersecurity. I think patients like me who are activists are hackers too; they just don't know it yet. And if we build trust, we have to do so in ways that are accurate and measurable. What do we do, and how do we do that? As early adopters of technology, the BRCA community and I have been hacked, scammed, sold and patented by the last generation of innovators in healthcare, and I think it's time for a change. I hope I've scared you enough, because now I'm going to get to some more lighthearted stuff and a path forward. What is my call to arms?
Well, I'm going to channel Rosalind Franklin, the original X-ray crystallographer who found the structure of DNA and then had that work co-opted by Watson and Crick. She said science and everyday life cannot and should not be separated. She also died of ovarian cancer. The truth is, when it comes to dismantling the problems that we share, these are all problems that, the second you go inside the walls of a clinic, are going to affect you too. And I think we have to start bringing together the experts and institutions by supporting communities of patients and their leadership with digital governance. In order to create a structure to address these problems, as I shared at the beginning, we formed a nonprofit to serve patient communities and represent our collective digital rights. Today we think of informed consent as: okay, if I look at a 50-page document and sign at the end, while I am about to go into the emergency room, box checked, I've given my consent. That doesn't do justice to anybody, it is not sustainable, and it is further enabling these systems of harm. So our mission is to help peer support groups, people with shared identities and health conditions, to build capacity around digital governance and foster healthy human connections when it comes to representing our identity and our rights in the health data and technologies that affect our lives. Heidi Larson, the vaccine anthropologist, said it beautifully: when we think about the problem of medical misinformation, we should be looking at rumors as an ecosystem, not unlike a microbiome. Instead, today we just keep saying go here, go to this FAQ and educate yourself; when people don't trust it, it doesn't add anything to their experience. So here's what I am doing. Take this, for example: a picture of how the BRCA community, the breast cancer community and the brain tumor community interact.
This is a network graph, and I'm learning how predictive algorithms work so that I might be a part of the solution. For the past year, I've been working with an amazing open source project called Project Domino. In fact, Leo Meyerovich and Cody Webb may be in our Q&A, so keep an eye out for that. What we've been doing is working to understand how our networks are visualized, and helping to inform the design of predictive algorithms where there are often false positives and false negatives. How do we become part of the solution and represent ourselves in the data and the algorithms that we're developing? I think this leads to a more important question in my mind: patients like me can be hackers too, we just don't know it yet. Can we be part of a digital community of health workers who are helping to counter the effects of medical misinformation and become stewards of fairer representation in technology? So, a little nerd-out and eye candy: this is my little Jupyter notebook repository of BRCA tweets, and here's a beautiful visualization of some of the things we've been playing around with. This is actually looking at disinformation networks and bot networks and understanding how to identify them. When you're a patient community going through trauma and diagnosis, you might not know the first thing about whether a bot network is targeting you, or whether that's real, and you might only see a small sliver of the problem. This is a way to think of it as a living, breathing ecosystem, and to see in a visual way how information and knowledge are shared. What is my call to arms? I am asking those listening to understand that patient communities, who are users of technology, have no rights when we are generating health data as the supply in the supply chain, and those data and algorithms can either cure us or kill us. We need digital rights.
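For a taste of what looking at a bot network structurally means, here is a minimal sketch. The accounts and edges are invented, and the heuristic is deliberately crude; real detection (as in projects like Project Domino) layers timing, content, and account-history signals on top of graph structure:

```python
from collections import defaultdict

# Invented retweet edges (retweeter -> original poster), for illustration.
edges = [
    ("bot1", "spam_src"), ("bot2", "spam_src"), ("bot3", "spam_src"),
    ("bot1", "spam_src"), ("bot2", "spam_src"),
    ("alice", "bob"), ("bob", "carol"), ("carol", "alice"),
]

in_deg: dict[str, int] = defaultdict(int)   # how often an account is amplified
out_deg: dict[str, int] = defaultdict(int)  # how often an account amplifies

for src, dst in edges:
    out_deg[src] += 1
    in_deg[dst] += 1

# Crude heuristic: accounts that repeatedly amplify but are never
# amplified themselves look like one-way megaphones, a common
# signature of coordinated bot rings.
suspects = sorted(a for a in out_deg if out_deg[a] >= 2 and in_deg[a] == 0)
print(suspects)
```

In the toy data, the organic community (alice, bob, carol) amplifies each other in both directions, while the bot accounts only push one source, which is the asymmetry the heuristic picks up.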
So this is a draft of six principles for rights that we drafted in 2019 with the help of patient leaders, disability rights activists, patients, and policy and technology experts. I encourage you to go take a look at it on our website. This is very much in the early stages; it's just a first draft. I have another call to arms, and it is not to the people above me who think of themselves as experts representing institutions, or elite hackers who are just talking about hacking all the things. This is for the people beside me, who are struggling to learn about the technologies that affect us and that we often struggle to trust. We need to start building core competencies in technology as we organize communities on social media, and help them not only engage with experts in the healthcare system but also advocate for our digital rights. We are going to be piloting a civic trust. This is a legal model, and trust law is nothing new; it's been around for a thousand years. But when we have a fiduciary responsibility to shareholders instead of users or patients, that is part of the fundamental misalignment of incentives in business models that can cause harm, like Practice Fusion, which I shared, or ad targeting on social media. If we think about patients and leaders and experts as data stewards who can be part of a new model and define the purpose of how data are used, with communities as the beneficiaries of those data, I think we have an opportunity not only to create a lot of value in the design of better systems that improve health outcomes, but also to take a more networked approach to representation of communities and diverse identities that can be a part of a path forward. Finally, and I think I already said finally, but finally again, I am asking you to partner with us and directly work with patients as colleagues and equals rather than as research subjects and users.
I am asking you, when you think about diversity on your team, in cybersecurity or elsewhere in the design of your healthcare systems: do you have patients represented on your team? If you don't, you are part of the problem, because we can help, and we need to be treated not just as advocates being asked for feedback. We need to be treated as experts in our own experience, and to build capacity and skills, not only in the career path of cybersecurity but in other places too. Because when we do that, there are so many examples of where we have changed things: not only in the diabetes community with #WeAreNotWaiting and OpenAPS, but also think about the HIV and AIDS movement and how that changed the course of their disease. So, finally, a third time finally, I'm asking you to join us in rebuilding trust. You can follow us on Twitter and check out the Light Collective. I'm going to be sharing how to get involved with us if you are on the Discord channel, and I'll be tweeting a few things out on how to get involved. We are currently seeking applications for our first cohort of partners, and we would love for you to be a part of that. Finally, I want to give a whole bunch of thank-yous: to the board of the Light Collective, who are patients who have been working on this for two and a half years; to our advisors, our legal advisors, our clinical experts, our cybersecurity experts and our health data experts. I also want to thank the incredible communities on social media, like #BCSM with Alicia Staley. I want to thank the Robert Wood Johnson Foundation for supporting our work, and the BRCA Sisterhood, the American Living Organ Donor Fund, Digital Public, My Style Matters, and so many more. So thank you, and I'll see you in the Q&A.