Hello and welcome to this wonderful discussion on a subject which, I think, literally impacts our lives: transforming medicine and redefining life as we know it and humans as we know them today. I think it's not just that medicines are transforming us; we are also transforming medicines. So the discussion today will be about what is happening, how it is happening, what the guardrails are and what they should be, how we prepare for these changes, what is good and what is perhaps something we have to be careful about. I'll be joined by three experts, leaders in the subject. Let me quickly introduce all three of them. Nita Farahany, Robinson O. Everett Professor of Law and Philosophy and Director of Duke Science & Society, Duke University, USA. Megan Palmer, Executive Director of Bio Policy and Leadership Initiatives and Adjunct Professor, Department of Bioengineering, Stanford University, USA. Kuldeep Singh Rajput, Chief Executive Officer of Biofourmis, USA. And I'm Pranjal Sharma from India. We're going to begin, and I'm going to start with you, Nita. There is a lot of exciting change taking place, but while we'll discuss examples of that, there is also some anxiety. There is excitement, but there is anxiety. When we look at issues of digital twins and personalized medicine, we look at issues of the data of our human bodies floating somewhere in the metaverse, perhaps. How do we prepare for these changes, where we should be looking for a better life, but a lot of people want a longer life? How is that playing out? It's a good question. Hi, everybody. I'm glad that you're here with us this morning. I am an ethicist and a futurist, and what I really focus on is how we ought to think about emerging technologies, how they impact our lives, what some of the risks are, and really how we maximize the benefits of technologies for humanity.
And one of the areas I've been focusing on a lot recently, because of this extraordinary intersection between advances in artificial intelligence and machine learning and advances in nanotechnology and engineering, has been the developments in neurotechnology. And I think the promises in neurotechnology are extraordinary, for everything from really addressing some of the root causes of human suffering, from neurological disease and degeneration to mental illness, but also for unlocking a lot of the secrets of the human brain. Part of what I think is really exciting in that space is the coming age of wearable neurotechnology: rather than implanted neurotechnology, neural interfaces, as sensors become much smaller and much more integrated into multifunctional devices. So if you can have an EEG sensor in each ear as part of your earbuds, where you also take conference calls and listen to music, but you have brainwave activity being monitored all day, every day, do we suddenly have Fitbits for the brain that enable us to track brain health in ways that we haven't been able to over time? People are very used to quantifying other areas of their health. You go in, you get a cholesterol test, you have a blood test. People now use watches with ECG sensors that track their heart rate. They're very familiar with their heart health, but up until now there has been very little understanding of the human brain or self-reflection on it. And I think that will be extraordinary, and also a little bit terrifying, because as we realize that we can track and decode a lot of what's happening in the human brain, it opens up significant ethical risks around who is using the data and how they are using it. It also challenges our own self-conception. You think that you are a morning person. You're here because you believe you're a morning person. You believe you work most productively first thing in the morning.
You believe that you're more focused after you have a cup of coffee. And then the brain metrics start to tell you that actually it's your worst time of day for focus, that you are the least productive, that you have the most mind-wandering, or you start to see things like cognitive decline happening over time and a slowing of your mental processes. How does the quantification of brainwave activity outside of our bodies, where we can reflect on it, change our conceptions of self? How does it change how we think about who we are? How does it change how other people think about who we are? So I'll just start there as a teaser to open up the conversation. Thank you. Megan, the phrase bioengineering is so amazing. I'm an economist, and I know that a lot of scientists are here in the room and watching us online as well. So let me ask as an economist: what does bioengineering mean? Does it mean that we create new people out of spare parts, like we do with vehicles and other machines? Well, first of all, wonderful to be here, especially with such a packed room of science enthusiasts. So I'm at Stanford University now in our Department of Bioengineering, but I also work across the university with some of our ethics, society and technology efforts, as well as with our Institute for International Studies, really looking at the ways that advances in science and engineering are shaping our world, including the ways that we relate to each other, to ourselves and to our shifting environment. So it's great to be here at Davos looking at many of these questions. Some of us may be new to bioengineering. At Stanford, this is the newest engineering department; the last department that we opened was computer science. So you can imagine the impacts of this new field that are being anticipated and invested in, as we look to the next revolution in our ability to solve many problems, not only in health but also in other areas, right, in manufacturing and climate change and beyond.
But certainly, I think the advances in medicine are some of the most exciting. My own PhD was in bioengineering, specifically looking at how we understand and engineer our immune system. Our immune system is fascinating, right? It has capacities to detect and go after things that it has never seen before. And now, through fields like synthetic biology, which is the area I've primarily focused on over the last 20 years, we're developing the foundational tools and knowledge to be able to engineer our biological systems, right, the most sophisticated technology on the planet, to do new and useful things, including potentially reconstituting life itself from those basic components. And so we're looking at the types of advances like: can we re-engineer our immune system to now be able to precisely go after cancers? That's one of the most exciting areas, where we've seen dramatic advances. But also, how can we open up new interfaces with our body, not only in the brain but also in the gut? Can we have real-time surveillance and markers of the types of things we'd like to see? So it's really quite dramatic and very exciting. And, you know, you introduced another phrase which some of us would find new: synthetic biology. I'm going to come back to you on that. Okay. Kuldeep, you're using a lot of artificial intelligence. And I think, I don't know if it's happening, but it basically means that artificial intelligence is doing the clinical trials. It's figuring out what's right and wrong with us. And it also means that scientists and researchers are very worried. Is that true? Yeah. So I think there are always both sides to the coin. If you look at the advancements that are happening in the industry today, specifically, you know, the pandemic really accelerated two big trends worldwide. One is how we deliver care, and this is complex care.
Acute care in the home, post-acute care in the home, bringing the hospital into the patient's home. And that trend we have seen significantly advance over the past few years. On the other hand, you mentioned clinical trials. How do we virtualize a trial? It's not easy, you know, to run a clinical trial. Maybe only some percentage, like 20 to 30 percent, of trials could go virtual. And we have seen 10 percent growth in clinical trials going virtual year on year over the past four years. So if I can interrupt you, Kuldeep, what does going virtual mean for a clinical trial? Yeah. So, you know, let's take an example, a cardiology trial. If you want to run a cardiology trial, typically, you know, every three months or six months the patient comes back to the site, or the hospital does regular lab tests, maybe imaging, and keeps doing that over a two-year duration. All the monitoring is very episodic: you're getting four or five data points every year, during the visits. When I say trials are going virtual, there are a few big benefits to it. The first one is being able to continuously monitor patients using sensors, like Nita mentioned, to gather all the data, and use the data captured to analyze any complications or side effects the patients might have much more rapidly, versus every three or four months, which traditionally gives us very limited information. Second, when trials go virtual, all the visits that traditionally would happen in the hospital are moving into the home. So labs in the home, medication in the home, infusion in the home, radiology in the home. Patients don't need to technically go to a site to take their measurements. And what this really enables, for the pharma companies and for all of us, is getting drugs to the market faster.
You know, decentralized or virtual clinical trials significantly reduce trial duration, thereby giving us richer data and getting products out to the market faster. On the flip side, the challenge with decentralized trials is that the industry is struggling with patient recruitment. How can we enroll patients faster? Do we just do it at one site, in one specific region? What really needs to happen, and really needs to scale globally, is that we can enroll patients from anywhere. And what that enables is a diverse population in the clinical trial, different demographics, which all of us require today. I think that's a great point, because digital clinical trials are also about reaching specific communities, which you often don't get, along with the diversity. And this we saw, and I can give you the example of India: when we developed many of these COVID vaccines, some that were developed in India were suited to Indians and their ethnicities, because there's diversity even within India. And some of the ones developed in the US had to be tested in India, to see whether they would work on people living in a different context. So I think that's advancing it a lot. Nita, I want to come back to you. You're a futurist and an ethicist. One of the questions which I've always found fascinating is about not just precision medicine but predictive medicine. Where it's heading is that if you have information about a person, perhaps a digital twin, where you have all the information about how the person's heart, lungs and liver work, you can know what's going to happen to them. There are some ideas around that. That has an impact on insurance, on social connections, on your employment. Somebody can look at your CV and find the data: I don't think you're going to be around for the next three years, so I'm not giving you a job.
So it sounds facetious, but it can have very serious implications. It's exciting, but again, how do we prepare for this? What are the guardrails that we need for it? Yeah, it's such a layered issue, right? So once we can predict the future, and as a futurist I will say you cannot predict the future perfectly, right? But once you can, probabilistically and through modeling, much better see what's going to happen. Take, for example, the fact that we can already start to see signs of Alzheimer's many, many decades, potentially, before a person starts to manifest the condition. Do they want to know? And if they don't want to know, should other people have the ability to know? Should an insurance company be able to make choices about whether to cover them? Should an employer have access to that information to make decisions about whether or not they are somebody they ought to employ? A lot of people and a lot of different organizations that I work with struggle with questions around genetic predictions. So particularly for highly penetrant mutations, meaning it's very, very predictive that you'll likely develop the disease, take a disease like ALS, for example. But you don't know when. So you have incredibly high prediction but very little sense of when the onset would be. How do you counsel somebody about how to integrate that information into their lives? Whether or not they should do genetic testing, and what the implications for their family members may be as well? Because if they have that particular gene, that particular mutation, it may very well be that their children have it, or it may very well affect whether or not they decide to have children and pass it along to their children, whether or not it could be corrected through synthetic biology, whether or not it's something they would want to correct.
So as we have these developments, we have to think about how we make sure that people are prepared for the information that's being developed, but also that society is prepared for it. As an ethicist, what I try to do is educate people about the broader set of implications and help them think those implications through. You want to undergo genetic testing? Here's why genetic counseling may be valuable, and here's the broader set of social, psychological and other issues that you may encounter, even financial ones, that may shape how you want to think about it. I also try to work with corporations and governments and international organizations to help define the principles around how that information will be used and governed in society. Should we make it off-limits, for example, for an insurance company to have access to that information about individuals and to make choices about whether to cover them or to exclude them? Should employers have access to that information? Should the individual have access to that information? Should it come directly to them? Does it have to go through a trusted intermediary? We have to be in an ongoing dialogue. It's very difficult for laws and regulations to keep up with the pace of innovation. But that doesn't mean that social organizations, international organizations, nonprofit organizations, non-governmental organizations and corporations can't be continuously asking the questions and addressing them as the developments come along. You mentioned corporations, and I'm going to turn to you on that, Megan. You know, the whole idea of precision health, and also synthetic biology, which you mentioned, raises the question of who is going to invest in the research for this. We know there is a history of pharma companies only investing in those healthcare issues which can give them the maximum return.
When you know that, well, 100 million people are going to need this particular medicine, you invest in it. We also saw, and I think COVID taught us, that the Western world had literally given up on vaccines. They were mostly needed in the emerging markets, and therefore it was not investing in them. And that's why India, for example, is the largest producer of vaccines in the world: because we need them, the African continent needs them, many of the emerging and developing economies need them. My question then to you is: when you have precision medicine, will it increase the healthcare gap, because only those who can afford it, only a few people, can say, well, just make that special cocktail for me, and I don't need the cocktail you had last night? There's going to be a cost-investment configuration which people will have to figure out. Well, this is again a very multi-layered topic, and it's where we have to realize that we have a choice. We have a choice in the types of ambitions and targets that we set for ourselves. As we look at the frontiers of science and innovation and medicine and their impacts on society, we need to have ambitious targets about the types of technologies and opportunities that we open up, but we also need to couple those with ambitious targets in terms of equity, and with the types of experiments that we need organizationally in order to see what works, both in terms of outcomes, the health effects, and in terms of trust, right? The trust between communities that helps ensure that innovations actually have the intended impacts, both in times of stability and in times of crisis, when new types of health burdens emerge, of which pandemics are certainly a large one.
So, a lot of my work over the last 20 years, and certainly over the last dozen or so, has involved working between public research institutions and a number of different private entities, and working with the government in the U.S. and governments around the world, on how we think about funding and organizing science and medicine in order to do exactly these things, right? To be able to deliver the types of innovations, but figure out the financing around them, and how to incorporate these ethical issues that appear at all stages of discovery and innovation. And we actually just had a whole other panel, just before this, on fostering scientific collaboration across borders. And there, what I've learned is that we really do need to try many different models, and the particular model might be different in different contexts. But what's key is committing ourselves to that goal. Where there might not be a market today, if a few public leaders stand up and say we are committing in order to have impacts not just on personal and individual health but on public and societal health, then I am very optimistic that we're going to get there, but it won't be easy. Thank you. I'd like you to keep your questions ready, please. After a quick intervention from Kuldeep, I'd like to come to you and get some of your ideas and thoughts as well. I have another simple question for you. Will artificial intelligence improve equity or reduce it? Because we are looking at many such situations emerging, Kuldeep. The challenge is, as both Megan and Nita just said, these are serious, complex issues. Is technology going to resolve them or make them worse? My perspective is that it's going to resolve them, and I'll give you a few examples to illustrate that.
Today, when you do big data, when you have a lot of data and you try to build population-level models, you always have biases when you use AI. But from my perspective, what AI can really do is provide personalized care to patients. I'll give you an example of one of the major problems in the US. Let's take heart failure. One in four patients comes back and gets readmitted within 30 days after discharge. Less than 1% of patients with heart failure in the US, as well as worldwide, are on optimal dosage, and that accounts for 60% of the reason why patients come back and get hospitalized: $160 billion wasted every year. When we started tackling heart failure five years ago, we said, okay, given all these problems, there are two ways we are going to solve this. First, can we predict heart failure exacerbation ahead of time, so that clinicians can intervene early and we reduce these rehospitalizations? And second, once we detect it, because we didn't want to just stop there, we wanted to go a step further and ask: can AI accurately identify the right, precise dosing for the patient, so that we can get the right dose to the right patient at the right time, eventually reducing cost? So we proved that in a number of clinical trials. Today we manage almost half of the entire heart failure population, and what AI could really do is capture continuous biomarkers from patients passively, in the comfort of their home, use all of that to detect or predict clinical exacerbation, and dose precisely. The biggest challenge for us was when we went to the FDA and the regulators; there was a big question: okay, this has never been done before, how can AI automatically dose a patient? In fact, 90% of patients with heart failure have multiple comorbid conditions. So imagine the complexity in terms of accurately dosing.
Of course, it took a lot of convincing and a lot of clinical trials, and this was the first time, after we ran the trial, that the FDA granted the first-ever breakthrough designation for software and AI that can accurately solve this problem. What level of accuracy are we talking about? Yeah, so specifically, if you look at early detection of heart failure, around 90 to 93% accuracy. But for the dosage, I'm worried about the dosage. You know, the way we resolved that issue was: whenever there was an accuracy concern and we felt that clinical oversight was needed, we had clinical oversight. So you still need the humans, right? We needed the humans 5% of the time. Oh, gosh, okay. We needed the humans. You won't talk about the 95%, but let's take some questions from the audience. Yeah, but I'll finish with one point. In medicine, I don't think AI is going to replace physicians and nurses completely. And like the quantification. Yeah, nurses and physicians have to use technology to make themselves more operationally efficient. And that's how we are going to see the industry evolving over the next 5 to 10 years. Fair point, because actually, even in a hospital room, just checking whether the IV drips are running or ending: there are sensors which can alert the nurses' desk that something is not right, please rush in, even before a nurse physically comes to check. So I appreciate your point. So let's take some questions. We'll have a mic here, please. Right here, the gentleman. Yes, if you can raise your hand so that the team can see you. Kindly introduce yourself. Thanks. Jeff Richards, AO Foundation. I'd like to connect all your parts, because they're all relevant to this. You mentioned sensors. Sensors are the future. Digitalization is the future. So we have a new sensor to replace X-rays in bone fracture healing, so you can see what's going on exactly, all the time. It's autonomous.
It can tell the patient, directly on their smartphone, how they should load the bone, how much they should load, and so on. The economic benefit of that is that people don't have to go to the hospital all the time for X-rays; they don't need the check-ups. It tells the doctor when there's a problem, at the same time. So this is also very good. It's collaborative; it has to work with everyone in the world. All the insurance companies and the regulatory approval bodies have to work with it. And finally, Kuldeep, with your area, AI. This is the nice new bit: if we can push the FDA to do better regulatory approval on this, all this information can go into an actual AI clinical trial without the huge costs of the nurse studies and so on. And it really would save a lot of money. And sensors, that's just an example; you could do this everywhere. So digitalization really is massive, and it's coming as the future. Let me add one thing just in response to this, which is: I agree. I think that was a really nice connection between the three. One thing that was a little chilling to me to hear last night at dinner, where a really terrific futurist, Amy Webb, was speaking: she mentioned something that had not really been on my radar, but I think should be on all of our radars, which is the coming possibility of deepfakes within medicine. And so think about the cybersecurity issues, with people who have sensors at home, X-rays at home, information traveling between the hospital and individuals' mobile devices, and the possibility of generative deepfakes: you have problems with the fracture healing, you don't have problems with the fracture healing. There's a lot of potential risk that we have to address. And that's just one of the areas where we have to think carefully about this exciting potential; we're all incredibly excited about the possibilities of transforming health.
We need to make sure that we're attentive to the risks of it, so that as we start to adopt it, we're integrating additional security measures, additional ethical guardrails, to have it really help us in the best possible way. That's a great point. You know, a few weeks ago, India's premier medical institute was hacked. We still don't know who did it or why. I mean, we have some ideas; I can't mention them now. But a lot of patient data could have been not just stolen but altered. Yes. Yeah, so as part of my work, I spent, I guess, seven years at the Center for International Security and Cooperation, in which I was actually one of the few scientists working with a lot of political scientists and security experts, really working across domains, and I just wanted to emphasize exactly this. Along every stage of innovation, we also need to look at what could go wrong, in a constructive way, in order to build new tools, but also to do the types of scenario developments that really stretch our thinking about what's possible. And underlying a lot of this is: how do we engineer trust and integrity into these systems at all levels? And just one last note on it. I think this is an area where we can put our science and innovation system to work, not treating these things as things to be circumvented, but actually treating them as science and design questions unto themselves. And I've seen so many different communities really being motivated around asking exactly those questions. A quick intervention from you. I'll probably add one point to what you said. You know, X-rays and other things in the home: one of the things which we all need to be aware of, and we have been seeing, is that healthcare is too fragmented, with a lot of point-of-care solutions. Health systems, for example, are using 10 different vendors, 10 different solutions, to solve a single problem.
But eventually what matters is how you deliver holistic care to patients in the home. And now the question really is: who is that going to be? Because payers pay for outcomes, not for, say, radiology in the home on its own. Can you have a single care-at-home platform which enables management of patients throughout the care continuum, including acute care in the home, post-acute care in the home, transitioning patients to chronic care, and tailoring the levels of service they require in the comfort of their home? So yes, it all depends on patient outcomes, clinical benefits, operational benefits, and how we improve economic benefits. That's what payers care about; that's how regulators look at it. And all of us in the industry see, and will continue to see, a lot of consolidation happening. A lot of point-of-care solutions will be integrated. We already see a lot of consolidators emerging, and that trend will continue over the next 12 to 24 months. We could effectively have an ERP for our bodies. All right. So anyway, any other questions and thoughts from the audience? I think everybody is stunned into silence. Yes, the lady right here. Thank you very much for an exciting discussion. Milena Sokolowska from the University of Zurich. I have a question about AI, because AI in medicine, as far as I am concerned, cannot compete with AI in other parts of industry. For example, you will never get data sets as large as Facebook has, and so on. And we know the algorithms are better when you have a bigger database. And the same algorithms, the same AI, are used in the food industry to give people heart failure, because they eat too much, and the food industry wants them to eat too much. So how do you compete with that? Yeah. So I think there are two different things which we, or the industry, need to be aware of.
In healthcare, as you said, data is limited, but quality data, annotated data, is very limited. There's a lot of data, but the kind of data you need for the application you are targeting is limited. However, in the consumer world and other industries, they are solving multiple problems using AI. In healthcare, I think it's important that we all focus on the one single thing we are doing and the problem we are solving. For example, radiology or imaging: AI to accurately detect certain things works, and we have seen that over and over again, with multiple regulatory clearances by the FDA for that piece of technology. So I think it will evolve. What is extremely critical, and something for all of us to think about, is that health systems, especially in the US, treat data as their core asset, without the ability to share it with people, with companies who could then utilize it to build solutions. So how are we going to access data? How we are going to get access to long-term, longitudinal data with highly accurate annotations is extremely important for us. Our biggest focus has all been about how we can detect certain clinical complications in patients early. We are not using AI to diagnose something, and it's not going to get there immediately, at least not anytime soon. As I said, if we can reduce the false-alarm burden of any alerts, we would be able to improve operational benefits and increase the nurse-to-patient ratio significantly. And that's one of the biggest benefits of AI we see today. But accurately diagnosing? We are not there yet. So, to build on this a little: in some of our sessions here at Davos on AI and on investing in AI, they remarked that this area, in health and in precision medicine, is one of the most exciting.
And certainly we've learned that quantity of data, but also quality of data, and being able to append data sets together and annotate them, is a significant challenge. But we've already seen dramatic advances, again in genomics, and in the ability to look at the vast array of knowledge and predict targets, even more than one target in a cell at a time, so that we can recombine them in new and dramatic ways. And this ends up being at the social scale or very personal, right? I have a family member who has a rare genetic disease, and the data set needed to actually detect that, as a particular change in the genetic code, was not available 10 years ago. And so even though there are these challenges of how to develop these data sets, they offer really dramatic discoveries. And we're seeing investments in everything from large language models to other types of AI tools being unleashed on biological data sets, in ways that can now do things like predict the folding of proteins with amazing accuracy that in many ways we couldn't have dreamed of. So the closer you get to people, right, and to social systems and economic systems and otherwise, the data gets even more difficult to manage. But if we start by looking even just at the molecular scale, there are dramatic things that are possible. I want to come at your question a slightly different way. I want to ask all of you a question. I've talked about how I think wearable brain sensors are coming. We don't have very good brain data sets right now, particularly of healthy individuals with continuous monitoring over time. How many of you would willingly share your data, your brain data, continuous monitoring of your brain? You didn't ask me with whom, right? But okay, so in this group of scientists, about a third of you raised your hands. How many of you would be nervous about sharing your neural data?
Yeah, about half of you. I think part of the problem is a people and social problem: we haven't created a system of trust in which people can confidently share their data without fearing it will be misused against them, and in which they believe they share in the return on the investment of sharing it. We have commodification of data by big tech companies, as you mentioned, and in a lot of other sectors. Everyone in this room understands that the only way we will get to the tremendous insights we need in health, the only way we will be able to address neurological disease and suffering, is to build large, rich datasets linked with a lot of our other behaviors and information. But that makes it a social-system problem: designing the world so that people can confidently share their data. It's not about access restrictions to data; it's about minimizing the harms of sharing and maximizing the benefits to all of us. That's why I think this is a social problem in health. A lot of people treat health data as incredibly sensitive, which is really a proxy for "I fear some dignitary harm or other misuse of my data against me." If we design our world differently, which we can, so that fear is replaced by a visible benefit, for society and for humanity, of sharing data, then we'll get these insights. AI will transform and revolutionize healthcare, what it means to be human, how we treat our health, our bodies, and our longevity, but only if we can confidently share our data.

Yes, I think there are two, actually three, questions queuing up. Let's collect the questions. We have six minutes left, so I'll request that you all be brief.
Let's collect all the thoughts and then come back to the panel.

I want to make a comment on the future by looking at the past. Ancient humans invented glass. You could hold a glass in your hand, drink from it, it was clean, it was great. Then one crazy person broke the glass and used it to kill somebody, and suddenly glass was a threat to human beings. What did society do? We found ways to discipline, control, and regulate it, and we kept the good uses of glass. The same challenges came during the Industrial Revolution, and again humanity adapted. So what I'm trying to say is: we as a society must regulate as we advance, to improve our lives with confidence. Some bad actor, the crazy person with the broken glass, will always come along, and we will need to find new safeguards, but as human beings we should leverage the value of these changes rather than be afraid of them. Thank you.

Sorry, can I just collect the questions, because we have just five minutes and I'd like everybody to share their thoughts. Thank you. Can you introduce yourself?

My name is Martin Stoddard, also from the AO Foundation. It's a similar theme, about trust. The angle I'm interested in is this: as the problems and the solutions become more complex and more difficult to explain, how can we build societal trust when, at the same time, we face misinformation, where someone can quite simply say "a vaccine will harm you", a simple claim that makes lots of people fearful? How can we as a scientific community tackle this societal trust issue?

That's a great point. On the other side, and then the lady in the front.

Hi, Monica Weinberg. I'm an internist, and thank you for a great discussion.
I'm really excited to hear about the monitoring for heart failure, and hearing some of that, I'm wondering whether there are other practical monitoring applications in the pipeline that might be on the forefront, for example for diabetes or the development of atherosclerotic disease. Thank you.

One last point from here in the front? I'm sorry, we're running out of time. This would be a great discussion, but the panelists will be available here for interaction afterwards.

Hi, I'm Sanjita. I work for the WHO in India. I wanted to come back to the question of equity. When will these advances be available, affordably and at scale, to developing countries?

So what I'll request of the three of you is a one-minute answer across these points; you can freely choose whichever you want. I'll start with your question. Acute care in the home, or, as you as an internist know, the emergency department in the home, started emerging during or just before the pandemic. As you know, emergency rooms were flooded, and every health system in the country was looking to free up beds. So we had to build a whole care model in which we bring the emergency room into the comfort of a patient's home and deliver the same quality of care with the same level of safety. We were able to show that, and it covers over 60 different kinds of diseases: heart failure, COPD, asthma, pneumonia, cellulitis, UTIs, diabetes, and multiple comorbid conditions. That's where things like point-of-care solutions, radiology in the home, imaging in the home, and IVs all need to be delivered. I'll also add that, because the need was so big, CMS and the reimbursement agencies started reimbursing for acute care in the home, and recently there was a two-year extension.
So this will continue to evolve across multiple diseases and comorbid conditions, and bringing that holistic care to patients seamlessly, in the comfort of their homes, is extremely important. Thank you. Megan?

I would love to address the comment about technologies that can both help and harm, and how we navigate that. That's a lot of what my research group spends its time on. What's interesting is that we have the capacity to couple science and technology innovations, which can help us do things more safely, or at least monitor systems and know their state, with social and behavioral approaches, engineering at the social scale, as we discussed before. My research group at Stanford includes people from bioengineering but also from social psychology, anthropology, and economics, precisely so we can engineer on both of these levels; it isn't going to be one or the other. We also have to design these systems to anticipate what will go wrong and prepare in advance. So there's a lot of science and innovation to couple here. Thank you.

It's hard to close out a session with such big questions, but I'm going to address the issue of trust, because the way we get to confidence in these systems, the way we truly harness their power for the good of humanity, is by building a system of trust. There are two small pieces of this that I will say we need to address now. One is transparency. There are errors; there are limitations. We can't over-hype science, and we can't over-hype the benefits. We have to be transparent about the limitations, whether that's the datasets, the bias, or the errors, anything we see. The second is realizing that this isn't just one-way communication from scientists to the rest of the public. It's a bilateral conversation.
The patient's, the individual's, lived experience is an essential part of the conversation about what the beneficial applications are and what people actually want to use. Great, we have sensors. Do people want to wear them? Are they comfortable? Do people want to share their data? If not, why not? How does it feel to sleep with one at night? I say that because as long as we have a continuous, bilateral conversation as a society, through this process of democratic deliberation, we will build trust, and we will also arrive at the use cases that are beneficial to society and that society wants, not ones imposed upon it. Thank you.

I'll close with just three words. I think what we can all agree on is that as science advances and technology accelerates it, security, trust, and equity should be the key pillars, the new building blocks, of these advances. Thank you so much for joining us, and please join me in thanking the speakers.