Welcome to our session, called When Your Doctor Is a Robot. My name is Amy Bernstein, and I'm the editor of Harvard Business Review. I'm going to introduce our panel. At the end is Jodi Halpern. Jodi and I are actually pals, but I'm going to read your full title, Jodi, because I don't address you this way when we talk. Jodi is a professor of bioethics and medical humanities at the University of California, Berkeley. Next to her is Leif Johansson, the chairman of the board of AstraZeneca, and I believe you're based in Sweden. Next to me is Lisa Sanders, an associate professor at Yale Medical School, and I'm sure all of you know her from the column she writes for the New York Times Magazine. If you're not familiar with it, do yourselves a favor and give it a read this Sunday; it's just wonderful. Today, we're going to say everything there is to say about AI and medicine. No, what we're going to do is talk about the state of play: where we are with AI and medicine, where we're going, what the hope is, but also where the peril is. I'm going to ask some questions to get the conversation going, and then I'm going to turn the microphone over to you, because these wonderful panelists are eager to hear what's on your mind and answer your questions. So let's get started with a big, broad question. I'm going to start with you, Leif; we'll start up here and then work down. What is the state of play with AI and medicine? Where are we now? I think we're on a gradient that's moving very quickly. We're using artificial intelligence in research, simulating molecules, for example, in cells and immune systems, using genetics to do that. All of that generates a lot of data, and I think the question right now is not whether that data should be used. The question is, how do we use it in the most patient-centric way?
So if we start with the question of how we best provide patients the best care, then we come to things like: how do we best equip the doctor, potentially supported by a robot, in such a way that the patient receives the best care as quickly as possible? My background is that I've been in the pharmaceutical industry for decades, but I was also chairman of Ericsson for quite some time. So technology-wise, we have 5G connectivity with low latency and high bandwidth. You have cloud computing, such that you can get almost enormous computing power in the cloud for analysis, image analysis, et cetera. And you have storage at very low cost. If I look at the opportunities all of that facilitates, compared to other industries, I would say we are actually on the slow side of using it. So when I hear all of the debate about the perils of what might be, which we often discuss in the pharmaceutical industry and in the healthcare industry in the broader sense, I think we should possibly turn it around and say: if we really want to provide patients with really good care, how do we best do that? Then I think we will arrive at the conclusion that technology and data are what's needed: data aggregation at different levels, trust in the system, telling patients what their data will be used for, and then guaranteeing that that's the case. And there you have real issues. But I cannot say that, in general, the healthcare industry is at the forefront of using technology. There is more technology available than we are presently using, and I think we should ask ourselves, in our industry, is that for the benefit of patients? I would argue it's not. So we can use both artificial intelligence and robots. When people hear robots, they think of a clunky figure with electric eyes sitting in a chair and sounding like a doctor.
In reality, robots are a much, much wider category than that. Take India as a good example: India is building 150,000 primary care centers to be able to supply good healthcare in different areas. China is a very similar case. There aren't enough trained doctors, and there possibly never will be. So the only way to staff primary care centers with lower-skilled people, for example for screening purposes, will be with AI and robotics, screening those patients in such a way that they can receive the right treatment at a larger facility, a greater institution. We won't be able to deliver good healthcare, without robots and healthcare data, to the many people in the world whom we would like to have good healthcare. Lisa, what are you seeing? You're a doctor; you practice medicine. How does AI figure in what you do? Well, it's in the background. We have some things that are not really AI; I have diagnostic software, for instance, which is really more machine learning, aggregators of lots and lots of information. I use that when I need it. But AI is used in the background a lot. Mammograms are read by machines, computer-assisted detection devices, I think they're called. So all of that already exists. They've replicated the ability to identify melanoma in certain settings. So it's already there, sort of in the deep background. The thing about the way it exists now is that it's very specific. I'm a general internist and a primary care doctor, although I don't really consider that a low-skill area of medicine. You would if you went to India and saw what they have out in the rural areas. Perhaps. But so it's already here and it's already useful. I think the biggest barrier that I see is data input. How is AI or the robot going to get the data it needs to actually be what I consider a doctor?
I mean, it's not that radiologists are not doctors, but for the doctor that we go to, how is that data going to come from the patient and go into the system? As doctors, we ask questions, we do exams, we evaluate tests. Right now AI doesn't have any way to do that. It might be able to ask questions, but then it can't really assess the validity of the answers. It might be able to do a physical exam, although that's hard to imagine. I think that AI will totally replace, or at least augment, doctors in a useful way when we have what Scotty McCoy had on Star Trek. But that's in the future. So I just want to understand better: what do you mean by they can't assess the validity of answers? Is it that they don't have the EQ, that they can't see when someone might be dissembling, or what is it? It's not exact. Dr. House used to say everybody lies, and that's totally true; if you're a doctor, you know that's totally true, and you know that if you're going to ask a sensitive question, you have to ask it in a certain way. You have to make the patient understand why it's important that you know the real answer, and even then you have to accept the possibility that you're not getting the real answer. But more than that, it's not just lying about how much you drink or how much you smoke or who you slept with. It's really about your own assessment of what's important. When you come in with what we now call the chief concern, that's the most important thing to you, but it might not actually be what's important medically. The column I'm writing now is about a woman who had this terrible diarrhea. That's why she sought medical attention. That was really, no kidding, the least of her problems. So being able to assess what's really important, I think, might take time. But you think it might happen. I don't think that by talking to patients AI is going to reach its pinnacle. It's really going to have to be able to get information directly from patients.
You know, I think the greatest piece of diagnostic software ever invented was the CT scanner, because it showed us something that there was nothing we could do, short of surgery, to see. When the machine is able to get data directly from the patient's body, that's when AI is going to be able to at least compete with doctors. So, Jodi, what about what we've already heard concerns you? Well, actually, I'm totally aligned with what both of the other speakers said, but I hear a lot of other things too. So I'll pick up on some of the points they made. First of all, I wrote a book years ago subtitled Humanizing Medical Practice, which drew on about 20 years of research on why empathy makes healthcare not just nice but more effective. I'm going to talk about that in a minute. But I never thought when I came up with that subtitle that I would actually be saying it now, when we're thinking about turning over the doctor's role. So I just want to say I think that's really interesting. And the book I'm working on now is called Engineering Empathy, because of the goal of having AI do it. So I think we're all aligned on patient-centric care. Let me pick up on those points. First of all, I've been talking about AI and ethics all week, and the main thing I've been saying to everyone, and others have said this, is: what is the problem we're trying to solve? What's missing in healthcare? Leif was just saying that his company has to think of the patient first, and Lisa is obviously thinking about the patient first. So, from the patient's point of view, what is missing in healthcare that we need AI to solve? I think there are a lot of things, but at least one set of things we've heard about is diagnostics, and clearly AI is helping with that. But then the second part is that we have a 55% burnout rate in physicians.
That's no exaggeration, and burnout is really bad. You get really bad care from burned-out physicians, and we have tremendous waste of time and everything else, starting years ago with physicians typing the whole time, as my doctor does, and having to spend hours every day putting their own data into the medical record, so they're not really even paying attention during the clinical visit. So there's a big problem of physicians burning out, and of patients not getting enough time with physicians. There's been this big claim that AI can free up physician time, and it certainly can. Although even with reading radiology and with breast cancer diagnosis, there's very recent research showing that the best thing is to use AI together with a physician expert. AI should serve as sort of the second opinion, because both of them have a range of false negatives and false positives. But I think these tools can really help a lot, and they can cut out hours every day. And so some people are saying, great, now doctors will provide empathy for patients. But that's the moment we're at: are we going to fund that? So the second most important question, besides what problem we're going to solve, is who's your customer? Who are we doing this for? Are we saving money? It's great to save money and help patients, but we can't forget about helping patients. So my comment about what could be missing if we try to automate the relationship, which is what I'm worried about, is Lisa's point about diagnosis, but also: why does a relationship with an actual human doctor make a difference for effective healthcare? I just want to say there are three things we have proven are essential to have a human relationship for. One is taking a medical history, which is Lisa's point. People don't come in for what they say; 70% of primary care visits are not for what the patient says.
A lot of those are mental health things, depression, anxiety, but the patient comes in because they've got a rash or an itch or a lump or something, which are all important. The point is that a general internist cannot take a history without getting the patient to tell them as much as possible. It's a little different from Lisa's point. We have really good research on this. We've wired up primary care clinics around the world and observed when patients disclose information they were too anxious to talk about, and it is literally predicted by emotional attunement from the physician. If you've ever seen films of mothers and babies, where they have this sort of nonverbal empathy, that is the predictor of disclosure of important information from patients. And it's garbage in, garbage out: if we don't get the histories, the diagnostic histories, we're not going to make the good diagnoses essential for effective care. The second biggest predictor of effective care is adherence to treatment. Fifty percent of all prescriptions in this country, among people with insurance who have the money and the access to care, 50% of what's prescribed, is not taken as prescribed, and that comes down to trust. We've empirically proven that the biggest predictor of trust is the feeling that your doctor has skin in the game, that they're co-human with you and really worried about you. That means half of healthcare could be more effective if doctors were trained in clinical empathy. The third thing: unfortunately, a lot of what being a doctor is about is not curative; it is helping people deal with bad news, then cope with it and do the best they can, and we have very good evidence that empathic human relationships are essential for coping.
Patients with bad cancer diagnoses sought out support and treatment much more quickly when the information, the bad news, was delivered empathically. And I'll stop there; I'll just say we probably all heard about the Kaiser telepresence moment, where a patient was given very bad news. It wasn't robotic, it was an actual doctor, but they were on a screen wheeled in to the family, and that created a lot of rebellion among patients around the world, because of this sense, and I'm positive from the data on this, that when you're being given really bad news, which is a big part of healthcare, and you have to cope with it, there's a need to feel that someone is co-human with you. Lisa, I know you have thoughts about this. I see you writing things down. Tell us what you think of what Jodi has raised. Well, I do think that doctors have to be able to be there for their patients, and I think it's interesting that you call it co-humaning. Partnering is sort of how I think of it. It's important for patients to feel that their doctor cares about them, and it is true that only those of us who have mastered typing and looking at the same time manage to see patients and get our notes done. But I guess I would disagree that what we need AI's help with is the thinking part. That is not what is burning us out. It is all the other crap work, which we call scut. It used to stand for something; I can't even remember what. But it's all the tedious, not very important, and yet essential work that we have to do that causes the burnout. It's interacting with EMRs that were never designed to help patients or help doctors. In the United States there were laws that required medical practices to get electronic medical records, and there were very specific discussions about what those electronic medical records should do. None of them were focused on helping the patient get better care or helping the doctor have access to better medicine.
They weren't about sharing medical information across systems, either. They were all set up to get better billing. You know, what Jodi said: who were they designed for? They weren't designed for patients, and they weren't designed for doctors. They were designed for the people who bought them, the hospital systems. And if you look at the healthcare system, the people who are doing the best are not the patients and they're not the doctors; they're the people who own the hospitals and the pharmaceutical companies. So I think that AI has a lot to offer. This morning I was thinking of it in terms of horsepower: engines are rated by how many horses they replace, and I think when you're thinking broadly, in some ways we're going to talk about computers or AI in terms of how many smart doctors they replace or stand in for. So I think there's a lot that's going to happen. We're just, as far as I can tell, not there yet. We're not even close. And some of that is just the way the structure is set up, so that it's not really delivering what we need it to deliver. And I would imagine, Leif, when you were hearing Lisa's idea of smart docs, you were also thinking smart researchers, right? Absolutely. Now, Lisa sort of makes the point that you should not replace doctors with robots. I've never suggested that. I don't think we should suggest that, so let's leave that out of the discussion a little. Okay. Then let's look at it again: if you're a patient and you're arriving, and I'm very much focused now on building good healthcare systems in rural areas in countries like India and China, there's not a good doctor available who can do all the good things that good doctors do.
It's a matter of how you actually get screened in such a way that, over time, you can move up into a more hierarchical system, but still a system that over time can actually treat your disease. Take cancer: how can you screen patients with very simple methods? And they are simple, and they are crude compared to having a doctor available who does exactly all the things you're suggesting good doctors do. But those doctors aren't there. So how do you do all of that and arrive at a situation where you're giving good healthcare, at scale, to many, many people? Those are the types of issues where I think we can genuinely use AI. Then I would say, in many cases, when we talk about medical history, let me take the example of genetic sequencing. We all carry that data with us, and we can give a blood sample that can be sequenced; that's going to get cheaper and easier over time. Why wouldn't we already have that available, at the time of coming into the emergency room, for example, in a good electronic record, so that we would get better treatment, or be screened better, than if the tests were done locally? Those are the types of questions that I think AI can solve in a much, much better way. R&D-wise, we are really talking about big data. Now we are talking about big population data, about genetic coding. We're talking about finding the right subgroups of patients for different medicines, even down to the immune system or cell level. And that's where you really come into using the computing power we spoke about before. Right, right. Jodi. Just real quick, folks, because I want to agree with all of that and even add: I have two large studies that I'm PI for. One of them is on whole genome sequencing of infants using machine learning.
So I totally agree, and the other is on gene editing, but then there are the issues you brought up earlier today. I was supposed to say something about ethics, and we all know from the whole week on AI that we've had here that you still need the relationships. The reason to do whole genome sequencing for infants is to treat metabolic disorders of infancy; we're simply changing the diet of a baby. A small dietary change can make the difference between having terrible developmental delay and serious problems versus being normal. So it's a very humanitarian issue around the world. But the thing is, we still need confidentiality of data and all the things we mentioned earlier; we still need all the ethical protections and the trust, and all of that still has a relational basis. So the question is, who communicates this around the world? Who is the advocate for the communities? I was talking last night to someone from the UN who works in this area, and for women who will be participating in those kinds of studies around the world, they still need representatives who will have skin in the game, who will participate with them, whether those have to be physicians or other types of healthcare providers. The research informed-consent process has to be not bureaucratic but genuinely community-based, so that we're really meeting the needs. Again, what's the goal? Who's the customer? It's got to be those rural and underserved communities, meeting their needs to protect their infants. And it's not just in India that healthcare is erratically delivered. There are huge swaths of the United States where you could not find a board-certified internist for miles, maybe hundreds of miles. So if you can figure out a way to mobilize this, I think that would be a tremendous good.
But we have to think about governance, because we're talking about very personal data and we're talking about the potential to overstep. What would overstepping look like? How do you govern the AI-enabled delivery of healthcare at scale? What are the contours of that? I'll say one thing about it. I've been learning a tremendous amount this week about what is happening. A lot of it's confidential; we'll find out about it soon. But what is really clear is that there are tremendous differences between companies, in whether they put a firewall up between kinds of information. For example, there's a lot going on in creating ambient medical records, which I assume Lisa would probably support, I don't know. Anyway, there's a lot to be said for, again, not having the doctor type the whole time and instead capturing information ambiently. And then that information is stored on various kinds of servers; some of you here might be developing those. There are companies that believe that if it's on their server, they own the data, and there are companies that realize it isn't theirs. I mean, there are protections. But then there are companies that want to blend that data with all the data that's not legally protected, at least in the United States: all the data you get from your smartwatch. My favorite example is smart mattresses, which, I hope you all realize, know everything about you. A smart mattress knows if you are having sex, it knows if you're sleeping, it can diagnose depression and everything else. There is so much data from the internet of things and everything else in the health sphere, so it's not just our medical record information. And there are companies that see this as a tremendous opportunity to basically target advertising to people. That's very problematic, and it isn't even just the private sector.
I mean, it's not just for-profit advertising; it's also the insurance companies. If your insurance company can know everything about your health, we won't have insurance anymore. I hope everybody understands that insurance is based on not micro-costing out what's going to happen to each person, especially with genetic information and everything else. So it's the end of a social commons completely. So we have tremendous need, I think, for legal protections, to answer your question, Amy. I think we have to have laws. And some major company leaders in these areas have said to me: we don't just need regulation, we need laws. Let me add a little to that. If you look at data and data protection rules, I think right now regulators are going about it at too high a level. Say more about that; what do you mean? For example, if you take medical data records on an individual basis, those should be protected for that patient, and they should be, by law or regulation. Then there's the fact that they can be put into different hierarchies of data; we had this discussion the other day. We have always said that genetic profiles need to be individualized, and they are individualized, of course; that's what they are. Can you actually derive a higher level of aggregation and use that data, for example, in research? So far, the answer has always been no, it's very difficult to do that while protecting the integrity of the patient. But I think we now have technology coming out of computer science that makes it possible to aggregate data, hold on to the protection of the individual records, but still aggregate at a higher level, and use that, for example, for healthcare planning or, for that matter, for R&D. So I think technology is coming at us very quickly.
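One concrete family of techniques for aggregating data at a higher level while protecting individual records is differential privacy. As a minimal sketch, assuming a simple cohort count (the records, predicate, and epsilon value here are illustrative, not anything named in the discussion): calibrated noise is added to an aggregate statistic, so the population-level number stays useful while no single record's contribution can be inferred.

```python
import random

def private_count(records, predicate, epsilon=0.5):
    """Return a differentially private count of records matching predicate.

    Laplace mechanism: a count has sensitivity 1 (adding or removing one
    person changes it by at most 1), so noise drawn from Laplace(1/epsilon)
    masks any individual's contribution while keeping the aggregate useful.
    """
    true_count = sum(1 for r in records if predicate(r))
    # The difference of two i.i.d. exponential draws with rate epsilon
    # is a Laplace(0, 1/epsilon) random variable.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Hypothetical patient records: (patient_id, carries_variant)
cohort = [(i, i % 7 == 0) for i in range(10_000)]
estimate = private_count(cohort, lambda r: r[1], epsilon=0.5)
# The estimate tracks the true count closely, but releasing it reveals
# essentially nothing about whether any one patient is in the count.
```

The design choice is that the noise is scaled to how much one person can change the statistic, which is exactly the trade-off described here: useful for healthcare planning or R&D at the aggregate level, deniable at the individual level.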
But we are trying to address it, and I've been part of many discussions here at Davos, at some kind of general level, as if Facebook should be treated the same way as electronic medical records. That's not going to work. Well, it doesn't work at Facebook either. We have to do this in a very, very specified way, and we need to know what we want. That makes it easier, frankly, if we set ourselves to do it; trying to solve it at this general level is not going to work. So before we go to questions from the audience, Lisa, I'd love to hear what you think about this. I love the idea that our patients' experiences can be sort of elevated and used to help other people, and I'm sure that all of us as patients hope that whatever is hard for us can somehow help somebody else. But we all know that this information can come back and bite us if it's not protected dramatically. The way the protections are set up now is kind of dumb, at least in the United States. We have this HIPAA law, and what it means is that there are administrators everywhere who wet their pants at the idea of sending somebody's record someplace without having all the paperwork right. And well, they should be wetting their pants, because it's a felony. So that's a dumb law; the way it's set up wasn't very well designed, but it was a good idea. These things need to be protected with really strong laws, and I worry that in some ways we've made medical information a very limited thing, when I think it's a much broader thing, especially if we're going to start telling our insurance companies what our mattress knows about us. Now I'm really worried. That's why I'm saying we should define what data is truly useful for patient-centric healthcare. And that possibly won't include many of the things your watch might tell the people you're connected to.
But I think the worry that this might be negative, or that it could be misused, is right now stopping us from doing things that could be very beneficial for patients, and that's what worries me. Well, if you exclude some things from what counts as medical information, does that mean they're unprotected, and people can just scrape them off the internet the way they take your picture? I don't know if you saw this deepfake thing, but the combination of the deepfake exhibit and the story about that company that scrapes all of our pictures off the internet has me a little bit worried. So if you don't make it medical information, is it just going to be scraped off the internet and used against us in some way? Because people are worried about that, and they have reason to be. They have reason to be worried, but that's exactly why I'm saying we should not try to solve the problem of people scraping pictures off Facebook at the same time that we try to do good research on genetically diagnosable diseases. That's simply too complex; it won't work. As a profession, in healthcare, we need to define what we think can be useful, again with patients involved, and then see how that can be protected in such a way that the data can be useful to the healthcare sector. That's our task. Solving Facebook at the same time is going to be too much. Well, I think we agree that the patient has to be first, but I think we might disagree about what kind of first we're talking about. If you're talking big picture, the individual has to be first, which I think is what you're saying. I think we might disagree, because I actually think that the actual patient, in real time, needs to be first, and we have to think about what would be useful to know and how we can make sure that this person isn't screwed by our getting it.
But I think we have to think about both of them at the same time. I agree with that in real time, but I also have the perspective of providing good research and development over a long period of time, to provide better healthcare. And that's not real time with each individual patient; that's a long-term perspective of collecting data in such a way that it can be used for research. The pharmaceutical industry is obviously interested in that, but so are many, many healthcare providers, diagnostics companies, device manufacturers, et cetera. Can we aggregate data from real time, and I share your concern there, in such a way that it becomes a meaningful asset for good, longer-term research and development? That's what I'm arguing, I think. Okay, well, let's go to you and see what questions you have. I see a hand over there, and there's a microphone. Thank you. So I'm listening to the discussion you're having now, and it's very interesting, but you seem to be approaching it from a traditional healthcare alignment: doctors, patients, medical care, hospitals, insurance. But that doesn't seem to be where the AI doctors are going to be coming from. You're still looking at it in the old paradigm. To my sense, the AI doctors are going to come from the tech sector. You're going to have the large tech companies, Amazon, Apple, Google, with all of the health information they're gathering from our wearables and our personal devices, our smart mattresses and whatnot. They're the ones who are going to develop the AI doctors, skip over the entire step of the old healthcare system, where you had to actually go and seek healthcare, and bring it directly into the home, one-on-one, directly to the patient, skipping the old way entirely. So how does what you're talking about work if that's not the main threat of the AI doctor?
Yes, AI will help the physician as she exists now, but that's not the AI doctor I'm worried about. I'm worried about the one that goes around the entire healthcare system entirely. Well, let me take a shot at that. First of all, in most countries you have regulatory requirements to make sure that Facebook or Amazon doesn't come in without sufficient skills to do what they're suggesting. But that's a good point to hit on here. I think they're going to have to, frankly. They may have to, but they don't now. But I think there's another point, which is actually more intriguing, in what can be done with the type of data we were talking about before: what happens before individuals become patients. We are dealing with them when they are patients in the healthcare industry right now. How do we actually prevent, and possibly predict, but especially prevent? There are no regulatory boundaries that stop Facebook or Amazon or anyone from coming up with good prevention programs based on your personal profile, your wearables, or anything like that. So there is a long, long time before individuals become patients, and we are discussing what happens when they are patients. But this whole area before: how do we actually give people good advice? That probably won't be the healthcare sector, and I would say unfortunately so. But it might be Facebook and Amazon. So, two scary things about that. I feel the same way, and I think you're right. Just because of my projects in gene editing: you don't have to go through the healthcare system to get genetic information, right? Everybody's doing 23andMe and all this stuff. So in that sector you're talking about, that's not HIPAA protected.
And because that's not HIPAA protected, not only can companies now know all this stuff about you that could affect your insurance, your job and everything, but the other part of it is there's a whole field now. Maybe some of you are in this field, but I'm fairly critical of it so far. It's called predictive phenotyping, and it's basically taking some genetic information about people and predicting all kinds of traits. First of all, the science is pretty shaky still, and I work with the experts in this, but that could get better. But as for its use, it's already been used in China to persecute the Uyghurs, because what it's doing is taking parts of DNA and predicting what someone will look like, how they'll act, identifying them. And it's already being used by companies in Silicon Valley to basically predict the IQ of kids and all these things. There's a lot of eugenic and other potential in this. So there's so much you can glean from the non-HIPAA-protected data. You're 100% right about the amount of information, and this is why Lisa said we can't really split it up completely. So it is interesting, because I don't want progress in bioscience to stall. I agree with you that one of the big ethical risks is not making progress where we can. But I'm not sure that we can make this go faster by only having laws protecting the HIPAA-related medical information, even though I do wanna make progress, because there's so much now that almost nothing's contained within that anymore. Lisa, thoughts? You know, until this week, or until this month in any case, I certainly didn't think of the kind of information that you could get from a smartwatch as medical information, but thinking about it more recently, it seems like it's all of a piece about privacy. We have this Supreme Court decision, which is probably gonna be overturned, that protects individuals' privacy in their bodily concerns.
And I think that that kind of information should come under that kind of protection, that we should own all of it, and people who want to use it should have to pay for it. You know, it reminds me a little of the issue with plastics. When you bought a bottle of Coke, you had to give a deposit, because if you didn't bring that bottle back, the Coke company would use the deposit to replace it. So they used these deposits. Plastics came into the world with none of that. None of their future use or replacement costs or anything like that was taken into consideration. It was a huge loss, and we are all paying the bill for that. In some ways, I think we're at an opportunity now where we can protect ourselves. To me, all this human data that comes off of us, that we shed on a daily basis, is kind of like that. We ought to be able to extend our reach to include all of that in perpetuity, so that what happens with our information still belongs to us in some way. Yeah. Is that crazy? No. You know, someone at Apple knows that I walked briskly from my hotel this morning. God, what are they gonna do with that information? So that makes perfect sense to me. Other questions? Yes. Thanks for the great conversation. Lisa, there's a reason why you combined the engineer's first name and the doctor's last name into Scotty McCoy: it's Leonard McCoy, and Scotty is the engineer in Star Trek. Oh, thank you. And the device they use is a tricorder. Right. Ten years from now, healthcare and medical care will look very different, because technology and medicine and healthcare, everything, will be coming together. I think there's a real issue as to how it's gonna turn out. But at the center of it, the healthcare team and the patient will have to be the core of that picture, whatever it's going to be.
And the issues that we're dealing with right now, accessibility, affordability, quality of care, speed of diagnosis, speed of care, are all very important issues around the world. In some places they are more serious than in others, but on the technology side, from AI to sensor technology to diagnostic technologies, we have right now, as I speak, very low-cost, transportable X-ray machines available and being taken by small trucks to villages anywhere in India and other parts of the world. So it's going to look very different, and we won't be using primitive devices like stethoscopes in this day and age. But at the same time, the issues of privacy, protection, and how we can prevent the further commercialization of medical care and healthcare will be very important. Thanks for this great conversation. Any thoughts about that, Leif? Well, perhaps to your point there: because of the way technology is built, it scales very quickly and it covers the globe very quickly. Mobile phone usage is a good example of that. And because of that, I think we in the healthcare sector need to think a little more about ethics and how we are to respond, because we won't be able to stop that technology wave. And I happen to think we shouldn't want to stop it, but we certainly should want to relate to it in such a way that the patients we are treating in the healthcare sector, in whichever roles we have, actually benefit from that technology rather than the other way around. I think that's really the critical question for the healthcare sector. Jodi? Nothing, nothing. I just want to let other people talk. Okay, maybe one more question. Yes? I'm a physician as well. Even just considering the current technologies that are out there that would be really great to have in my clinic, their ability to permeate into my daily practice is nearly zero. I mean, all the Fitbit data is not on my computer.
There's no ambient healthcare other than what I ask about. So my question is, given even the challenges of today, what are the reasonable changes to the current state of hospitals and health regulation just to get even the near-term technologies integrated into a clinic? Lisa? Well, I think the first thing that has to happen is that there needs to be a sense of value for that doctor-patient encounter, for primary care. There's a lot of talk about it, but it doesn't really make anybody any money, and therefore it is literally not valued. And so no one is going to provide us with this information unless they can see some value potential in that relationship, in that level of medical care, but it's not something that can really be monetized. We're talking about relationships. So that's why I think if people could be made aware of the value of that information, then maybe they would be more protective of it, and that alone would give it more value. If they could transmit their own sense of how much their relationship with their doctor means to them, maybe that would elevate this in value as well. I don't know. I think we're up against it. So how would you lower the barrier? Tell me about that. I was previously at the Brigham, by the way, and I did spend some time at Yale, but now I'm in Singapore, and we have this massive dataset, it's 1.2 million [inaudible], and their solution was essentially [inaudible]. So I can't search anything on the internet. Okay, well. I can't hear that. I'm afraid we've run out of time, and with that picture of uneven progress, and with the understanding that we need to think very carefully about both the barriers and the protections we're gonna need, let me say thank you to our wonderful panel, and to all of you. Have a good morning.