Welcome back to SuperCloud 4, everybody, where we're digging into the power of generative AI and how it's affecting industry transformation. And one of the industries that is most ripe for disruption is healthcare. Look, healthcare costs are rising. People are living longer. Clinician burnout is an ongoing concern. And the quality of healthcare globally spans a wide spectrum. So AI, automation, and gen AI in particular can help combat some of these challenges, but they also bring some concerns and some risks. Joining us now is Jose Pedro Almeida, who's the chief AI strategist and an expert in the healthcare industry. Jose, thank you so much for coming on the program. It's great to see you.

Pleasure to be here. Thanks for having me.

Yeah, you bet. Okay, big chewy question to get started. How would you describe the state of healthcare from a global perspective today?

Well, I think we are facing a major issue, which is the workforce problem. We are seeing that after the pandemic, the workforce has dropped by something like 20 to 30%. And that means huge pressure on the system, on those in the field trying to treat patients, because patient numbers keep going up while physicians and nurses keep going down. And you need to introduce some intelligence into the system in order to overcome these issues.

So AI obviously can have some real positive effects. We are enthused by better personalization, things like faster drug discovery, and AI automation can help reduce cost. There's augmented diagnosis happening today, and things like proactive disease prediction. But there are also some concerns related to privacy leakage, misdiagnosis, maybe over-reliance on machines. So it's a two-sided coin. How should we think about AI in healthcare and the most logical way to implement it?

Well, first of all, I think that the role of some companies that have been selling this type of AI has brought some mistrust into the system, which does not help.
I think first you need to go into the field, understand doctors' and nurses' most fundamental issues, and partner with them. Change your language: stop talking about Python and cloud computing and all that, and start talking about, you know, methicillin-resistant Staphylococcus aureus bacteria and how they treat it. Go into their field, understand their problems, and then build the things with them.

And that has a lot of layers, especially in large healthcare organizations, where you first need to capture all the data that is siloed in several information systems. You need to unlock that data, bring it into a centralized data platform, and then you start building the intelligence on top of that. But all that intelligence is built side by side with clinicians. It does not work going outside in with some solution and trying to plug it in. It doesn't work that way. You need to build it from the inside, build it with them. And I think that when you do that, the level of trust you are able to achieve with them is usually higher, because they know the operating boundaries of what you are doing. They will be the first ones advising you: let's do this in more back-office areas. Do no harm to patients. Don't try to build the AI that will diagnose patients; that's super naive to think about at this time. You need to respect the way doctors reason. It's usually complex. We know these new models are bringing new technology onto the scene, but even so, there's some part of the reasoning in how a doctor thinks that we need to respect, and we need to build things that help them, that are a co-pilot to them, not trying to replace the way they diagnose, because I don't think that happens soon.

You know, there's an analog in IT. For years we've said, you can't talk geek, you've got to talk wallet. When you talk to the business, you can't talk Kubernetes. You have to talk about patient care and patient outcomes with the doctors and nurses.
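The centralized data platform described above, pulling siloed records out of several information systems into one patient-centric view, can be sketched in a few lines. The system names and fields (EMR, lab, pharmacy) are hypothetical, purely for illustration; a real platform would also handle record linkage, conflicts, and access control.

```python
# Minimal sketch of consolidating siloed hospital records into a single
# patient-centric view. Source systems and field names are invented.
from collections import defaultdict

def centralize(*sources):
    """Merge per-system record lists into one dict keyed by patient ID."""
    platform = defaultdict(dict)
    for records in sources:
        for rec in records:
            pid = rec["patient_id"]
            platform[pid].update(
                {k: v for k, v in rec.items() if k != "patient_id"}
            )
    return dict(platform)

# Three hypothetical silos holding fragments of the same patient's data.
emr = [{"patient_id": "P1", "diagnosis": "COPD"}]
lab = [{"patient_id": "P1", "potassium": 6.1}]
pharmacy = [{"patient_id": "P1", "medication": "salbutamol"}]

view = centralize(emr, lab, pharmacy)
print(view["P1"])  # one unified record per patient
```

The intelligence layer he mentions would then be built on top of this unified view rather than against each silo separately.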
So you sort of touched on this, but how are organizations integrating generative AI within current medical practices and models? And where are you seeing the most immediate and beneficial impact on patient care?

Well, for the first question, where are organizations doing this? I don't see many examples worldwide. Maybe a few of them in the US. HCA Healthcare, for instance, is doing some partnerships with Google, trying to automate the nurse shift handoff and bring gen AI into the game, where gen AI can summarize what happened in the last 12 hours of care and send that information to the nursing team arriving at the hospital, so they are more proactively informed before the shift handoff meeting occurs. You see those examples, but it's still just starting. I think that at the board level people are starting to worry about this and wanting to have it, but there's also some road ahead in terms of the skills you need to bring into the C-suite, because this is not just a management game. This is transforming the healthcare organization into, somehow, a technology company. And you need special skills in the leadership team to make that happen.

In terms of the second question, how do I see the impact of this? I think the impact will be just huge. We are facing an inflection moment in time, from my standpoint. And I think healthcare is probably the area that will benefit the most, if you are able to plug it into the operating system. Because if you look around, healthcare data represents maybe 30% of all data produced globally, according to some studies. But a recent study just came out signaling that 97% of the hospital data being produced, for instance, is not used. And one of the reasons is that more than 80% of that data is in an unstructured form.
So think about those clinical notes sitting locked in the database. They are just being used for transactional care between one doctor and one patient, but you are not leveraging all the insights that are already there, when you could just put an LLM on top of this and understand patterns. There are so many things we could do with this.

I can give you an example that I think really resonates in terms of patient safety. Just imagine that in the future, and when I say the future I'm probably talking about the next two or three years, you have a large language model that has ingested all the clinical notes inside the hospital, thousands and thousands of them. Imagine you are a clinical director of the hospital. You can throw out a broad question in natural language, like: tell me which doctors are not following clinical guidelines in my hospital. Analyze a thousand beds, a thousand clinical processes, in a flash of a second, and tell me, for instance, the patients who have had a fever for more than three days and are not taking antibiotics. Probably they should; probably there's a clinical guideline for that. And if you have a system that is able to do this at scale, the level of safety and efficiency you can bring to an organization is just unprecedented.

And Jose, are the so-called guardrails inherently in place? Because unlike ChatGPT, which is scouring the internet and Wikipedia and everything else, you're working on a corpus of data that is fixed, that's restricted, that's specific to a particular organization. So is it a self-adjudicating mechanism in that sense? Or are there other concerns about hallucinations and things like that?
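The guideline check in that example reduces to a simple rule once an LLM has extracted structured facts from the notes. Here is a tiny sketch of the final step; the field names, the three-day threshold, and the ward data are assumptions for illustration, not part of any real guideline system.

```python
# Illustrative sketch of the guideline gap check described above: flag
# patients febrile for more than three days with no antibiotic on their
# medication list. A real system would first extract these facts from
# unstructured clinical notes (e.g., with an LLM).
FEVER_DAYS_THRESHOLD = 3  # assumed threshold, per the example in the text

def flag_guideline_gaps(patients):
    return [
        p["patient_id"]
        for p in patients
        if p["fever_days"] > FEVER_DAYS_THRESHOLD and not p["on_antibiotics"]
    ]

# Hypothetical ward snapshot.
ward = [
    {"patient_id": "P1", "fever_days": 4, "on_antibiotics": False},
    {"patient_id": "P2", "fever_days": 5, "on_antibiotics": True},
    {"patient_id": "P3", "fever_days": 1, "on_antibiotics": False},
]

gaps = flag_guideline_gaps(ward)
print(gaps)  # ['P1']
```

The hard part at scale is not this rule but reliably turning thousands of free-text notes into the structured fields it consumes.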
Well, one of the things we need to take into account from the very start is that the language models available nowadays learned from the public internet. They have not seen most clinical data, and that's a problem. We can talk about the existence of a public internet and a private internet, and when you talk about hospital data, you are talking about a private internet that is highly secured behind firewalls and all that. So these models have not seen that data, and doctors and nurses write in a certain way that you don't see spread across the web: they use a lot of acronyms and all that. But you are also seeing this trend where language models are starting to be available with far fewer parameters, where you can train a model like this almost on your own computer. I think you will see these organizations building their own models and taking advantage of that, fine-tuning with their clinicians, doing the reinforcement learning you need to do on top with your clinicians to tune the system for your reality. And when you are able to do that, and I think that part will come pretty fast at the pace this is evolving, you will have an entity, an intelligence layer in your organization, that is able to help any physician perform at the top of their license. And that's hugely valuable.

And I would think, correct me if I'm wrong, but independent of LLMs and GPT-3s and 4s, for things like readmission rates, organizations have data on that. They can apply machine intelligence and predictive analytics, and they've probably been doing that for quite some time. I presume that's best practice in certain hospitals anyway. Is that fair?

I honestly don't think that's fair. There are some hospitals that are more advanced and that do have those insights running, but what I know from most hospitals globally is that they are still lagging behind.
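The readmission analytics the question refers to predate LLMs entirely; a classic approach is a risk score over a few admission features. The sketch below is a toy version: the features, weights, and bias are invented for illustration, whereas a real model would be trained (e.g., logistic regression) on the hospital's own historical admissions.

```python
# Toy readmission-risk score of the kind classic predictive analytics uses.
# Features, weights, and bias are invented; a real model is fitted to data.
import math

WEIGHTS = {"prior_admissions": 0.6, "comorbidities": 0.4, "age_over_65": 0.8}
BIAS = -2.0

def readmission_risk(patient):
    # Linear combination of features, squashed to a 0..1 probability.
    z = BIAS + sum(WEIGHTS[f] * patient[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

low_risk = {"prior_admissions": 0, "comorbidities": 0, "age_over_65": 0}
high_risk = {"prior_admissions": 3, "comorbidities": 4, "age_over_65": 1}

print(readmission_risk(low_risk))   # small probability
print(readmission_risk(high_risk))  # much larger probability
```

His point is that even this well-understood technique is not yet routine in most hospitals, largely because the underlying data layer is missing.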
There are some indicators that they follow. They might have a bunch of models that they run, but I think the game here is different. You need a new strategy for data and for AI that starts pulling that data up from those silos, getting that data intelligence layer ready. Then you just plug these models on top of it, and you are able to achieve much more.

I've done that throughout my career. Starting about 10 years ago, and for almost 10 years, I led one of the most recognized big data and AI projects globally, in a public hospital in Portugal, where we invested upfront for several years building that layer and then started building the intelligence on top. What we were able to achieve with that was, for instance, having agents that would crawl all that data in an automated way and figure out, for instance, that some patient on the ninth floor of the hospital had his heart rate going up and blood pressure going down, which is a sign of hemodynamic instability. And those same agents would crawl all the other systems, finding out in an automated way, for instance, that that same patient at that same time had a potentially life-threatening lab result. When you cross all these signs, you can inform doctors proactively of the problems; we were sending them text messages in an automated fashion. So this vision of computational care is something that takes time. It's not just building some AI model to find out my readmission rate and all of that.
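The agent behavior he describes, crossing vital-sign trends and raising an alert, can be sketched as a simple rule over consecutive readings. The thresholds, the ward identifier, and the alert sink are illustrative assumptions; the project he mentions sent SMS messages, which is stubbed here as a plain list.

```python
# Sketch of the automated vitals check described above: rising heart rate
# with falling blood pressure across readings triggers a proactive alert.
# In the real project alerts went out as text messages; here they are
# collected in a list for illustration.
alerts = []

def check_vitals(patient_id, readings):
    """readings: chronological list of (heart_rate, systolic_bp) tuples."""
    (hr_first, bp_first) = readings[0]
    (hr_last, bp_last) = readings[-1]
    # Pattern consistent with hemodynamic instability: HR up, BP down.
    if hr_last > hr_first and bp_last < bp_first:
        alerts.append(
            f"ALERT {patient_id}: HR {hr_first}->{hr_last}, "
            f"BP {bp_first}->{bp_last}"
        )

# Hypothetical patient whose vitals are deteriorating over three readings.
check_vitals("ward9-bed3", [(80, 120), (95, 104), (112, 88)])
print(alerts)
```

A production agent would also cross-reference lab systems, as he describes, before escalating to a clinician.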
You need to think about this for the whole organization, so that you speed up care and surface problems much faster to clinicians, and to nurses as well. Because what I learned in healthcare is that if you are able to shorten the time between the moment information becomes available in any system and the moment a clinician knows about it, you can bring huge positive outcomes to patients.

Yeah, that's a key optimization metric. I'm sort of half-kidding, and my own handwriting doesn't look great either, but can AI read doctor scribbles? I'm sure a lot of it's now done with keyboards, but I wanted to come back to a comment you made earlier, that we shouldn't think about machines making diagnoses instead of doctors. It's like self-driving cars. Even if self-driving cars are more reliable than humans, which they're really not yet, I think we're probably a decade away from that, but assuming they are, a misdiagnosis or errors made by the AI could be seen as more onerous than human error. But you were intimating before that that is not the right way to think about AI. So how should the healthcare industry think about that balance between human expertise and AI-driven insights?

Well, the first thing is: don't try to diagnose. I don't think that's the right path. There are so many issues you can improve before that happens. We are talking about all the mundane tasks that doctors do nowadays, and I've led some projects trying to automate those with NLP, which was what existed at the time, and which follows the same reasoning we now have with these language models. For instance, think about something my team has also led in the past: summarizing an in-patient episode, say 30 days, for a COPD patient, which is a patient who has a lot of comorbidities, stays a long time in the hospital, and goes to the hospital frequently.
Just think about all those days the patient is in the in-patient area, and at the end of that episode the doctor needs to write the discharge notes. Now imagine that in the future you can just click a button: the LLM will summarize that episode in a flash of a second, and the doctor will just thumbs-up or thumbs-down it, checking whether there's something it didn't capture. But it's not only this. Think about what happens after that, when the patient goes home. The LLM can send him a discharge note that's personalized to him, stripping out all that clinical jargon that's complex for a patient to understand, summarizing what happened inside the hospital but in a different language, and also producing another version for his referring physician. So just imagine all this flow running a lot faster, a lot more automated. Obviously, doctors are in the middle. This is a co-pilot, not the pilot, but doctors will be much more productive. They'll be able to see many more patients when they use these tools, because I've seen these tools working, and I know how powerful they are.

You know, you're bringing up a really great point, and we can all think about our personal experiences. I remember I had a blood test this summer, and the results came in literally about 30 different files that I had to open separately to look at. It was crazy. I said, well, if I'm in trouble, the doctor will call me. I gave it to my wife and said, you interpret this, and she's so kind, she went through and did all the analysis and said, you know, you're okay, maybe check this out a little bit. But imagine having an LLM, just feeding that data in and saying, okay, tell me what I need to know. It saves the doctor time, saves the patient time, and it doesn't converse in all this gobbledygook.

But I want to talk about data privacy. It's always a major concern in healthcare.
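The patient-facing rewrite described above, the same discharge summary with the clinical jargon translated away, would be done by an LLM in practice. This sketch stands in for that step with a tiny glossary substitution; the glossary entries and the sample note are invented for illustration.

```python
# Minimal stand-in for the LLM step described above: rewriting a discharge
# note into patient-friendly language. A real system would use a language
# model; this illustrative version uses a small, invented jargon glossary.
GLOSSARY = {
    "COPD": "a long-term lung disease",
    "dyspnea": "shortness of breath",
    "PRN": "as needed",
}

def patient_friendly(note):
    for term, plain in GLOSSARY.items():
        note = note.replace(term, plain)
    return note

note = "COPD exacerbation with dyspnea; inhaler PRN."
friendly = patient_friendly(note)
print(friendly)
```

The same source episode would yield a second, fully technical version for the referring physician, as he describes.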
How can we ensure that AI, which has this hunger for more data, doesn't compromise patient confidentiality and trust?

Well, that's a hard question. I think if you go down that road I was telling you about, where you build your own LLM, you have a lot more control than if you outsource this. Of course, I know that if you outsource it, say you have all your data in Azure, it's very easy to plug into OpenAI models, and that's quite productive. But for some sensitive use cases, those guardrails need to be set, you know, with an ethics commission that sits inside the hospital. Once again, these things are built as a team. They are not built outside in, by someone selling you a product and plugging it into your system. If you have a multidisciplinary team, where you have the AI and software people, the clinicians, and the ethics committee, and you bring them all into the game, it's much easier to bring those guardrails into the system. There are a lot of technical things we could discuss. You could have several LLMs talking to each other; you could have an ethics LLM that only worries about the ethics of what the other LLM is doing. I think we'll see that happen. But once again, we need to start from scratch, and from scratch means the people framework you build these things with. I think it should be done from the inside.

And that's our model of sort of the long tail of specialized LLMs and domain-specific LLMs. But speaking of the people, I want to ask you, Jose, how do you see the role of medical professionals evolving in an era where AI takes on a much more significant role in healthcare and patient planning and communications?

I think they will be much happier than they are today, honestly.
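The "ethics LLM watching another LLM" idea mentioned above is a gated pipeline: a draft answer only reaches the user if a second checker approves it. Both models are stubbed as plain functions here, since only the flow is being illustrated; the blocking rule and the messages are invented.

```python
# Sketch of the guardrail pattern described above: one model drafts an
# answer, a second "ethics" model decides whether it may be released.
# Both models are stubs; a real system would call actual LLMs.
def clinical_llm(question):
    return f"Draft answer to: {question}"  # stand-in for the main model

def ethics_llm(answer):
    # Stub policy: block anything that looks like a direct diagnosis,
    # echoing the "don't try to diagnose" boundary from earlier.
    return "diagnosis" not in answer.lower()

def guarded_answer(question):
    draft = clinical_llm(question)
    if ethics_llm(draft):
        return draft
    return "Withheld: escalate to the ethics committee for review."

print(guarded_answer("Summarize last night's ward handoff."))
print(guarded_answer("Give me a diagnosis for bed 3."))
```

In his framing, the policy inside the checker would come from the hospital's own ethics commission, not from the vendor.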
There are a lot of studies showing that for each hour of patient care they have with patients, they lose two hours on mundane tasks like writing in the computer, sometimes taking work home because they want to see more patients. Something I learned working 10 years inside a hospital is that they have this mission inside them: they want to treat more patients, and they want to treat them effectively. What they studied for is to save lives, at the end of the day. Along the way, electronic health records came along. They helped, but they also introduced a new layer that makes doctors lose time. So if you bring something into the game that takes care of everything that is not what they studied for, it gives them that time back. And I think that in the near future, I don't know if it will take five to 10 years, but the same way we saw Windows and Office and the change they brought into the system, I think we will see a new operating system in healthcare where doctors are with patients without any PC in the room. There's something listening to that conversation, going back and forth, pulling up that patient's blood work, bringing up that patient's images, but the doctor is not writing everything down; he's just spending time with the patient. I think that will happen sooner or later. I don't know, five, 10 years, but it will happen for sure.

Okay, well, that brings me, Jose, to my last question, which is the big one: what about cost? A lot of people are predicting a massive productivity boom as a result of AI. Will that translate into lower healthcare costs, in your opinion?

For sure, for sure. I have no doubt about that.
Just imagine: we talk a lot about these large healthcare organizations that exist in the US and in Europe and all that, but what about the other parts of the world, like rural Africa or rural India, where you have farmers who are 100 miles away from a hospital? I know, for instance, in South Africa, because of work I've done within SEAD that I'm following with them, there are a bunch of clinics, called Unjani clinics, run by nurses in the middle of Africa. Those nurses are alone; they even prescribe medication to patients. You can imagine those nurses having their own personal assistant, like a specialized doctor sitting nearby, helping them provide much better care. Because if you look at this trend, it is highly democratized; it is very easy to make this available globally, even today. What exists today can already improve healthcare and can already lower the cost of accessing healthcare.

And at the same time, just to finish, you can also think about general practitioners who are sometimes in front of a patient and need specialized insight from a specialist about that patient's condition. I think that sooner or later, you will see that specialist insight coming through an LLM as well, one that is specialized, for instance, in ophthalmology or general surgery or any other field. That LLM will help the general practitioner get insights that today would require calling a specialist, if a specialist is even available. Obviously, costs go up under that model, because you cannot scale every specialist to every patient. So you need new approaches that are much more intelligent.

I look forward to that day coming sooner rather than later. Jose, thank you. Really appreciate your time.

Okay. Another pleasure to be here. Thank you.

It was great to have you. We'd love to have you back. Really interesting conversation.
And thank you for watching SuperCloud 4, live and on demand from our Palo Alto studios. John Furrier, Rob Streche, and I will be right back, right after this short break.