Okay, we're back. This is Dave Vellante, I'm with Wikibon and with my co-host Jeff Kelly. This is theCUBE, SiliconANGLE's wall-to-wall coverage of the MIT Information Quality Symposium. Dr. David Levine is here. He's the Vice President of Informatics and Medical Director of Comparative Data and Informatics at UHC. We're going to unpack the data quality and information quality issues within healthcare. David, welcome to theCUBE. Thank you. So, talk a little bit about your role at UHC. We'll start there and then we'll dig into it. Well, as the Vice President of Informatics and the Medical Director, I oversee our risk adjustment process. We risk-adjust inpatient mortality, length of stay, and direct cost for our academic medical centers. Since it's a very specialized patient population, it warrants the ability to really look at that population and capture those risks, so that the clinicians will both believe in the data and be able to take action on it. And then additionally, my overarching role beyond risk adjustment is really clinician engagement with the data. So it's both creating new tools for them to utilize the data, and working with individual Chief Medical Officers and Chief Medical Information Officers on their data so that it's more usable and they can drive performance improvement. Okay, and that's measured as saving lives, and doing so in a cost-efficient way. Right, saving lives and decreasing complications in the most cost-efficient way. Okay, so talk a little bit about what you're doing here and your role at this conference. So at this conference I was invited to speak on the evolution of the UHC risk models. What I mean by that is that there are a number of different risk adjustment methodologies out there, but UHC's risk adjustment, by being specialized for the most complex patients in the healthcare setting, has really continued to evolve as health data has evolved over the years.
So we had very limited data back 10, 15 years ago, when the risk models really started springing up. And now, with all the big data, all the data that's available, I'm really taking the conference attendees through UHC's journey and how we've worked with academic medical centers to continue to improve the quality of our risk prediction and the usability of the models, both on data quality and on face validity: the user's ability to believe in the data and to be able to take action on it. So I would imagine, as a practitioner of risk assessment over, let's say, the last decade before the big data explosion (I mean, there's always been big data; we've talked about that a lot), you were making continuous improvement. Maybe not, but I'd like you to talk about that. What kind of trend lines did you see, and how were they affected by the volume of data? In particular, the events of the early web days, when Google essentially exploded the corpus of data that we have to deal with. Were you making consistent progress, and did the volume of data affect that, or were you able to plow through it? The hospitals have been making consistent progress even with the more limited data in the past, but what was really tough is that it was really hard to measure, and there were only a few small measures you could do. Inpatient mortality was something we could always capture; length of stay we could capture. Even cost data was really hard to capture at the level of individual data elements, and within those pieces, to be able to risk-adjust, predict, and modify behavior based on the acuity of patients. Initially we might have gotten only nine diagnoses as the maximum allowed for the sickest patients, and now in our database we can take in 99 diagnoses and 99 procedures and work with that within the database to do better prediction. The analytics around that, the actionability, has really increased.
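The kind of risk adjustment described here, predicting inpatient mortality from coded diagnoses, can be sketched as a regression over diagnosis indicators. This is a minimal illustration with entirely hypothetical data and variable names; the actual UHC models are not public and use far more variables and proper statistical tooling:

```python
# Minimal sketch of risk-adjusted mortality prediction from coded diagnoses.
# All data and names are hypothetical; a real model would use many more
# candidate variables and a validated statistical pipeline.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(records, outcomes, n_features, lr=0.5, epochs=200):
    """Fit a logistic model: P(death) = sigmoid(b + sum(w_i * x_i))."""
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(records, outcomes):
            p = sigmoid(b + sum(wi * xi for wi, xi in zip(w, x)))
            err = p - y          # gradient of log-loss w.r.t. the logit
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w, b

def predict(w, b, x):
    return sigmoid(b + sum(wi * xi for wi, xi in zip(w, x)))

# Each record is a 0/1 vector: was a given diagnosis present for this stay?
# (Hypothetical diagnosis slots: [sepsis, heart_failure, diabetes])
records  = [[1, 1, 0], [1, 0, 1], [0, 0, 1], [0, 0, 0], [1, 1, 1], [0, 1, 0]]
outcomes = [1,          1,         0,         0,         1,         0]  # 1 = in-hospital death

w, b = train_logistic(records, outcomes, n_features=3)
high_risk = predict(w, b, [1, 1, 1])   # many severe diagnoses
low_risk  = predict(w, b, [0, 0, 0])   # none
```

With more diagnosis slots available (nine versus 99, as described above), more of these indicator features can feed the model, which is why the expanded capture improves prediction.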
Okay, so you talked about the cost elements being hard to predict. Is that because they were so distributed in nature that it's hard to get a true picture of the total cost? Yeah, true cost has still been the toughest nut to crack, and I think that medicine has lagged behind other industries in having databases that are able to talk with each other and share that data. Hospitals are looking at their bottom lines, but being able to tie individual things to an individual patient has been a tough thing for our members. So if I had a knee replacement, knowing exactly what implant was put in my knee on what day: the hospitals know they used that implant, how many they bought, and what they bought them at, but they don't know that the implant that might be in my knee was the one they bought at X number of dollars. On the expansion of data, we're not there yet, but I think the ability to get it has increased significantly. I think the challenge is that the data explosion has come as our members, and all hospitals, have put in a number of different information systems to solve the clinical day to day. Those systems still don't always talk to each other in a consistent fashion, and we still have to bring the data elements together. So what's sitting in my operating room system, which may have materials management data, doesn't always talk with my clinical data repository, and I think that's still the challenge around cost that we've got to solve. And Jeff, you and I have talked about this; I don't think it's unique to healthcare, but I think it's a bigger issue because of what's at stake. Well, absolutely, and perhaps not unique, but can you talk a little bit about why that is in the healthcare space, that systems don't tend to talk to each other?
You were a practicing physician before coming to UHC, so talk about your experience in the field, and maybe the frustrations you might have had in terms of working with disparate data sources that just couldn't communicate, and kind of the ecosystem. How did that evolve? Yeah, it definitely comes from practicing, and I practiced in one of the busiest emergency departments in the United States, in Chicago, where patients came in with limited information that they knew about themselves. The toughest thing, initially, before the data explosion and electronic health records, is that you had to rely on paper records, which was a problem especially in an emergency, a trauma, a fast-moving situation in the emergency department. Whether medical records was even able to find the records and get them up to me in a timely manner, for me to leaf through and find things, was obviously a huge challenge. Early on, what we actually started doing is scanning old emergency department charts, and at least having an electronic, poor person's version of the electronic health record. Even that was like reading handwritten charts and notes and finding things, and that was only a piece of someone's data, from their emergency department visits. I didn't know what happened to them as an inpatient during their stays. What's happened, in a good way, is that people want data and we want to capture all these data elements. In the health professions, a number of very good vendors have sprung up and provided electronic health record solutions, or other solutions within specialty areas, but it's still been very proprietary, and so there's not an incentive for one vendor's products at one place to work and talk with a different vendor's products at another place.
Health information exchange will help drive the exchange of some of this information going forward, but not all of it. Then I think the other tough thing in medicine, and it has some parallels in other industries but not as much, is that every place has its own way of doing things. So I would customize my electronic health record in the emergency department a certain way, but my colleague at another emergency department, who sees a bit of a different patient mix, may want it a different way. And then how much do I make my system work for my specialty versus being uniform even across my institution? So it's really that balance of customization versus standardization. I think medicine went great guns and got really excited about the electronic revolution and didn't stop to take a step back, and now it's beginning to take that step back and say, oh, how do we normalize this? That's why you see the springing up of all these individual data warehouses; the real, true need for health information exchange is sort of to correct that explosion. Yeah, and how do you actually adopt that into the organization? One of the things that Jeff and I noticed when we were prepping last night is an addition within UHC that had a direct effect on the mortality models: all candidate variables are now required to be present on admission, whereas previously they just had to be present some time during the admission. So how did that come about? How automated is that? How much friction does that cause organizationally? When we first brought on present on admission, about three or four years ago, it was controversial, not in the concept, but really in getting the providers to believe that that data was being captured properly in the administrative claims data that was being sent to us.
So as an example, if they didn't write some past medical condition a certain way, in a certain place in the medical record, the coder could not pick it up. So even if the coder reading the chart knew that it probably existed prior to admission, they weren't allowed to code it. So there was a lot of buy-in needed early on, and also a lot of catching up with best practices, because thankfully, present on admission really took hold, and we were able to use it, when it got tied to reimbursement as well as quality measures. And unfortunately, money talks. Our hospitals are still incredibly interested in quality, but really the driver was when your reimbursement was affected. And so the fact that present-on-admission coding affected your reimbursement for certain conditions, as well as helping your clinical measures, helped. But actually, we had to wait for present on admission to be around and stable for two years before we could adopt it into our data sets and use it as a standard measure, because we wanted our members to be using it on an equal playing field, so that it really could be an apples-to-apples comparison, and so that we knew it was being used reliably and properly. Another example that we just recently added this year was actually DNR, the do-not-resuscitate order, for certain models. Obviously hospitals have been writing DNR orders for a long time for the appropriate patients, but the actual ICD-9 code was only adopted in the recent past. And again, we use two years of historic data to build our models. The data that we're running is up-to-date data, but it's based on two years of historic data. We needed the historic data to catch up so that we had the proper model set to really know how that affected the prevalence in our mortality, length of stay, and cost models.
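The present-on-admission rule described above can be sketched as a simple filter on coded claims: only diagnoses flagged as present on admission feed the risk model, so conditions acquired during the stay don't inflate predicted risk. The field names and sample claim below are hypothetical illustrations, not UHC's actual data layout:

```python
# Sketch of a present-on-admission (POA) filter for candidate variables.
# The claims layout here is hypothetical; the 'Y'/'N' POA indicator values
# follow the convention used in administrative claims coding.

def risk_model_inputs(coded_diagnoses):
    """Keep only diagnosis codes whose POA indicator is 'Y' (present on admission)."""
    return [d["code"] for d in coded_diagnoses if d.get("poa") == "Y"]

claim = [
    {"code": "I50.9", "poa": "Y"},   # heart failure, present on admission
    {"code": "J18.9", "poa": "N"},   # pneumonia acquired during the stay
    {"code": "E11.9", "poa": "Y"},   # diabetes, present on admission
]
# risk_model_inputs(claim) -> ["I50.9", "E11.9"]
```

The pneumonia coded with POA = 'N' is excluded: it describes something that happened in the hospital, not the risk the patient walked in with.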
Interesting. So one of your other responsibilities you mentioned is to help develop the tools that physicians, nurses, and clinicians actually use to interact with their data and analyze it. Talk a little bit about that, because one of the challenges we've heard anecdotally in the healthcare field is adoption of the tools by the physicians themselves. They've got certain ways of doing things, and are potentially less likely to adopt these kinds of tools than maybe other types of workers are. What are you doing in terms of developing tools that will entice physicians to actually use them, and what are some of the challenges and how are you overcoming those? Sure. I think there are two key premises that we've been working on at UHC, consistent with our overall philosophy. First, our database has always been open and transparent, so that you can actually look at your hospital's data by name, and look at our other members' data in comparison, by name. So in a competitive market like Boston, if you're at Brigham and Women's, you can actually look at Mass General's or Tufts' data by name. If you thought your competitors, or someone you wanted to compare to, were more like a hospital on the West Coast instead, you could do that, or you could see the whole database. And so we're leveraging that transparency. In all our models, all our scoring of what gives you a red flag or a red dot and whatnot is out there and transparent to the membership. The other thing is member engagement. My risk adjustment team, in building the models, involves clinicians and statisticians from the membership. So it's the transparency, and the fact that it's not just me and my team in a box at UHC. I think the other key thing is that most of our people who work at UHC are clinicians, statisticians, people who have worked in the hospital.
I still see patients, not at Cook County anymore, but I still practice clinically. So when people talk to me about their experiences, I'm living those experiences as well as working on the quality piece of things. So that helps as well. And then I think the last thing is we don't just give a number. Your mortality data isn't just this, or your length of stay isn't just that. Every one of our data elements has at least one drill-down, and for most data elements, where possible, you can drill down to the patient level. So very commonly, when working with a member and they're looking at some individual clinician's data, he or she can actually drill down to those cases. And sometimes there are things that weren't recorded properly, or the attribution wasn't right. When they look, it's like, that wasn't my case, I don't know how that name got assigned to me and pulled in, and we're flexible with that, with the transparency. So I think the real keys that help are knowing where the data is coming from and what the source is. We have never professed that our data is perfect. We try to make it the best that it can be, but we're also very open about where it still needs to go. And we're up front with our membership about where our strategy is. So right now we're mainly administrative claims data. We know that the docs, and myself as a user, have wanted clinical data. So we are actually looking at ways now to automate bringing in lab values and EHR values beyond the coded administrative claims data. Statistically, it's only going to improve the models by probably about 5%. But for face validity, and clinicians really buying in and using it, it will be huge. So now they can really see that the person with that low hemoglobin was really recorded as being anemic, versus the physician just circling the lab value with an arrow pointing down. The coder would not be allowed to pick that up; they'd have to acknowledge it in more form than a symbol on a chart. So I think that's helpful. Very excellent.
Now, do you guys have a so-called CDO? Are you the equivalent of a Chief Data Officer? Or what about that role within the organization? Yeah, Steve Muir, our Senior Vice President of Comparative Data and Informatics, would probably serve as the closest thing to our Chief Data Officer. He's more on the strategic end; I'm more on the operations end of that. We do have a whole data team, headed up under Steve and myself, that really focuses on the data-integrity piece. So we do have a structure that's fairly similar to our members'. How is that role evolving? Is it relatively new, and where do you see it going? I think the role has been fairly stable, other than in dealing with the multiple data sources. I think our biggest challenge is that our databases have grown up just like the hospitals'. So we have a database for the hospitals, and we have a database for the faculty practice, the physician professional-fee data. Until recently, those two databases were not able to talk with each other, and contractually they were even contracted by separate entities, even within some of our same members. So I think the role of the Chief Data Officer is really the cleanliness of the data, the data integrity. We've set up some really good, solid processes, so that's now more maintenance; now more of the challenge is, how do we bring data from different data streams together, keep that same high quality, and link the databases? Yeah, and that's a challenge we've talked about earlier today: when you start bringing together different sources of data, it can change the equation, and evolving your data quality programs to adjust to that can be a challenge, but it's critical. Yeah, absolutely. So I'm curious, one of the evolving technology areas, if you will, that we've been covering is the so-called industrial internet.
And around the idea of bringing data from industrial equipment, and that includes medical equipment, into the equation when you're talking about analytics, in this case in a healthcare scenario. So bringing in data from your MRI machines, from your X-ray machines, and other types of equipment. What are your thoughts on that? I mean, it sounds like we're certainly in very early days, but increasingly these machines are creating more and more data; some of them have sensors on them, things like that. Potentially even patients themselves are creating data; they've got sensors on them. I'm just curious, again, as a practicing physician and a physician at UHC, how do you see that evolving, and does that offer, I guess, probably both potential benefits but also complications as well? Yeah, it's absolutely a double-edged sword. Some of the data that you can get from that is phenomenal. So even in the past, when you had to rely on a nurse, usually, or a tech writing down a blood pressure, even if they were entering it into an electronic health record, transposing a number could be huge. With a machine automating that number in, it is what the machine recorded. That being said, a lot of these automated machines will generate data points every minute, every five minutes, every ten minutes, and give you tons of information, and the question is whether sometimes it's information overload. So if I had one abnormal blood pressure because I moved at that point in time, but everything else was normal, and that sends an alert and sends the code team down to my bedside when I'm totally fine, that's problematic. It actually causes a lot of false alarms, which then cause fatigue, and then when the real deal happens, people don't come. The other piece of that is, I think the ability to have that data and download it is great, but a human still needs to be looking at it and validating: do those numbers make sense?
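One common mitigation for the alarm-fatigue problem described above is a debounce rule: only alarm when several consecutive readings are out of range, so a single motion artifact doesn't page the code team. A minimal sketch, with hypothetical threshold values and window size (not clinical guidance, and not a description of any specific monitoring system):

```python
# Sketch of a simple debounce rule for vital-sign alerts. Thresholds and
# the consecutive-reading window are hypothetical illustration values.

def should_alert(systolic_readings, low=90, high=180, consecutive=3):
    """Alert only if the last `consecutive` readings are all out of range."""
    if len(systolic_readings) < consecutive:
        return False
    return all(r < low or r > high for r in systolic_readings[-consecutive:])

print(should_alert([122, 118, 210, 125, 121]))  # one artifact at 210 -> False
print(should_alert([122, 118, 195, 200, 210]))  # sustained high values -> True
```

The trade-off mirrors what's said in the interview: a stricter rule suppresses false alarms at the cost of a short delay before a real sustained abnormality triggers an alert.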
Because a lot of times I would see, even with some of the early automation that dumped vital signs in, values that were still within thresholds but totally did not make sense, and so there has to be a way to do some data cleanup. I think the opportunity for home monitoring data to be sent to the doctor's office, to see whether someone is being compliant, or to see how the medication's working, is great. The challenge will be: if I have that diabetic out there, I don't want, or need, or have time to look through 300 blood sugar sticks over the course of a set time period. If it's able to take that data and send me a trend, that the person was compliant 99% of the time within the range that I set, and I'm able to take that and digest it, I think the opportunity is great to manage more people aggressively as outpatients, keep them out of the hospital, and have better healthcare outcomes. So I think you're right, it's at the very early stages. I think there's opportunity, but I think we have to be very leery of alert fatigue, and of garbage in, garbage out, and not just archive and keep every piece of data because we have it. Very interesting. Excellent, all right. Listen, David Levine, thanks very much for coming on theCUBE and sharing your experiences, and good luck going forward. Thank you, I've enjoyed being with you guys. My pleasure. All right, keep it right there, we'll be right back with our next guest. This is theCUBE and SiliconANGLE's coverage of the MIT Information Quality Symposium. I'm Dave Vellante with Jeff Kelly, we'll be right back.
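The trend summary described here, condensing hundreds of home glucose readings into a compliance figure the clinician can digest, can be sketched in a few lines. The target range and sample readings below are hypothetical illustrations, not clinical thresholds:

```python
# Sketch of condensing home glucose readings into a compliance summary:
# the clinician sets a target range and sees a trend plus the outliers,
# not 300 raw values. Range bounds and data here are hypothetical.

def compliance_summary(readings, low=70, high=180):
    """Return the share of readings inside [low, high] and the flagged outliers."""
    if not readings:
        return {"pct_in_range": 0.0, "outliers": []}
    in_range = [r for r in readings if low <= r <= high]
    outliers = [r for r in readings if r < low or r > high]
    return {"pct_in_range": round(100.0 * len(in_range) / len(readings), 1),
            "outliers": outliers}

readings = [95, 110, 160, 172, 240, 101, 88, 65, 130, 150]
summary = compliance_summary(readings)
# summary["pct_in_range"] -> 80.0; summary["outliers"] -> [240, 65]
```

The point, as in the interview, is that the raw stream is archived or discarded by policy, while the physician receives only the digestible aggregate and the exceptions worth acting on.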