 It's a pleasure to be here. Can you hear me? Is this mic on? Okay, great. So I'm Ken. I'm from Utah, and I'm going to be providing some external feedback and thoughts on recommendations for electronic phenotyping. As some disclosures, in the past year I've been a consultant or a sponsored researcher on clinical decision support, primarily for the Office of the National Coordinator for Health IT, and also some for Hitachi and McKesson InterQual. Because I don't think I know many folks here, some background: on one hand, I'm primarily operational. I'm currently allocated about 80% effort to operational duties. I'm Associate Chief Medical Information Officer at our health system, and I oversee our operational clinical decision support work, things like the Epic best practice advisories, health maintenance, the medication alerts, that kind of thing. I'm the person who oversees the regular Epic work around that. I've also done a decent amount of work in quality measurement, doing things like building quality measures used for physician compensation in our community clinics. And I'm spending a lot of my effort now on interoperable applications and services that we build on top of our Epic EHR, things like SMART on FHIR applications and external decision support that we're integrating. So that's one hat. On the other hand, I've been engaged a lot in standards development and implementation. I've been Co-Chair of the HL7 Clinical Decision Support Work Group for a number of years. A number of years ago I founded OpenCDS, actually with NHGRI support through a K award, and now we use that across our organization, a number of companies, et cetera, for providing clinical decision support that's standards-based and all open source. 
I also coordinated efforts from the Office of the National Coordinator called Health eDecisions and the Clinical Quality Framework, which were initiatives around developing and validating standards for quality measurement and decision support. I'm also a member of the upcoming Health IT Advisory Committee that's just getting set up. So with that background, I'm going to go right into some of these questions. The first one is: how can eMERGE improve upon the current labor-intensive phenotyping towards more fully automated phenotyping methods, to increase phenotyping efficiency and validity using EMRs? I'll just note here that George talked some about the differences in purpose. For research cohort purposes, you really want high positive predictive value; you don't care if you miss a bunch of patients. For decision support purposes, you care about catching as many patients as possible, because a doc doesn't like it when you miss 40% of their patients and say, oh, for the other 60% we'll make recommendations. Along the lines of the clinical angle, there are also differences in those approaches in terms of how scalable you want them to be. If this is meant primarily for research by eMERGE sites, then I think it's totally fine for it to be a highly resource-intensive effort that costs millions of dollars of research funding to accomplish; that's appropriate. If you want to get towards scaling this to health systems, academic and community hospitals with relatively minimal resources and a lot of competing demands, then you have to think about how to scale without all those resource needs. With that in mind, my biggest big-picture recommendation is pretty straightforward: learn from and synergize with related efforts. 
I think this is a straightforward recommendation, but it's actually really hard to execute on; it's just not something we typically do very well. Underlying this is first understanding that electronic phenotyping is really a common problem encountered in other areas, certainly beyond genomics. If you think about most health IT, operational informatics, and research work we do, a lot of it comes down to how you go from data to information. So this goes well beyond genomics and beyond eMERGE. There are a lot of related efforts: efforts in data standardization, clinical decision support, electronic clinical quality measurement. What this means is that if you synergize, you will have more resources, because there are more people trying to solve the problem, and, importantly, you could have more adoption. One of the challenges I've encountered in my career working on standards in, say, the decision support space is that when you create standards just for your area, the EHR vendors don't adopt them as much. The key question here is: can you come up with standards that the EHR vendors will natively adopt, that you don't have to wrangle and fight with to make happen? I looked at the roster and I didn't actually see any EHR vendors coming to this conference, and I think that's something to think about. In the end, most of us use commercial EHR systems, and we need what we do to be supported in those commercial EHR systems. Going with an approach those vendors are looking to support is really, really important, because otherwise there's a high chance that when they adopt something, you'll have to rework everything to meet their approach. So now let me get into some specifics around that notion of working with others and using what I think are important emerging trends. 
One is that I'd recommend using the HL7 Clinical Quality Language, or CQL. This was developed by the ONC and CMS Clinical Quality Framework (CQF) initiative. There are a lot of reasons why you'd recommend a particular standard, because it's better or more semantically pure, that kind of thing. What I've learned is that it's really important to understand where the industry trend is going, and I think the industry trend is going this way, primarily because CMS has announced that it'll start using CQL for its electronic clinical quality measures. Essentially, that means if you want to get paid by CMS, you'll need to support the standard; I think that's the way it's going to happen. So I think EHR vendor support is likely to start emerging, and aligning with that is a really good idea. I won't go into detail, but this slide shows what CQL looks like in its human-readable form; this one is a quality measure for diabetes foot exams, and obviously it can do much more complex things. It's like any other standard. We've had numerous standards, and a lot of times the gist isn't whether one is better than another; it's whether vendors are going to adopt it or not, and I think this one probably has the best chance right now of being adopted. Along those lines, I would recommend building on the FHIR US Core profiles. This is where EHR vendors are focusing their support for FHIR APIs, which is the main approach they're now using for exposing their data. Anyone who's worked with this knows it's far from perfect. For example, for medications, they'll encode the medication codes using RxNorm codes, but there's nothing in the US Core FHIR profiles that says the route needs to be an FDA code. So we end up pulling it from our Epic EHR and we get institution-specific route codes that we have to map. There are things like that that have to be addressed. 
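To give a rough feel for the kind of logic a CQL quality measure encodes, here is a minimal sketch in Python rather than actual CQL (the diagnosis and procedure codes, field names, and record shapes are all hypothetical; real CQL is evaluated against QDM or FHIR data models by a CQL engine):

```python
from datetime import date

# Hypothetical code sets for illustration only; a real measure would use
# curated value sets (e.g., from VSAC), not hard-coded strings.
DIABETES_CODES = {"E11.9", "E10.9"}   # example ICD-10 diabetes codes
FOOT_EXAM_CODE = "G9226"              # example foot-exam procedure code

def in_measure(patient, period_start, period_end):
    """Rough analogue of a CQL measure: diabetic patients (denominator)
    who received a foot exam during the measurement period (numerator)."""
    has_diabetes = any(dx["code"] in DIABETES_CODES
                       for dx in patient["conditions"])
    has_foot_exam = any(
        proc["code"] == FOOT_EXAM_CODE
        and period_start <= proc["date"] <= period_end
        for proc in patient["procedures"]
    )
    return has_diabetes, has_diabetes and has_foot_exam

patient = {
    "conditions": [{"code": "E11.9"}],
    "procedures": [{"code": "G9226", "date": date(2018, 3, 1)}],
}
print(in_measure(patient, date(2018, 1, 1), date(2018, 12, 31)))  # (True, True)
```

The point of CQL is that this denominator/numerator logic is written once in a standard, human-readable form and then executed anywhere a conformant engine exists, instead of being re-implemented per institution as above.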
But I think this offers a pretty good baseline with fairly wide support. My recommendation to this group would be: don't ignore this. Engage directly with it for the things you want, try to get them worked on here, and work with the vendors to get them supported. Essentially, the process ends up being to convince the vendors, the community, and the healthcare systems that this is a really important use case that must be supported, and then work with the vendors to make sure it gets supported. This slide just shows what that looks like on the FHIR website. It's really not rocket science; it specifies things like, okay, the status of the condition needs to come from this value set, that kind of thing. These are things we define everywhere; every single group that works on something like this is going to define them, and you can argue about whether this or that is the right approach. But the key question is: is your vendor going to support it? Because if it doesn't, then you're going to have to do the mapping. Here I'll just call out that there's really important work going on in two initiatives, both led by Stan Huff, CMIO at Intermountain. One is the HL7 Clinical Information Modeling Initiative, or CIMI. The insight here is that if you want true interoperability, you really need detailed clinical models. It's insufficient to say you'll have an observation with a code and a value; anyone who's tried to do this realizes that just doesn't work sufficiently. What this effort is trying to do is get us away from a position where, every time you try to implement a standard, you make judgment calls: should I implement using this structure or that structure? Is it a procedure or an observation? What should I use for this code? What should I use for the value? What should I do for the units? Should I add a body site? Anybody who's implemented this knows that this is what you do. 
It works for pilots. It works for highly resourced settings where you share that kind of information. It does not scale. I think this has been the key issue all the way back to the Arden Syntax days, when this part was left out in the curly braces. We keep kicking the can on this one, and if we don't solve it, everything is going to be built on a house of cards. So I think this is super important. This effort includes tooling to generate and leverage FHIR profiles, and I do think phenotyping efforts, at least for scaling, will likely fail without this work. So my recommendation is: work closely with Stan, work on CIMI. Another relevant effort is the Healthcare Services Platform Consortium, or HSPC. This includes efforts to build and share interoperable FHIR interfaces prior to native Epic or other EHR vendor support. That is to say, we don't need to wait for the vendor to support what we need; we can build it and we can share it, and HSPC is engaged in that effort. I think that's also really important, because for your use cases or our use cases, we don't want to wait three years until the vendor supports something. We can support it now and share it with each other. This slide shows, for example, a production SMART on FHIR app that we have for bilirubin management. The thing I'll note here is that for a number of the items you see supported, we had to create custom interfaces. For example, the phototherapy that's documented in the nursing documentation and shown here: that's a custom interface. Here we have the mother's labs, the blood type and indirect Coombs: that's a custom interface we built on Epic so that we can say, who is the mother of this patient? Then we make a FHIR API call to that mother's record and pull in the data. And we're going to be sharing all of this with other Epic customers, free and open source, so that we can build on each other's work. 
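For flavor, the mother's-labs pull described above can be approximated as a standard FHIR Observation search once the custom mother-link interface has resolved her record ID. This sketch only constructs the query URL; the base URL, patient ID, and LOINC codes are illustrative placeholders, and the institution-built linkage itself is not shown:

```python
from urllib.parse import urlencode

# Placeholder endpoint; a real app would discover this via the SMART launch.
FHIR_BASE = "https://fhir.example.org/api/FHIR/R4"

def mother_labs_query(mother_patient_id, loinc_codes):
    """Build a FHIR Observation search URL for the mother's lab results
    (e.g., blood type and indirect Coombs) given her resolved record ID."""
    params = urlencode({
        "patient": mother_patient_id,
        "code": ",".join(loinc_codes),   # OR semantics across codes
        "category": "laboratory",
    })
    return f"{FHIR_BASE}/Observation?{params}"

# Illustrative LOINC codes for blood type and antibody screen (verify
# against LOINC before real use).
url = mother_labs_query("mom-123", ["882-1", "1007-4"])
print(url)
```

The searchable part is plain US Core FHIR; the novel piece the talk describes is the custom interface that answers "who is the mother of this patient?" so that this query can be issued against her record at all.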
And I think this is the kind of thing the community can do to start saying, hey, how do we build on where the EHR vendors are heading and still support our needs? I'll go through the other questions more briefly; I have much less to say about them. In terms of machine learning and other computational techniques: at least for scaling, I'd recommend focusing on basic approaches that are easiest to scale, such as rule-based processing of structured data. Again, that's not as relevant if you're willing to do this in a highly resourced environment, but in a lower-resource environment, it's definitely the easiest. NLP, machine learning, and that kind of thing are certainly useful; we use them for a variety of use cases in our institution as well. But they're definitely a higher bar and more resource-intensive, so I think they should be used judiciously. And I'd certainly synergize with other related efforts. I'll just mention here an NCI U24 grant where we're doing these kinds of things, population management, NLP, also asking patients questions, et cetera, and doing it on a standards-based stack that we'll open source. A lot of folks are doing similar kinds of things; we just don't generally collaborate, because it's not a natural thing for us to do, it takes more resources, and it's often not in the quote-unquote scope of work that we're tasked to do. The last question is: how can eMERGE assess phenotype comparability across diverse patient populations and diverse healthcare settings? Some recommendations: I do think, and I agree, that when you try to do this kind of work with machine learning or NLP, the thing that always gets you is the amount of effort it takes to create the gold standard. A typical operational conversation goes like this; I had this conversation last week: hey, Ken, can you scale your no-show prediction model across our institution, and can you get it done by next week? That's the time scale a lot of the industry moves at. 
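To make "rule-based processing of structured data" concrete, here is a toy phenotype rule in Python. The codes, medication check, and glucose threshold are illustrative placeholders, not a validated eMERGE algorithm:

```python
def t2dm_phenotype(dx_codes, rx_names, max_glucose):
    """Toy rule-based phenotype in the spirit of eMERGE-style algorithms
    (grossly simplified; real algorithms combine many more criteria):
    case = a T2DM diagnosis code AND (a diabetes medication OR glucose > 200)."""
    has_dx = any(code.startswith("E11") for code in dx_codes)  # ICD-10 T2DM
    on_med = "metformin" in rx_names                            # illustrative
    hyperglycemia = max_glucose is not None and max_glucose > 200
    return has_dx and (on_med or hyperglycemia)

print(t2dm_phenotype({"E11.65"}, {"metformin"}, 250))  # True
print(t2dm_phenotype({"I10"}, set(), 90))              # False
```

The appeal for scaling is that rules like this run directly against coded EHR data any site already has, with no model training, annotated corpora, or NLP pipeline to stand up.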
And when you say, well, it's going to take us 12 months to do the chart audits with physicians, they just say, okay, forget it, we're just going to do what we do. So I think establishing gold-standard phenotypes through less costly approaches is probably the critical issue. Once you create a data set, you can hand it off to any competent NLP or machine learning person and they can run their magic; the key is having that data set. One thought here is that you could leverage places where significant effort has already gone into establishing gold standards, like the American College of Surgeons NSQIP registry. Also, we make fun of billing data, but there are certain kinds of billing data that institutions spend a lot of effort to get right, because if they don't get it right, they'll have a compliance issue, and if they do get it right they can, for example, collect $4,000 more on an inpatient admission. So I'd say look for areas where people are already spending effort to do gold-standard chart audits; in a lot of institutions, there are dozens of people running around manually chart-auditing things, and the question is, why not leverage that if it matches your use cases? George already touched on this, but making manual phenotyping more efficient I think is important. I'll just note that we ourselves have tried to figure this out, because we needed a more efficient way to do this in low-resource settings, and found, for example, that in certain circumstances it actually makes sense to provide the results of the electronic phenotyping to the reviewers, so that they can be quicker about finding potential errors and such. This is, again, a critical piece: manual phenotyping is very expensive, and figuring out how to make it cheaper is really important. Another important part is leveraging increasing EHR consolidation. I think this is well known. 
If you count the EHR vendors that 90-plus percent of us in this room are using, it's probably one or two vendors, is my guess. So just leverage that. When I take things like queries we've written on our Epic Clarity database and share them with other institutions to try to collaborate, more likely than not, it just works, and they just get the results back. I think that's a good thing, and I think we should potentially focus on places where there are major EHR vendors and come up with optimized ways of supporting those technologies, because it could be a shortcut compared to the amount of mapping you might otherwise require in certain use cases. So, to summarize, and I think I'll be about 45 seconds over: the biggest recommendation, bar none, is to learn from and synergize with related efforts, acknowledging that it sounds nice but we almost never do it. There's almost never funding that says, hey, we want you to work with other people who are working on similar things outside your network. It just almost never happens; I've seen it over and over again, and it's just a hard thing to do. The rest are what I discussed. The key thing keeps coming down to figuring out how to collaborate with people you don't typically collaborate with. Think about things like how you go to HL7 and collaborate. A lot of the people in this room I've never seen at HL7, for example. There are some folks from eMERGE I know who go there, so I'm not saying nobody goes, but it's the kind of thing that takes effort and resources. Collaborating with others working on similar things is really, really important. Thank you.