Hey, everyone. Welcome to Grand Rounds today. I'm excited to have you all here. Can we, should we recognize the visiting medical student? Wait, who... I've got some glare here. Who's the... hello, Nicholas Tan? Where are you coming from, Nicholas? Okay, great. Happy to have you here. And I think we'll get a chance to talk to you throughout the day. So today for Grand Rounds we happen to have a visiting professor, Dr. Joshua Stein, coming from the University of Michigan. Dr. Stein is an expert in big data. And really, if you look at big data research for ophthalmology, so much of it has come through Dr. Stein, and he's been so involved with it. So we're really, really fortunate to have you here. I did a research fellowship at Michigan and Dr. Stein was my mentor there. And he's just been super supportive and made a big difference for my career and really my life and what I've been able to do. So super grateful for you, Josh, and also to have you here visiting. So we'll let you present. Good morning, everyone. And thank you so much for having me. One of the most gratifying things about being in academic ophthalmology is seeing your trainees, your residents and your fellows, go on to do great things. And, you know, Brian is a great example of someone who I've been so fortunate to work with and see now kind of blossom into a great clinician scientist. So thank you for having me. And today I'm gonna be talking about the future of big data in ophthalmology. These are my financial disclosures. Let's see if I can show up on the screen here. Right, that's all right. I'll be talking about some research that has been funded by the National Eye Institute, a previous R01 and a current R01. The other disclosures are not relevant. So there are several key puzzle pieces that have nicely come together to enable researchers to take advantage of big data. The first key puzzle piece is the digitization of personal data, whether it's the transition from paper to electronic health records, the proliferation of all the smart technology that keeps track of the temperature we like to set our thermostats to, to the music we like to listen to, to apps that keep track of all of our likes and dislikes, to Fitbits that keep track of our physical activity, to devices that can give exquisite views of the structures of the eye in great detail, and to the ability to sequence the entire genome and do it at an affordable price. There's now a plethora of information on all of us and our patients that is out there. The second key puzzle piece is advances in computing power. As you can see from this graphic here, computers have gotten more and more powerful over the years. In 2015, there was a computer that surpassed the brain power of a mouse. Now, in 2023, they estimate there's gonna be a computer that surpasses the brain power of a human. And in many of our lifetimes, there may be computers that are equivalent to all human brains combined, which is a little scary. The third key puzzle piece is advances in data storage capacity. With all this data that is being collected, we need to be able to store it. Back in the 1980s, it cost over $100,000 to store a gigabyte of data. Fortunately, nowadays it just costs a few pennies.
And the fourth key puzzle piece is lots of really smart people who know how to work with, manage, and make use of big data and apply it to all sorts of interesting applications, whether it's telemedicine, disease management, genetics, drug discovery, wearables, et cetera. But there are also some challenges of working with big data. The first is information overload. The amount of data generated in two days is as much as all the data generated in all of human history before the year 2003. And with so much data out there, we need to be able to figure out what's important and what's not, or to be able to distinguish the signal from the noise. Another issue is that big data is messy. It's estimated that 80% of all the world's healthcare data is unstructured. By unstructured, I mean it's captured in free text. We all see this in our clinic notes. There are these paragraphs of information. And it can be challenging to get the useful information out of these paragraphs. But there's also a lot of utility in this information, so just ignoring it is not great either. So it takes sophisticated tools like natural language processing and other techniques to be able to work with some of this data. And I'll talk a little bit more about that later. Other challenges include differences in where the data is located and how it can be accessed, different files in different formats, the complexity of the data, the structure of the data, and then all sorts of regulations and requirements that can make it challenging to share data with one another. For those of us in the audience who are clinicians, there are additional challenges. Number one, there just aren't enough hours for us to process all this data. It's estimated that physicians spend less than five hours a month reading medical journals, and the amount of time estimated to be required to keep up with all the patient care guidelines is 21.7 hours a day. And for those of us who value sleep, that's just not possible. There are over 30,000 clinical trials and 23 million articles on PubMed. It's just very difficult to stay on top of all the exciting work that's going on. Another challenge is you can create this really cool artificial intelligence, machine learning algorithm that solves blindness and does all these great things. But if it's sitting in a basement lab and not accessible to clinicians, it really isn't gonna help with patient care. So there's a big challenge of how do we not only make use of the big data, but get it to a point where it can help us with point-of-care decision making in our clinics, in our ORs. And then finally, we need to keep in mind that when we see patients in clinic, we're just capturing a small snapshot of what's going on with them. We're usually seeing them for 10, 15 minutes, and there's just a subset of information about them that's captured in the electronic health record, but really the factors that are driving their health include genomics, lifestyle, environmental factors, all sorts of other factors besides what we're capturing just when they're seeing us. So the way I kind of see things, it's really like a universe of big data. And there are all these really cool, important parts of data, but the challenge is they're all very siloed. And what we need to be able to do is link these silos, be able to find ways to take proteomic data and microbiome data and genetic and genomic data and clinical data and imaging data and bring it all together. And then we can really learn a lot and be able to advance the field.
So I'm gonna shift a little bit to focus on ophthalmology now. And rather than thinking of a universe of big data, we'll kind of go to a galaxy of ophthalmology data. And what I've been trying to do these past few years is see whether we can link clinical data from our electronic health records with other pots of data, like lab data, data from our diagnostic test devices, like visual fields, medication data, OCT data, claims data, radiology data. If we can link these pots of data, that would certainly be a good start. So for the past several years, my team and I have been developing the Sight Outcomes Research Collaborative, or SOURCE. It's a consortium of academic ophthalmology departments, all of whom are on the Epic electronic health record system, and we're contributing all the patients with ocular diseases seen at our institutions. The sorts of data that we're capturing include patient demographics, all the structured data fields that you see on the screen when you're taking care of patients, visual acuity, eye pressures, cornea exams, billing and administrative data, all the ICD billing codes and CPT codes. At Michigan, we collect some patient-reported outcome data. Also included are radiology and pathology data, lab tests and results, and it's possible to grab data from operative reports and medications. It took me and my team a while to come up with code to pull all this data from the back end of Epic. And at the end of that whole process, I said, it's great that we're able to pull all of our own data. Wouldn't it be even better if we could share our code with folks at other sites that are also on Epic and see if they can pull their data in the same manner? And we've been sharing our code with colleagues across the country. The idea is that their IT folks can take our code and pull their data in the same manner. We have software that removes all the protected health information, so the data can be sent in, pulled, cleaned, harmonized, aggregated. We're able to link in ocular imaging and the ocular diagnostic test data like visual field data and OCT data. Hopefully in the future, we'll be able to integrate in genomic data. We haven't done that yet. And we create this pooled data set, this rich pooled data set that can be used by researchers at any of the active sites for research or quality improvement projects. So these are all the various academic centers throughout the country that are involved in SOURCE. It's a nice, diverse group of institutions spanning the country. As you can see, under Brian's leadership, the University of Utah, I'm very pleased to say, is actively involved in SOURCE. So some of the stuff that we're doing, you guys can certainly get involved with. It takes jumping through a whole bunch of hoops to get all the data sharing paperwork done, to be able to share your data, as you can imagine. And at this point, we have over a dozen sites that are actively involved in SOURCE, meaning they've sent all their historical data and they're actively contributing data. The other sites are in the process of jumping through those hoops and pulling their data. Getting into a little more detail, we're able to look at all the different fields of the eye exam and parse out relevant pathology. So here you can see for the optic disc, we have code that can search the different fields, identify all the various pathology, and flag it. So it makes things easier: if you're a researcher and you wanna do studies looking at patients with optic disc pits, we can find all those patients for you.
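As a rough illustration of that kind of field-level flagging, here is a minimal sketch in Python. The field names, conditions, and regular expressions are invented for illustration; this is not the actual SOURCE code, just the general pattern of turning structured exam text into boolean flags a researcher can filter on.

```python
import re
import pandas as pd

# Hypothetical exam text pulled from the structured optic disc fields;
# in practice this would come from the Epic back-end extract.
exam = pd.DataFrame({
    "patient_id": [101, 102, 103],
    "optic_disc_text": [
        "c/d 0.5, small temporal optic pit OD",
        "pink and sharp, no pallor",
        "tilted disc with peripapillary atrophy",
    ],
})

# Map each pathology flag to the phrase variants clinicians tend to use.
PATTERNS = {
    "optic_pit": r"\boptic (disc )?pit\b",
    "tilted_disc": r"\btilted disc\b",
    "pallor": r"\b(?<!no )pallor\b",   # crude guard against "no pallor"
}

for flag, pattern in PATTERNS.items():
    exam[flag] = exam["optic_disc_text"].str.contains(
        pattern, flags=re.IGNORECASE, regex=True
    )

# A researcher studying optic disc pits could now pull the flagged patients.
print(exam.loc[exam["optic_pit"], "patient_id"].tolist())
```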
Yeah, so I think it's possible as long as the stuff is being inputted into Epic; there are ways to capture the historical stuff. I think the challenges are, one, you need to use some sort of natural language processing to parse out all the relevant information, but that can be doable. The other big challenge is that in those paragraphs of text, sometimes there's PHI in there. So sharing that data gets tricky if there are patient names in a bunch of free text; all the data that's in SOURCE is totally de-identified. So that's where it gets a little challenging. But I think there are ways to overcome those challenges, and it should be possible; just like in Microsoft Word or Adobe you can find terms, you can find surgery names and things and be able to parse them out. It's more about doing it in a way that's safe and that doesn't compromise PHI. So in terms of surgery data, there's information on type of anesthesia, eye laterality, where the surgery took place, and all the time stamps. When we do our surgeries, there's a start time, an end time, and a scheduled time; at least at Michigan, there are about seven or eight different time stamps throughout a given case. So one can really delve deep into how long it's taking to do different parts of the case. There are preoperative diagnoses, up to five CPT codes per surgery, and all the surgical supplies, implants, intraoperative medications, and data from operative reports. In terms of medications, there's all the standard stuff that we routinely document. It is also possible to integrate metadata from diagnostic tests; this is all the numerical values that show up on the diagnostic tests. So you can see at this point, we have, I haven't updated this slide recently, but we have probably more than 3 million patients, over 15 million office visits for eye-related problems, over 500,000 ocular surgeries, many millions of lab tests and medications, millions of OCT scans, and hundreds of thousands of visual fields. So the data is growing, and as more sites join, these numbers will continue to rise. What can SOURCE be used for? Certainly clinical studies, outcomes research, integrating data into deep learning or machine learning algorithms. There's a submission to ARVO using SOURCE data that involves one of those applications. Hopefully in the future, if we can integrate genomic data, some sort of genotype-phenotype association studies, quality improvement initiatives, and potentially multi-center clinical trial recruitment. So I'm gonna now give a few examples of studies involving SOURCE data. This is the focus of my current R01, and it's a topic I call enhanced phenotype identification. So the way I see it is, for researchers to really take advantage of big data, it's really important to identify and classify diseases or phenotypes of interest. And one would think that should be easy, but if you look at almost all the big data studies that are out there, whether it's with Medicare claims data or with IRIS Registry data, we're all just relying exclusively on the billing codes to find patients with whatever the condition is: keratoconus, glaucoma, macular degeneration. And as good as the billing codes can be, there are some issues with the codes. Number one, some conditions just don't have billing codes, like nanophthalmos, and even for some potentially useful conditions like vitreous loss after cataract surgery, there's no specific billing code.
There are also issues with miscoding, upcoding, downcoding, and insufficient coding that can all potentially mess things up when one tries to use the data for research purposes. And then, prior to 2015, we were using ICD-9 codes; at that time, there were about 14,000 codes. When we transitioned to ICD-10, the number of codes ballooned up to nearly 70,000. So that's led to some issues. I kid you not, if you look in the code book, these are some actual codes. And I don't know about here in Utah, but we just don't have any patients in Michigan that have been struck by turtles or sucked into jet engines. So clearly some of the folks who are creating these codes are probably not clinicians, and you have to be careful; there can be some garbage in, garbage out. Even if we look at a more common condition like diabetic retinopathy that we see every day, right now there are 18 separate codes for diabetic retinopathy. And I'm sure all the retina specialists in the audience can get the right code. But as a glaucoma specialist, if I'm seeing a patient and I'm not dilating them, my chance of getting the right code is probably one in 18. So while there are some advantages to ICD-10 codes over ICD-9 codes, in terms of better capture of laterality, more granular capture of certain diseases assuming they're coded properly, and some documentation of disease severity, again if it's coded properly, I believe that for us to really make use of big data, we need to move beyond just relying on billing codes. So here's an approach that we developed to see if we can move beyond the billing codes, leveraging the electronic health record data. And I'm gonna use exfoliation syndrome. I'm a glaucoma specialist, so this is a condition that's near and dear to me. As you all know, patients with exfoliation syndrome have these classic clinical findings: peripupillary transillumination defects, this dandruff-like material on the lens capsule, phacodonesis, iridodonesis. This is a study we published several years ago using Medicare claims data, looking at associations with exfoliation syndrome. You can see in this study all we had were the billing codes, and that's what we used. So now that we have all this electronic health record data, we can certainly continue to use the billing code data. At the University of Michigan, and I'm guessing here as well, you can't close your chart without selecting at least one billing code for every encounter. So there will continue to be billing codes, and I'm not saying we should ignore them. Beyond the billing codes, there's the patient's problem list, and this is a running list of all the different medical problems that have been flagged for the patient while they've been in the health system. There's also the quote-unquote smart data. These are the boxes in the actual exam, and here you can see it's possible for clinicians to document evidence of this condition. And then finally, there's the unstructured or free-text data in the HPI section and in the assessment and plan section. Being able to locate patients with a condition using free text requires some sophisticated tools, and one can use tools like natural language processing, which can search for words or phrases of interest in the electronic health record. For exfoliation syndrome, we found 14 different ways that the various clinicians have mentioned or documented the condition.
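To give a sense of what that phrase search involves, here is a minimal sketch with an invented, shortened list of variants (the study's actual list had 14 entries). As the next paragraph explains, negated and family-history mentions would still need to be screened out on top of simple matching like this.

```python
import re

# A few of the ways clinicians might write the condition in free text
# (illustrative only; the actual list in the study contained 14 variants).
XFS_VARIANTS = [
    "pseudoexfoliation", "pseudo-exfoliation", "exfoliation syndrome",
    "pxf", "pex", "xfs",
]
XFS_REGEX = re.compile(
    r"\b(" + "|".join(map(re.escape, XFS_VARIANTS)) + r")\b", re.IGNORECASE
)

def mentions_xfs(note_text: str) -> bool:
    """Return True if any variant of the condition appears in the note."""
    return XFS_REGEX.search(note_text) is not None

print(mentions_xfs("Assessment: PXF glaucoma OD, continue latanoprost."))  # True
print(mentions_xfs("Primary open angle glaucoma, stable."))                # False
```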
But for the NLP to be effective, it really needs to be sophisticated enough to not flag negative mentions, like if it says no record of the condition, or if it's exfoliation but it really has nothing to do with the eyes, like in a burn patient, or if it's family history where the patient themselves doesn't have the condition. So we put together this list of all the different places we could look within the electronic health record to see whether this patient really does have exfoliation syndrome. One can look at demographics; patients with exfoliation syndrome are more likely to have a certain demographic profile. Certainly the more visits they come to see us, the more likely they'll get diagnosed with any of our conditions, so that's something we can consider. We can look at the information in the iris part of the exam, the lens part of the exam. We can look for transillumination defects, lens dislocation, phacodonesis. We can look at the ICD-9 and ICD-10 billing codes, the problem list, and then the free text. And the question we had was: which of these parameters, or groups of parameters, can best predict which patients really have the condition? So to develop a gold standard, I recruited four of my colleagues and we had them each review 50 charts. Some of these were patients with exfoliation syndrome, some without, and they had to grade each chart as definite evidence, possible evidence, or definitely no evidence of the condition. And then we created this lasso regression model where our dependent variable was definite evidence of the condition as judged by our glaucoma specialist experts, and we put all these independent variables into our model. The starred items are the ones that were most predictive of which patients actually had exfoliation syndrome: mentions in the iris exam, mentions in the lens exam, the transillumination defects, the problem list, and the free text. Interestingly, the billing codes were not one of the most important predictors in our model. Once we have a model, we can apply that model to every patient in SOURCE. At that time, we only had about 100,000 patients; now we have many more. The model assigns everyone a predicted probability from zero to 100% as to their likelihood of having the condition of interest. Not surprisingly, most patients had a score of less than 10%, but we could identify over 350 patients with scores over 90% and 83 patients with scores of 99%. We then validated our algorithm by having the glaucoma specialists each review 50 more charts, some with scores less than 20% and some with scores greater than 90%, to make sure that those patients were properly categorized. And we found our positive predictive value to be over 95% and our negative predictive value to be 100%. So at least with this particular condition, it seems like the algorithm is working well. We also found that when there's billing code documentation of exfoliation syndrome, about 90% of the time it was visible in the free text or in the smart data, but the converse wasn't true: when there's evidence in the free text or the smart data, only about 20 to 40% of the time did the clinician actually bill for that condition rather than just open-angle glaucoma or something else. So one of the things we're doing now with my current grant is expanding this to other conditions besides exfoliation syndrome.
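Here is a minimal sketch of that style of model, using scikit-learn's L1-penalized logistic regression as a stand-in for the lasso. The feature names and data below are synthetic, so this only shows the shape of the approach (EHR-derived features in, a 0 to 100% predicted probability out), not the published model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500  # toy cohort; the real model was fit on expert-adjudicated charts

# Illustrative features per patient (names are hypothetical):
X = np.column_stack([
    rng.poisson(1.0, n),      # non-negated free-text mentions
    rng.integers(0, 2, n),    # iris exam mention
    rng.integers(0, 2, n),    # lens exam mention
    rng.integers(0, 2, n),    # transillumination defects documented
    rng.integers(0, 2, n),    # on the problem list
    rng.integers(0, 2, n),    # billing code present
    rng.integers(1, 20, n),   # number of eye clinic visits
])
# Toy "gold standard": definite evidence as judged by expert chart review.
y = (X[:, 0] + X[:, 1] + X[:, 4] + rng.normal(0, 0.5, n) > 2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# L1 penalty drives uninformative coefficients to zero (lasso-style selection).
model = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
model.fit(X_train, y_train)

# Apply the model to every patient to get a 0-100% predicted probability.
prob = model.predict_proba(X_test)[:, 1] * 100
print("patients with >90% predicted probability:", int((prob > 90).sum()))
print("coefficients:", np.round(model.coef_[0], 2))
```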
And one of the applications of having this is, if everyone has a score from zero to 100%, imagine how easy it will be to identify patients for enrollment in clinical studies and clinical trials, rather than asking our residents or our medical students to go through hundreds and thousands of records to find patients of interest. You know, if everyone has a score from zero to 100%, we can just say, give me everyone whose score is greater than 80% or 85%, and the more certain one is that a population has that condition, the stronger the power and the fewer patients that would need to be enrolled in the study. Another application: I think many of us are familiar with GWAS studies, where you have a target phenotype and then you look to see what genetic loci are associated with that phenotype. As we get better at identifying phenotypes using approaches like this, we may in the future be able to do studies where you have a target genotype and be able to identify all the different phenotypes associated with it. Some of this work has been published in a JAMA Ophthalmology paper. Shifting gears, here's another example. This is work from my past R01 where we took data from SOURCE and used a machine learning technique called Kalman filtering to develop personalized, real-time forecasts of a patient's glaucoma trajectory and personalized menus of target intraocular pressures. So what is Kalman filtering? Kalman filtering is a forecasting and noise reduction technique that's useful for modeling complex, large systems. It was actually the technique used by NASA in the 1960s when they flew Apollo up to the moon, and it's routinely used in commercial aviation. More recently, researchers have been applying Kalman filtering to medical conditions, and what it does is combine a population-based understanding of disease evolution with individual patient characteristics to forecast future values of key clinical parameters. So how do Kalman filters work? Say you've got a spaceship and you want to go from Earth up to the moon, and you want to predict where that spaceship is going to be. If you want to predict the next point on the trajectory of where the spaceship will be, you can look at prior flights with similar spaceships. You can look at earlier coordinates from the same spaceship, and the more past coordinates one has, the better one can predict where it'll be at the next point in time. You can look at characteristics of the spaceship, wind speed, measurement error. So applying this to glaucoma: say we have patients whose glaucoma at baseline has a certain mean deviation, pattern standard deviation, and intraocular pressure, and we want to know what those values will be, say, five years into the future. Past flights with similar spaceships would be glaucoma progression dynamics from similar patients. So one can look at patients from clinical trials like OHTS and AGIS and CIGTS and some of the other major clinical trials that are out there to learn how our patients are behaving. Earlier coordinates from the same spaceship are past values of intraocular pressure or mean deviation or pattern standard deviation; again, the more past observations, the better you can predict where that patient is going to be in the future. The type of spaceship is like the characteristics of the patient, their age, sex, genetic predisposition. Wind speed is like patient adherence; usually we're going against the wind, and every once in a while the winds are at our back. And measurement error is like the variability in our eye pressure measurements and in how patients perform on visual fields.
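To make the predict-and-update cycle concrete, here is a minimal one-dimensional Kalman filter sketch tracking a single parameter such as mean deviation. The dynamics and noise settings are invented; the actual models are multivariate (IOP, mean deviation, pattern standard deviation) and are informed by the clinical trial data mentioned above.

```python
import numpy as np

# Minimal 1-D Kalman filter tracking one clinical parameter (e.g. mean
# deviation) that drifts with a roughly constant slope between visits.
# State = [value, slope]; all noise settings here are illustrative guesses.
F = np.array([[1.0, 1.0],   # value_next = value + slope * (one visit interval)
              [0.0, 1.0]])  # slope_next = slope
H = np.array([[1.0, 0.0]])  # we only observe the value, not the slope
Q = np.diag([0.05, 0.01])   # process noise (how much the disease can change)
R = np.array([[0.5]])       # measurement noise (test-retest variability)

x = np.array([[-2.0], [0.0]])   # initial guess: MD -2 dB, stable
P = np.eye(2)                   # initial uncertainty

observed = [-2.1, -2.4, -2.2, -2.8, -3.1, -3.0]  # toy visual field series

for z in observed:
    # Predict: project the state and its uncertainty forward one visit.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: blend the prediction with the new (noisy) measurement.
    residual = np.array([[z]]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ residual
    P = (np.eye(2) - K @ H) @ P

# Forecast five visits ahead by repeatedly applying the dynamics.
forecast = np.linalg.matrix_power(F, 5) @ x
print(f"current MD estimate {x[0,0]:.2f} dB, forecast in 5 visits {forecast[0,0]:.2f} dB")
```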
So here's an example of a patient where, for the first five visits, the Kalman filter is learning how that patient is behaving. It also has, in the background, lots of information that's been fed in from patients who were involved in clinical trials, and that helps guide how the Kalman filter will predict where they'll be. The red X is predicting the next point in time. So the first five measurements predict measurement six, then the first six measurements predict measurement seven, and so on. The blue triangles, rather than predicting just the next point in time, are predicting five years into the future. And you can see for this patient the filtered and the predicted values align pretty closely with the observed values. And it's a pretty stable patient, not very difficult to predict. Here's another patient. This is a patient whose glaucoma is progressing. And you can see that the Kalman filter does a pretty good job in terms of the filtered and the predicted values versus the observed values. One can also take these Kalman filters, train them for the first several periods, and then at period five ask: what happens if this patient's pressure stays at 15 for the next five years? What's going to happen to their mean deviation? What if their pressure were to get down to six and stay at six, or what if it were to go up to 24? And you can see for this particular patient, who is a pretty rapid progressor, if we can get their pressure down to six or nine, their mean deviation is relatively flat, but if their pressure is up in the high teens or 20s, they essentially fall off a cliff and go blind. Here's a different patient; you can see that whether the patient's pressure is nine or 15 or 24, there's very little difference in the change in mean deviation. So one can personalize these target IOP menus and then work with the patient and decide how aggressive you want to be in terms of management. Okay, shifting gears now, let's talk about predictive analytics in ophthalmology. How many of you guys are familiar with the movie Moneyball? Most of you. So for those who are not, this is a movie about the Oakland Athletics in the year 2002, and that year they had the third lowest payroll of all the major league baseball teams. There were teams like the Red Sox and the Yankees that had, you know, like a hundred million dollars higher payroll, and despite having such a low payroll, they outperformed almost all the other teams and almost won the World Series that year. And the question is, how were they able to do so well with a team with such a low payroll? And the answer is that they tapped into what's called sabermetrics. So while some of the other general managers were basing decisions on common metrics like runs batted in, stolen bases, or wins, the A's were digging deeper into the raw data to find better metrics to judge the value of a player, when to play them in a game, et cetera. So here's an example of predictive analytics in major league baseball.
This is a hitter, Bryce Harper, and you can see that one can dig deep into the raw data and plot, season by season, against left- or right-handed pitchers, every single at bat and every ball that Bryce hit across that entire time. And one can imagine, as an opposing manager, you can see based on these sorts of analytics, oh, he likes to hit the ball more to the right side, maybe I'll adjust my defense accordingly. Here's another example. This is a pitcher on the Dodgers, Clayton Kershaw, and you can see one can filter by season, by game type, by venue, against left- or right-handed batters, by how many men are on base, very granular detail, and you can generate these heat plots of exactly where he likes to pitch the ball and where the hitters like to hit the ball, and really get fine, detailed information. So what I was thinking is, well, if professional sports teams can take advantage of sabermetrics and predictive analytics, why can't we use these same techniques in medicine and ophthalmology? So this is a predictive analytics dashboard that we developed for SOURCE, and it's kind of similar to the ones I just showed for the major league baseball players. For example, this one is predicting reduction in eye pressure after SLT, or laser trabeculoplasty. So we've got the preoperative eye pressure and the postoperative eye pressure, and every dot represents a particular patient, or an eye, that's undergone the procedure. One can stratify by how far post-op one wants to look, the preoperative diagnosis, how much of the drainage angle was treated, who the surgeon was, the demographics of the patient. And here you can see, say you have a 63-year-old white male with exfoliation glaucoma come into your clinic and you're thinking you may want to do a laser trabeculoplasty on them. Wouldn't it be great to be able to pull up on your screen something like this, where you can show all the past patients that you've done this procedure on? If you look at the blue line and below, those are patients who had no reduction in their pressure or whose pressure went up after the laser, and then the second group had a small reduction, a medium reduction, and the ones all the way on the other side here had a greater than 40% reduction. You could pull up patients with a similar profile and be able to show it to your patient and have shared decision making and say, here are other patients like you; would you like to proceed with the laser or not? Similarly, from a research standpoint, with these plots one could ask, what's going on with the patients in group A versus the patients in group B? The patients in A had a great response to the laser; the patients in B had almost no response or their eye pressure went up. One can try to figure out what characteristics lead someone to be a strong responder versus a poor responder to the laser. This is the latest dashboard. It's kind of a small and busy slide, but it's an incisional surgery dashboard, again where one can look at eye pressures pre-op and post-op, look by glaucoma drainage device versus MIGS versus trabs and tubes, and stratify by race, by pre-op pressure, by post-op period, so all sorts of interesting ways of looking at the data. The last example I'm going to give is a machine learning algorithm to predict CME after cataract surgery. This is using some data from SOURCE. We identified all adults undergoing cataract surgery over a four-year period. We excluded
those who had fewer than 90 days of follow-up, and we excluded patients who had a pre-existing record of any type of macular edema. And if a patient had their second cataract surgery within the 90-day window, because that would make it complicated to tell whether the edema was from the first or the second eye, we excluded those patients as well. We were able to look at the outcome of interest using the information from the macular OCT scans, and we also corroborated that with information from the electronic health record to verify that they had CME. The potential risk factors that we were able to put into our models included demographics, social determinants of health, medical and ocular comorbidities, ocular characteristics, details of the surgery, and complications. So this is the sort of information we're able to pull out; there are a lot of details that could be relevant. We built a lasso regression model: we took all 73 of the features, put them in the model, and identified 29 features that best predicted which patients would develop CME. Then we had a training set, a validation set, and a holdout set, and we trained a random forest and an elastic net blender model and compared those to standard logistic regression. You can see we had over 10,000 patients in the training set and 2,600 in the holdout set, and here you can see that our two machine learning models did better than the logistic regression in terms of fewer false positives, better sensitivity, better accuracy, and a higher area under the ROC curve. Here you can see graphically that the two machine learning models, in blue, did better than the logistic regression, in yellow. In terms of the features most predictive of CME, we found age at the time of surgery was the most predictive feature; who the surgeon was was the second most predictive, followed by some characteristics like the area deprivation index and community distress index, which measure the level of affluence of one's community of residence, and then some other factors. Here we can see that for patients over 70, the risk of CME was much higher than for those under 70. The identities of all the surgeons are masked in SOURCE, but we can tell all the cases done by a particular person, and the "other" category are surgeons who had very few cases, so we lumped them all together; those are the ones that had much higher CME risk. This is again looking at social determinants of health and communities; actually, the patients with scores close to zero are those living in more affluent communities, and their risk of CME is higher than those in less affluent communities. Here, patients with low BMI were predicted to have higher CME. This one's kind of interesting, birth month and CME. When it came out, I first looked at it and I'm like, this is probably just some random garbage, but you can see that patients born in the months of July, August, and September had higher rates than those born in other months. But when I looked a little more into it, there may actually be a plausible explanation. There's some thought that maternal exposure to vitamin D during pregnancy can affect one's risk of inflammation, and there are various inflammatory conditions that patients born at certain times of the year are more prone to, and that may be explaining what's going on here. So once you have the model, you can apply it to all the patients in one's health system.
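Here is a minimal sketch of that modeling pattern: a random forest and an elastic-net logistic model blended together and compared against plain logistic regression on a synthetic, imbalanced data set. The features, blend weights, and sample sizes are invented, and the real pipeline, with its lasso feature selection and separate validation and holdout sets, is considerably richer.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the cataract surgery cohort: each row is a surgery,
# each column a candidate risk factor, y = developed CME within 90 days.
X, y = make_classification(n_samples=5000, n_features=29, n_informative=8,
                           weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
enet = LogisticRegression(penalty="elasticnet", solver="saga", l1_ratio=0.5,
                          C=1.0, max_iter=5000).fit(X_train, y_train)
baseline = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# Simple blender: average the two models' predicted probabilities.
p_blend = 0.5 * rf.predict_proba(X_test)[:, 1] + 0.5 * enet.predict_proba(X_test)[:, 1]
p_base = baseline.predict_proba(X_test)[:, 1]

print("blended RF + elastic net AUC:", round(roc_auc_score(y_test, p_blend), 3))
print("plain logistic regression AUC:", round(roc_auc_score(y_test, p_base), 3))
```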
So every box represents a patient undergoing cataract surgery, and you can click on that box and see the different factors that predict his or her risk of developing CME. This would be a nice way to have discussions with patients; maybe if you're at high risk, you may want to prophylactically use NSAIDs or something like that, but it would help with decision making. So in conclusion: the digitization of personal data, and advances in linking the data, computing power, storage, and data analytics, have really made it easier to take advantage of big data in ophthalmology. Repositories like SOURCE will provide researchers with access to more granular clinical data than ever before. Linking EHR data to other pots of data will expand the sorts of projects and analyses we can do. Collaboration across sites is key; there's strength in numbers, and it enables us to enhance the diversity of the patients we're able to study, study uncommon conditions, and bring these tools to researchers throughout the country. And then finally, we need to develop approaches that are patient-centric, that facilitate shared decision making, and that can easily be integrated into our busy clinical practice settings. So I'll stop there; there's time for a few questions, and I understand if people need to leave too. So, yeah, I appreciate you bringing up Moneyball. You know, I kind of hear Moneyball in everything nowadays. But what Billy Beane found in that whole market is that we undervalued and overvalued incorrectly. You know, if someone looked like a baseball player, they got paid more, they got drafted early, versus the kind of odd-looking folks that had a higher on-base percentage, and no one cared about on-base percentage. So as you've gone now through glaucoma in detail, are there things you think that we really miss, as far as overvaluing or undervaluing metrics, with the ultimate question of what is their likelihood of progression? Yeah, I think we haven't fully figured that out. I think for so many of us, myself included, you know, eye pressure is such a big driver of our decision making. And then sometimes patients get referred to me where, you know, the patient hasn't even had a visual field in three years, and the referring doc is looking at the eye pressure saying this patient needs surgery or something like that. So I think in glaucoma we're able to capture lots of information from OCTs and visual fields, and we could use some better visual function outcomes. We haven't really fully worked out what are, in my opinion, the best metrics to identify stability. So lots of opportunities for folks to develop that now. Did you intervene for whoever the surgeons were at the high range, or look and see what was going on? Were they all doing complex cases? Were they resident cases? Were they all like disaster-type cases? You know, what else, other than the surgeon? They were all surgeons who did not have high volumes. They weren't more complex cases. I mean, we had complexity captured with the complex billing code. We also had factors like iris hooks and things like that. So I suspect that they were probably just taking longer to do the case, with more manipulations, and that probably is contributing to the CME. But we haven't fully explored that. And, you know, we're mindful that we're masked to the identities of all the surgeons.
You know, it may be possible to do some sort of quality improvement thing where you work with a given department and everyone buys into it. And, you know, all the surgeons can get a report card and see where they are, and maybe the surgeons who have the lowest rates want to talk to everyone else and say, here's how I do things, and maybe coach the others. We haven't really tried to do any of that right now. We're just trying to build the models and identify what the characteristics are. Any other questions? Got one in the chat: can we use AI to do our billing? It seems like there are many inaccuracies in our current methods, how we measure work, et cetera. Yeah, I think that's a great question. I mean, I think that if we're taking the time to document in our clinical notes what kind of conditions patients have, it would seem that someone can build algorithms to locate the various pathology and flag it. Instead of us having to type in, you know, that the patient has these six conditions, they can be found in the note, and that might make it easier for us to bill for more conditions. I mean, I think there are some companies out there that are leveraging machine learning, maybe not in our space yet, but you really have to be careful and make sure that the algorithms work well, because I think there's at least one company whose algorithm got them sued by CMS, because they were reporting a whole bunch of stuff to get higher reimbursement, and when the algorithm wasn't working well and the patients didn't have some of the conditions, then it was actually like false claims and whatnot. So I'm sure there are opportunities; it just needs to be done in a very rigorous way. Anything else? Well, thanks again for having me. I'm doing a research seminar later today where I'll be talking more about some of the details. If you're not busy seeing patients, you can learn more about that area.