Hello everyone, and welcome to our next Enterprise Data World session, a case study in data quality and information analysis, which will be presented by Michael Scofield of Loma Linda University. All audience members are muted during these sessions, so please submit your questions in the Q&A on the right of the screen, and our speaker will respond to as many questions as possible at the end of the talk. Please note that there is a form linked at the bottom of the page titled EDW Conference Session Survey. This is where you can submit session feedback, and we encourage you to do so. So let's begin our presentation now. Thank you, and welcome, Michael. Good morning to people in California and good afternoon to people in Boston. I hope you can see my second slide. This is a little bit about my background, and that's my official photograph with the School of Nursing, where I co-teach. My bachelor's was in physics and my MBA was from UCLA, a long time ago. I've been on the speaking circuit in data management for about 23 years and have been very fortunate to have the opportunity to speak all over the country, twice in Australia, and seven times in London. I get to go to museums and somebody else pays the airfare. I've been using this model, the data-to-understanding supply chain, which starts with reality, and then facts and data describing reality, which we convert into information. That information can be expressed, but as John Ladley pointed out (and he wouldn't use these words), not all expression effectively communicates, and that communication isn't really complete until there is understanding of the meaning in the mind of the audience, the intended listener or reader. 
So facts and data are observations of reality, and the kinds of observations that we have in public health include the human senses and devices like clinical thermometers, and lab work on blood that gives us blood contents, and things like that. That's the subject of raw data quality, which I will not go into today; that's a whole hour topic. Actually, it's a three-hour workshop. In the pandemic, the discrete events that we observe are tests and hospital admissions and things like that. There are also patients' lingering symptoms, which we are going to talk about later. Converting data to information requires calculations, aggregates, ratios, rates of change, and things like that, and one of the things in COVID-19 is that when you're looking at a state total for any particular day, you've got to ask: did all the counties report this morning, or are we understating what's really going on in the state? And so this is information quality, which is different from raw data quality, another one-hour topic. I eat lunch in the hospital across the street from the School of Nursing with my friends, and I saw this t-shirt come through the checkout line that says: without a clinical lab, you're only guessing. I like that. It's basically a message to some of the physicians who diagnose more intuitively. Now, a couple of disclaimers. I have no formal certification in epidemiology or virology. I'm not a physician, and I'm kind of a neophyte in mathematical statistics, but for a while I worked at the US Army Medical Research Institute of Infectious Diseases and learned a lot about viruses there. I've been in the data management space for 30 years, and particularly in data quality assessment for 25 years, and this topic is a moving target. It changes over time, but good science is that way. 
Always discovering something new to improve our understanding, and a lot of people in America don't understand that science is going to get smarter, and so the message may change. I reserve the right to be smarter and better informed tomorrow than I am today. Now, we have had some experience with the 1918 flu. It came in multiple waves; we're seeing that today. It mutated over time. It killed selected age groups: children under 10, very different from today. 3% of all factory workers in the US died; 6% of all coal miners were killed; but it barely touched the elderly. Very different characteristics from what we're dealing with today. But one characteristic in common is that the more people are infected, the more opportunities the virus has to mutate into something more deadly. Now, to put this all into context, the Spanish flu in America killed about 679,000. In World War II, all the American deaths only amounted to 419,000, but with COVID-19 we're up to 569,000. That was what I saw this morning. So we already have 150,000 more dead in America due to COVID-19 than were killed in World War II. To put this in an international context, the United States has 4.2% of the world population. Back in February, we had 20% of the world's deaths. It's about the same today because Mexico and Brazil and India are seeing a large growth in deaths. Now, I know people who think that this is a hoax, that the whole pandemic is just a hoax. The New York City Department of Public Health conducted a study over a 52-day period in which they counted 32,000 deaths. Now, they know their territory, and they know their seasonally expected number of deaths, their baseline, for that same 52-day period was only 7,900, which meant that they had 24,000 deaths over baseline. And 57% of those were laboratory-confirmed COVID-19. Others, on the basis of symptoms, were probable COVID-19, and another 22% needed further study. So this kind of gives the lie to the propaganda that it's all a hoax. 
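The New York City excess-death arithmetic can be sketched in a few lines (figures as quoted in the talk; an illustrative back-of-the-envelope check, not the health department's actual methodology):

```python
# Excess deaths over baseline for the 52-day NYC study period,
# using the figures quoted above. Illustrative only.
observed_deaths = 32_000   # deaths counted over the 52 days
baseline_deaths = 7_900    # seasonally expected deaths, same period
excess_deaths = observed_deaths - baseline_deaths

lab_confirmed = round(excess_deaths * 0.57)   # 57% laboratory-confirmed COVID-19

print(excess_deaths)   # 24100 deaths over baseline
print(lab_confirmed)   # about 13,700 lab-confirmed
```

The point of the comparison is that the baseline comes from the department's own historical data for the same calendar window, so the excess cannot be explained away as normal seasonal mortality.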
They had to bring in extra trailers, refrigerated trailers, to hold all the bodies. Now, a couple of weeks ago we got this chart from the CDC giving us the total situation in the United States during the calendar year 2020. The baseline for all causes of death is relatively flat; it goes down a little bit when you get into the spring and summer. And so the number of deaths in excess of the baseline is very visible in this chart. The first COVID-19 spike, particularly in New York State, was there in April, and then it started to go up again after Thanksgiving. So COVID-19 is now considered the third largest cause of death, at least in 2020. In January and February, it was the largest, the most frequent, cause of death in the United States. Now, in these charts of the number of cases and the number of deaths in the United States total, the solid line is the 7-day moving average. I find that much more useful than seeing the volatility of the weekday reporting and the non-reporting over weekends. In California, as of March 31, this is what the shape was. So we were really hammered with deaths after the holiday season. And as you can see, there's a bit of a lag between the peak of the cases of infection and the peak of the deaths. So that's important to know: the deaths will not appear immediately as a metric. Looking again at this internationally, South Korea had 2.8 deaths per 100,000 people, while the United States death rate per capita is 51 times that of South Korea. So at that rate, if the US had responded, if the government and the authorities had responded, as effectively as the South Korean government did, there would be today maybe 12 or 13,000 Americans dead, not 550,000. But the South Korean culture is different from the American political landscape. A more useful metric than just the number of cases is the mortality: if infected, what are the odds of death? And currently, as of this morning, 1.78% of all the people who are infected die. 
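That 1.78% is a case-fatality calculation: deaths divided by confirmed infections. A minimal sketch (the case count below is a hypothetical figure chosen for illustration, not a number from the talk):

```python
def case_fatality_rate(deaths: int, confirmed_cases: int) -> float:
    """Percent of confirmed cases that end in death."""
    return 100.0 * deaths / confirmed_cases

# Hypothetical illustration: ~569,000 deaths against an assumed
# 32 million confirmed US cases lands near the rate quoted above.
print(f"{case_fatality_rate(569_000, 32_000_000):.2f}%")   # 1.78%
```

The result moves whenever the denominator does, which is exactly why undercounted infections distort the rate.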
If the infections are undercounted, and this is one of the vicissitudes of the statistics here, then the true denominator is larger and the death rate would go down. But as of this morning, using this morning's numbers, this is where we get the 1.78% figure. It was 1.81% a month ago, and it's 2.11% worldwide. And the reason for that is that much of the world does not have as good an acute-care hospital system as the United States does. But that figure is really too simplistic, because the actual death rate may differ for subsections of the population. So mortality varies by subgroup: it varies by gender, ethnicity, age bracket, and whether the person has any latent health issues like diabetes, obesity, or hypertension. So if we were to publish statistics, we would have to expand that 1.81% from last month; we would have to be more specific, by gender and ethnicity, or by gender and preexisting condition, diabetes, cancer, obesity, or something like that. Now, there were some who said that COVID-19 was no more deadly than the seasonal flu. Well, the statistics don't bear that out. It's actually about 20 times more deadly than the seasonal flu. The World Health Organization estimated 3% to 4%, but that's over the entire world, with many countries having a lesser-quality response in terms of health care. This Lancet publication from the UK said that, if infected, seniors have a 5.6% chance of dying, whereas 20- to 49-year-olds have a very low chance of death. It's still there, though. Minnesota Public Health gave us a histogram of the total number of cases. Now, this is cases, not deaths, by five-year age bracket. And the highest infection rate, not death rate but infection rate, is among 20- to 30-year-old people. Well, are they reckless and irresponsible? The ones I see surely are, but they're also more active in the economy, particularly in interfacing with customers. Now, this very gratuitous photograph is of me donating blood. 
And I do this every 56 days. They love to see me coming because my blood type is O-negative, CMV-negative. And what does CMV-negative mean? I have no antibodies to the cytomegalovirus. I have not been exposed, my immune system did not develop antibodies, and they don't want those antibodies in blood that they give to low-weight newborns. So I have to fill out a questionnaire about my behavior and my health history before I can go in and they stick the needle in. And it hurts; it's a big needle. But before they use my blood, they test for all these conditions and diseases before they can put it into the pool to use on real-life patients. Now, what is a virus? It's a protein shell surrounding some genetic material, DNA or RNA, but it is not alive, although our language sometimes belies this. It cannot reproduce or divide. It requires a living cell to replicate itself, and then it kills the cell. The virus enters through the membrane of the cell into the cytoplasm. The protein shell around the virus dissolves and releases the genetic material, which combines with the genetic material in both the cytoplasm and the nucleus, and the cell expels one or more, shall we say, daughter virions. And then the cell dies. So this is how the virus replicates. Gotta talk about antibodies. Antibodies are created by the amazing human immune system, which detects an invasive pathogen, a bacterium or virus, and creates proteins that attack it. So for every specific kind of virus, the immune system develops an antibody to attack and eliminate that virus. And sometimes we see the antibodies in a blood test when the virus is gone. That tells us that the person had been infected with a particular virus and the antibodies basically eliminated it from the bloodstream, but the antibodies are still there. And so that's an important way to measure. Now, in a typical life cycle, someone gets COVID-19 and they may get sick, and there's some delay before they show symptoms, or they may get sick and die. 
And like I said, that's 1.8% of infections. Or they may not show any symptoms at all. This occurs in some younger people, and it actually deludes them into thinking they're invulnerable. Now, in the period of time before they show symptoms, they're pre-symptomatic but still contagious. We have a test, the one where we put the swab up the nose. The victim doesn't know they have the virus unless they get tested, and so they're contagious and can spread it. That's why we want the face mask. So up until the time that symptoms are visible, we use the antigen test. After symptoms are visible, we use the antibody tests; we use blood tests to detect the antibodies. Once they have the antibodies, are they immune from further infection? That's still being studied. The Imperial College London study said perhaps 90%, but otherwise these people could get reinfected. And secondary infections have happened. We don't know why the antibodies didn't attack the secondary infection, or whether they had just faded away. And so this is a major unknown, and it undermines our achieving the goal of herd immunity. Did the virus mutate? We don't know without doing genetic sequencing, and that test is a little bit more expensive. Now, there's another life cycle. A friend of mine works as an ICU nurse, and boy, is she stressed out. She had a patient who came in, was in the ICU on a ventilator for a while, somehow recovered, went home, and three months later died from organ failure. And this is being seen more and more often. So even if you walk out of the hospital after a severe infection, you may not be out of the woods. Now, the original tests, putting the swab up your nose, had a quality issue. In a perfect world, if the virus is present, the test should be positive. If the virus is absent, the test should be negative. But there are false positives, depending on the kind of test. And what's more worrisome is that there are false negatives. These people think they are okay, but they're not. 
They go around spreading the virus. I love this photograph. We would not see this today. But I saw an excellent video on the JAMA website where Michael Osterholm, one of the experts in this, said that the rapid detection test used at the White House, back six months ago, nine months ago, gives quick results but a high rate of false negatives. So it's not very reliable in telling us what's going on. Now, when infected, the virion count has a curve something like this. And the antibody count kicks in a little bit later, when there are enough virions to trigger the human immune system. These antibodies eventually kill off the virions, and that count will eventually go to zero. Test sensitivity is an issue. For example, with that virion count curve looking like that, a low-sensitivity test with quick results has a very high threshold; it needs a lot of virions to trigger and give a positive test. So in the period of time before the virion count is high enough, it's going to give false negatives. Whereas a high-sensitivity test has a lower threshold, but still, for the first two days it's going to give false negatives. So no test is absolutely perfect. From the start of the pandemic, we heard a lot about cases, and these are infections. And I found myself yelling at the TV screen: what was the denominator? How many tests did you do to yield that number of cases? So to illustrate that problem, let's just say we had three different groups that we ran tests on. We got 15 positives among patients with symptoms. Then we got 40 positives in a group of at-risk hospital workers; if you remember back in April and May, those were the people on whom we were doing the testing. And then we had voluntary drive-through testing; let's just say we got 20. So in the county, we have a total of 75 cases that are reported up the data chain to the state health department and the CDC. Well, I want to know what the sample size is. 
And among the patients with symptoms, the sample yielded a positivity rate of 75%. In the at-risk hospital workers, we had 8%. And in the drive-through testing of self-selected people, it was only 2.5%. So in reality, it's the 2.5% rate that I would consider more significant. The ideal situation is for public health officials to conduct a random sample of the community. Let's say we did 2,000 tests and we got 25 positives. That's a 1.25% rate of positivity. If we multiply that times the half million people in the county, we would estimate that there are 6,250 people with infections countywide. Unfortunately, we don't know who the other 6,000-plus people are. And they don't know who they are; they don't know they're infected and contagious. And then there's the delay of data gathering and aggregation. The funeral homes and the hospitals report to the county health department; there are 3,000 counties doing that in the U.S. That may be reported to the state health department, but it also may be reported to the CDC. The CDC passes its numbers to Johns Hopkins, and for many months the news media went to the Johns Hopkins website to get their statistics. So this is what I call the data-to-information supply chain. On the left, we have raw data. On the right end, we have meaningful information. Gotta talk about co-morbidity: a medical condition existing simultaneously but independently with another condition in a patient. And so, for you data modelers, cause of death is a multi-valued attribute. If we look at a certificate of death, and this one comes from the state of Utah, there is provision for three different causes of death. They can be chained together or they can be simultaneous. And so when somebody dies and they are declared dead, either by an ambulance driver or a physician, there may or may not be an autopsy on the body, which results in blood tests and tissue examination. 
And eventually a medical examiner or coroner will finish the tests and fill out a death certificate. But there could be some delay in that death certificate, and that's why we have some delay in reporting the number of deaths to the CDC. So if we have a situation where COVID-19 was clearly one of the causes of death for a patient who had these pre-existing conditions, then an argument ensues, and here's where it gets political: should this be included in the COVID-19 death count? Some would argue no, they would have died anyway. Others would argue yes, they could have lived longer without COVID-19. And that 'no' argument is used as an excuse by some politicians for underreporting or underestimating the death total. So basically, you cannot manage a public health situation without metrics. And we test for two reasons: on the macro scale, to get the big picture and to conduct public health management; on the micro scale, to alert the individual who may be contagious and advise quarantine and contact tracing. Both reasons save lives. Early on we heard about flattening the curve, and this was one of the reasons why we wanted to shut down the economy. Flattening the curve doesn't necessarily reduce the total number of deaths; getting everybody vaccinated does. With flattening the curve, the areas under the two curves are the same; it just spreads out the workload for the hospital and regulates the number of cases so they don't overwhelm the emergency room or the ICU. What would reduce the total number of deaths? High-volume random testing, done frequently and of course subsidized; I think we're pretty much on top of that. Rigorous and aggressive contact tracing; I haven't heard much chatter about that, not sure why. Selective quarantine; you can tell someone they're infected, but will they quarantine? People who deny the existence of the virus will probably not quarantine, and will go ahead and infect and possibly kill other people. 
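The flattening-the-curve point, same area under the curve but a lower peak, can be illustrated numerically. This is a hypothetical bell-shaped curve used purely as a shape illustration, not an epidemiological model:

```python
import math

def daily_cases(day: int, peak_day: int, spread: float, total: float) -> float:
    """A crude bell-shaped daily-case curve; illustrative, not epidemiology."""
    return (total / (spread * math.sqrt(2 * math.pi))
            * math.exp(-((day - peak_day) ** 2) / (2 * spread ** 2)))

days = range(200)
unmitigated = [daily_cases(d, peak_day=60, spread=10, total=100_000) for d in days]
flattened   = [daily_cases(d, peak_day=90, spread=25, total=100_000) for d in days]

# Totals (areas under the two curves) come out essentially equal...
print(round(sum(unmitigated)), round(sum(flattened)))
# ...but the flattened peak is far lower, so hospitals aren't overwhelmed.
print(round(max(unmitigated)), round(max(flattened)))
```

Same total caseload either way; only the peak daily load on the emergency room and ICU changes.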
Encouraging protective behavior. Here is a man who would have difficulty social distancing. When I spoke at the DAMA conference in Austin, Texas, I took an afternoon to go to the LBJ library, and I saw this photo up on the third floor. I like it; this is typical of a very assertive politician. The stages of patient status and events can go something like this. A person can be tested. If they test negative, great. If they test positive, then they may or may not have symptoms. If they do show symptoms, and even some people who were never tested show symptoms, they may recover or they may have to go into the hospital. Some who are hospitalized may just have a mild case and walk out. Others have to go to the ICU. Some in the ICU recover, and some in the ICU die. We also discovered that there were a lot of elderly folks in New York who never left their apartments and just died there, but they had been infected. This raises all kinds of questions about public health management. How soon afterwards should we retest a person who tested negative? If they test positive, how do we monitor their symptoms? If they recover, the big question is: are they still infectious? Could they develop symptoms later, or could they become a long-hauler? It's the long-haulers that become a new problem. But first, when we use the word risk, or odds: there's risk of being infected. There's risk of getting sick. There's risk of infecting others. There's risk of being hospitalized. There's an even smaller risk of going on a ventilator; you do not want to do that. There's a risk of death. Each of these has a distinct percentage, along with long-haul symptoms. Before the vaccine, there was a certain percentage of risk on each of these. After the vaccine, hopefully, the percentage of risk on all of these should go lower. But when you encounter a new mutation, like the B.1.1.7 variant coming out of the UK, all these odds change. We have to express the risk in the context of the whole situation. 
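That chain of risks can be expressed as conditional probabilities multiplied together. All of the per-stage odds below are hypothetical placeholders, not figures from the talk:

```python
# Each stage's risk is conditional on reaching the previous stage,
# so the compounded risk is a running product. Hypothetical odds only.
stages = [
    ("infected",     0.10),  # assumed odds of being infected
    ("symptomatic",  0.60),  # assumed odds of symptoms, given infection
    ("hospitalized", 0.10),  # given symptoms
    ("icu",          0.25),  # given hospitalization
    ("death",        0.30),  # given ICU admission
]

cumulative = 1.0
for stage, conditional_odds in stages:
    cumulative *= conditional_odds
    print(f"{stage:>12}: {cumulative:.4%} of the population")
```

A new variant would change the individual factors, and the compounded odds along with them, which is why every mutation forces the statistics to be re-estimated.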
Now, the metrics reported in the press were just the number of cases and the number of deaths. And I've already said that cases without knowing the denominator are not as helpful as they should be. My ideal situation is to have metrics on every stage, and some states are doing this in their public health departments. Many of these metrics have a daily and a cumulative nature, and they could be used to calculate the trends and ratios and the odds of moving on to the next stage. In other words, the risk. But the problem is that capturing these additional metrics is work. It costs money for the health agencies, hospitals, and data integrators. And what motivates them to do this correctly? Here is a classical data quality issue: when humans are involved in capturing data points and facts, what motivates them to do it correctly? That's a whole hour-long topic. The trade-off between burden and benefit is something that gets argued over quite a bit. And then we have the long-haulers. Approximately 2.2% of the people who were hospitalized experience long-haul symptoms. And they're mostly young; they're mostly female, which introduces another cultural challenge for healthcare. There are over 100 kinds of lingering symptoms, and I'm not going to read all of these. 40 to 60% of hospitalized victims experience neurological symptoms. And 74% had persistent symptoms, fatigue, shortness of breath, up to 12 weeks after discharge. So how do we gather and post data on this phenomenon? And of course it's more complicated than this. Here I'll go, very briefly, very quickly, into some of the elements of the data model for a clinical system for a hospital. We have kernel entities, which are stable. We have episodes. And we have events. And we should understand the difference between these. Beyond this level of detail, proprietary EMR architectures differ. 
Illnesses are not mutually exclusive time-wise, so the patient can have multiple episodes of the same disease, or multiple diseases at the same time. They can be hospitalized in multiple episodes, but hospitalizations are generally mutually exclusive in time. So a patient can have two hospitalizations. There are distinct events for when they're diagnosed, admitted, sent home, readmitted. And within the hospitalization experience, there could be different unit assignments: as they get worse, they're moved up to the ICU. So we would see that unit assignment as a distinct kind of episode, and there are events marking the change from condition to condition. So the unit assignment is a sub-episode. And then the treatment is part of a regimen; there are treatment events that occur at a particular point in time. There are medical tests; they are generally events that occur at a particular point in time, as are diagnoses. So those are some of the essential subject entities in an EMR database. Which of the patient's medical events should be reported externally? Again, there's a trade-off between the value to the physicians and to public health management versus the cost of reporting. For many diseases, the typical trajectory was: you get well, the symptoms abate over time. And generally, most health professionals don't track that. Once you're discharged, they don't call you up a week later. At least my doctor doesn't call me up and say, how are you? It doesn't happen in this environment. But with long-haul lingering symptoms, a big question is how you digitize this curve or express it as a metric. And part of the problem, particularly with the cognitive fade in long-haul, is that there aren't many objective metrics for it. Yet the patient may be so sick that they can't go to work and they lose their job. And that's happening an awful lot. So can an EMR database contain patient conditions not observed or confirmed by a medical professional? 
That's kind of like crowdsourcing data. Are we willing to do that? There's no formalized name for the long-haul, no ICD-10 code for it. Hence, it's not recognized. This is a frustration: these people can't get state disability because there's no confirmed code for the long-haul symptoms. So the medical community must recognize it. And the shape of these symptoms can change; there are all kinds of shapes. How do we capture those shapes in a consistent way in the EMR database so that we can aggregate the characteristics for predictive purposes? So this creates some new database design implications. These are characteristics of COVID; I'm going to skip over this because I'm watching the clock. These are some of the questions; if we were in the same room, I'd throw these out and we could have a discussion. But I want to talk a little bit about the vaccine. There are issues of effectiveness. Does it prevent infections? Does it trigger antibodies to the threat? And do those antibodies stick around for a while? And is it safe? Are there minimal immediate side effects and no long-term consequences? A typical development life cycle for a vaccine takes a number of years. I want you particularly to look at phase two, where they do trials for safety and immunity. Typically, that takes two to three years. We are dealing with vaccines that were developed at warp speed. There are over 178 distinct vaccines under development, in five categories; I'm not going to go through that. But at the current time, of all the big pharma companies, these four have developed vaccines that have gotten emergency approval. There are all kinds of potential side effects, which is why you want a large sample in your trial, to catch the side effects that may have a very low frequency. You want the sample size large enough. And then there's the issue of safety. For example, when I was a kid, I got the polio vaccine. That's been around for 60 years. We have pretty much absolute confidence in the polio vaccine. 
There's been no sustained discussion to the contrary. Now, the efficacy of a vaccine for COVID-19 will rise a few days after the injection, so you still have to be careful and wear your mask and do all the distancing. And then the efficacy becomes flat, and may decline after a period of time. But we don't know, because it's too soon to tell; we have not had enough experience over time with these vaccines. So one way I would graph it is as a range of uncertainty that looks something like that. But we could have two different vaccines with two different characteristics of efficacy and lasting power. And when this occurs, and when we have this kind of data, and we don't have it yet, would we allow the patient or the consumer to choose between the two vaccines? Well, time will tell, but we can't wait. So, the future: the vaccine efficacy may fade, and we may have to have booster shots. The virus may mutate, and then all bets are off, because we're going to have to develop new vaccines. But fortunately, the manufacturing and technology and resources are in place. As for vaccine efficacy, what we do know about the Wuhan strain is that Pfizer is 95% and Moderna is 94%; at least those were the original statistics. But we don't know the efficacy against some of these other strains. Now, this is the campus of Loma Linda University. We have this very large hospital. We have 17,000 employees. We see a million patients a year. And for some of the employees, particularly the nurses of childbearing years, it has been tough to persuade them to get the shot. I had no hesitation once all my friends came back and seemed to be alive. And of course, being a health sciences university, the employees had priority. There are new risks. The original antibody can destroy the original version of COVID-19. But with a second version, a mutation, new cases are causing a surge in the UK. Will the antibodies we developed attack the mutation? Or a third mutation, will the antibodies attack that? 
That's why we have to continue to gather data. And the metrics get to be a little bit more complex: with each mutation, we have new statistics on symptom severity, reinfection, and the long-haul characteristics. Are we ready for the next pandemic? I don't think so. The public health infrastructure in America has been severely underfunded for the past few years, and so that's a major question. And what else could come down the pike? And I see we're at 42 minutes. We know about a lot of diseases, but there are cultural biases undermining data quality. I've worked in a number of organizations where, if the data doesn't confirm the executive's expectation, he may say, don't capture that metric, or don't report that metric, or they may cook the numbers. Now, here's an excellent example of politicians cooking the numbers. These are daily clusters of five colored bars, the five counties in the Atlanta metropolitan area, and they published this on the Georgia Department of Public Health website. You'd have to look carefully to notice that they sequenced the days not in chronological order, but in an order chosen to convey the impression that the number of cases was going down. This was a deliberate misrepresentation of reality, implying a downward trend. So the situation, the challenges: we have highly distributed data gathering and reporting. Inconsistent timing of aggregations leads to highly volatile metrics; we saw that. There are occasional ambiguities in the cause of death; we discussed that. There is retroactive collection of statistics: some stats are retroactively restated by some reporting mechanisms. And the mutation of the virus complicates testing. And so the key takeaways: we have to consider statistics carefully. The face value of what a reporter is saying may not be accurate. When cases are cited, we need to ask: what's the denominator? How big was the sample? And how representative is the sample? 
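The denominator question can be made concrete with the random-sample numbers from earlier in the talk (2,000 tests, 25 positives, a county of half a million):

```python
# Positivity from a random sample, extrapolated to the county,
# using the figures from the drive-through example above.
positives = 25
sample_size = 2_000
county_population = 500_000

positivity = positives / sample_size              # 0.0125, i.e. 1.25%
estimated_infections = positivity * county_population

print(f"{positivity:.2%}")          # 1.25%
print(int(estimated_infections))    # 6250 estimated infections countywide
```

The same 75 positive cases can mean very different things depending on which group, and how large a sample, produced them; only the random sample supports an extrapolation like this.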
In any aggregate of cases or deaths, we need to ask: are all the constituent parts reporting? In a state total, did a third of the counties not report this morning? We have to properly normalize statistics, such as showing the rate of deaths per 100,000 population. That allows us to compare large countries with small countries. And so that's basically the end, and I see we have six minutes for discussion. And I've got to figure out how to unshare my screen, unless you... Oh, you're okay, Michael. You got it. Thank you. Yep, we're good. Okay. So yeah, we do have a couple of questions that have come in. Thank you for that great presentation, by the way. The first question is: are there any current organized efforts out there to improve the quality of COVID-19 data, such as definitions, data models, timeliness, et cetera? Yeah, there are. In the previous hour in this conference, one speaker talked about what they're doing for the state of California Public Health Department, creating a dashboard to report all the metrics. And there are an increasing number of metrics: hospital admissions, ICU admissions. Our hospital across the street, what was it, five weeks ago, we had 230 COVID-19 patients. Right now, we're down to 20. That is an amazing volatility. But there are efforts. And I think the CDC is being taken more seriously now than it was three months ago, and they're being given the resources to improve the statistical reporting. I hope that answered the question. Yeah, that's great. Thank you. Let's see, what do we have? What percentage of the people do you think have the health literacy to understand many of the ideas you describe? And how can we improve the health literacy of the general public? Do you see why I used pictures? A lot of people don't understand this, and I think it's part of the dumbing down of America. But health literacy is a big issue, and that's worth talking about. I don't have a neat solution to it. Great. 
Then we've got... let me see. Do you think this pandemic will trigger a digital transformation in the pandemic management sector? I think it will now, yeah. Once the death rate sinks in, and there's still a lot of denial about it, or just ignoring the fact that you have bodies dropping all over, once that sinks in, I think politicians are going to be interested in better funding public health. And broad public health includes record keeping, tracking, and surveillance. We should have had people on the ground in China to give an early warning that something was coming, and that's a whole other topic, surveillance. We did have some people, but not as many as we needed. Great. Thank you. And we've got about one minute left, so we'll give you one more question. What do you think was the biggest lesson learned from this past year that we can prepare for before the next pandemic, in terms of gathering and defining data and some of the things you've talked about? Yeah. Well, first of all, testing. We know when people die, but until we had a high volume of testing, we didn't know when they were infectious. So that's an important piece of data gathering. But there's got to be more respect for science and respect for expertise, and we have to take this kind of thing seriously. And that respect has to grow in the politicians as well as the general public. I'm getting a little political there, but that's what I feel. Great. Thank you, Michael. So that's it for today. Thank you, Michael, for this great presentation, and thanks to all of our attendees for tuning in. We were up to, I think, over 30 or so at one point. It kind of goes up and down. Yeah, it kind of goes up and down. But I think we had a lot of good questions today. To everyone, please complete your conference session survey at the bottom of this page for this session. And the next session, actually the last session for the day, will start in about 10 minutes. 
And Michael, thank you very much, and you have a good rest of your day. Take care.