Hello everyone, I'm Doris McMillan, and welcome to the Data Assessment and Verification Project, commonly called DAVE, Satellite Broadcast. The goal of today's broadcast is to provide you, the viewing audience, with a better understanding of the purpose of DAVE. We'll start out with a welcome from CMS's leadership on the importance of this broadcast. Then we'll move into an introduction to DAVE, followed by a presentation on how the data is analyzed to support the clinical review processes. This will be followed by presentations on the clinical review process and a medical records walk-through. After these presentations, we'll provide you with an opportunity to ask our panel questions. The numbers for calling and faxing us with your questions will be given out a little later. Now that I've told you a little bit about what you can expect from today's program, let's get started with the first of our two official welcomes. First, let's hear from Steve Pellevitz, Director of the Survey and Certification Group. Steve? Hello, I'm Steve Pellevitz, Director of the Survey and Certification Group in the Center for Medicaid and State Operations here at CMS. Along with my colleague, Deb Taylor, who you will be hearing from shortly, I'd like to welcome you to the satellite broadcast where you will learn more about the data assessment and verification contract known around CMS as the DAVE. You'll be meeting many of the DAVE staff throughout the broadcast as they share with you information about the DAVE process that CMS is implementing to assure the quality of the MDS data used throughout our programs. Since the development and implementation of the MDS in the early 1990s, CMS, in conjunction with the state agency RAI coordinators, has conducted provider training about the MDS instrument and its use in the resident care planning process. The goal of all of our training and education efforts has been to assure that the MDS data which nursing homes collect accurately and consistently reflects resident status so that appropriate plans of care can be developed and monitored. Once the MDS was integrated into facility and state agency operations, CMS recognized the value of using this standardized assessment tool to support other essential program areas such as payment and quality monitoring. The MDS has become part of the Medicare payment system and, in some states, the Medicaid payment system. In the survey and certification program, we have implemented MDS-derived quality indicators that allow state agencies and facilities to continually monitor certain aspects of nursing home care and quality. And just recently, as part of the Nursing Home Quality Initiative, CMS began publishing quality measures for all nursing homes, derived from the MDS data, to assist consumers in making decisions about nursing home care. With our increasing reliance on the MDS data, CMS recognized the need to establish a centralized and coordinated data verification program. CMS awarded the DAVE contract to Computer Sciences Corporation in September 2001 to accomplish this task. As I mentioned earlier, the main objective of the DAVE program is to assure the accuracy and consistency of the MDS data. CMS is committed to implementing this program in an efficient and effective manner so that DAVE information can be used to help direct program operations, to help focus our MDS education activities, and to help support provider and data quality efforts. Thanks, Steve.
Next I'd like to have Deb Taylor, Acting Director of the Program Integrity Group in the Office of Financial Management, provide you with her perspective on the importance of today's broadcast. Deb? Good afternoon. I would like to join Steve in welcoming you to our presentation this afternoon about the DAVE project. As Steve said, accurate MDS data is of vital importance to us, especially, as you might expect, as it relates to Medicare payments. The main goal of program integrity is to pay claims correctly, that is, to pay the right amount to the right provider for the right service to the right beneficiary. We strive to safeguard the Medicare trust fund against claims that are inappropriately paid. We do this by making certain that payments are made according to Medicare rules. Under the prospective payment system, it is also important to validate the accuracy of MDS data since this data determines the amount paid to a facility. As you will hear during the presentation, the DAVE is looking at national MDS and associated Medicare claims data. These analyses will offer CMS insight as we consider future policy and program enhancements. As I said earlier, the goal of program integrity is to pay claims correctly, and we believe that an important way to make that happen is by educating providers. During the broadcast, you'll hear how the DAVE will support CMS's educational efforts. So you see there are several program areas in CMS that use MDS data. While our uses differ, we all rely on the accuracy of the data. We hope that through the DAVE project, together we can improve the accuracy of the assessment data and the quality of care in nursing homes and also ensure that Medicare payment for those services is appropriate. Thanks, Deb. And now that we've been officially welcomed to today's program, let's get started with our first presentation from Bob Goldrick, CMSO Liaison for the DAVE project, Computer Sciences Corporation. Bob, tell us about the famous DAVE and his role on the DAVE team. Well, Doris, I'm responsible for communications and liaison. I'll be explaining the objectives of the project and providing a brief list of terms that you will hear throughout the broadcast. We'll be saying MDS quite a few times. And of course, we all know that the MDS, or minimum data set, is assessment information that's collected for residents of nursing homes and recipients of care at skilled nursing facilities. Our project started with a beta test, a first phase in which we conducted reviews in just two states to get the information and experience we needed to expand to additional states. To begin the process, we used computer programs, or data protocols, to analyze national assessment data and Medicare claims. That enabled us to select providers and residents for review. I should point out that these selections did not necessarily mean that anything was wrong with the data, just that the assessment information warranted further review. Our nurse reviewers conduct both offsite reviews at our facilities in Hanover, Maryland, and onsite reviews at the nursing homes and skilled nursing facilities. Later in this broadcast, you'll be hearing terms such as retrospective reviews, which compare the medical record to the assessment; two-stage reviews, basically a reassessment of a resident, which includes a reconciliation process with the facility's staff; and the term discrepancies, meaning a difference between the information the facility entered on the MDS and the DAVE reviewers' findings.
Now that we have a few terms under our belt, I'd like to go into more detail as to our main objectives for the DAVE project. Our primary objective is to assess and improve the accuracy of assessment information completed by nursing homes and, later in the project, home health agencies. That one overriding goal is the basis for all our other activities. Those activities include reviewing assessment information and assessment practices, determining the causes of inaccurate data, and making recommendations to improve the overall assessment process. The DAVE project also supports important CMS responsibilities and initiatives: ensuring program integrity, that's accurate Medicare payments, safeguarding the health and safety of residents and patients, improving the quality of care, and evaluating and refining federal policies. The DAVE team does not function as auditors or inspectors. But if in the course of our activities we should notice something that requires the attention of fiscal intermediaries or state agencies, we would refer it to those entities. We recognize that education is the most important remedy for inaccuracies that can occur because of confusion over policies, turnover in staff, and other factors, and the DAVE project includes a significant educational and training component. Another key objective is to optimize coordination and minimize the burden on all parties involved in the assessment process. CMS has placed great emphasis on ensuring effective coordination with the states, intermediaries, and other organizations, and on avoiding duplicative activities and minimizing the burden on health care providers. We have heard CMS clearly and planned our communication mechanisms accordingly. I'm sure that keeps you busy, Bob, but tell us about the rest of the DAVE team and what skill sets it takes to accomplish these objectives. Sure, Doris, our team consists of a variety of different disciplines. There are information technology professionals who set up the databases and the systems we need. Then data analysts and statisticians review assessments and Medicare claims and point our nurse reviewers in the appropriate direction. They also analyze the results of the clinical reviews to determine where inaccuracies occur most often and what the impact is. Our reviewers are all registered nurses, and they go through a rigorous training and certification process. They also have ready access to experts in areas such as physical therapy, occupational and speech therapy, Medicare reimbursement principles, and other areas through the DAVE Technical Expert Panel. Now, I know we'll soon be hearing from a number of those people, Bob, but perhaps you could give us a summary of what's been accomplished to date. Certainly, Doris. Our accomplishments to date include completing a beta test in the states of Georgia and Indiana between May and December of last year, which served as a proof of concept for our analytical and clinical processes. During that period, we also established the necessary communication processes to ensure that state agencies, fiscal intermediaries, and CMS regional offices were aware of our provider reviews before they took place. After the beta test was completed, we analyzed the results of those reviews, and you'll be hearing a lot more about that in a few minutes. Our next step was to extend the review process to four additional states: Florida, Pennsylvania, Texas, and Washington.
We sometimes refer to these as transition states because they are a bridge between the beta test and a national review process. As of today, we're continuing to conduct and analyze the results of reviews in those states. In addition, we have begun to pilot something that's intended to help nursing homes in their internal quality assurance process. It's called the Paired Assessment Accuracy Report, or PAR, and it's the first stage in something we call provider feedback reporting. The basic idea is to enable all facilities, not just those we visit, to have access to information about potential discrepancies in their assessment data. We'll be covering that in the second half of this presentation. Finally, we are completing our proposal for expanding to a national review process, and we're developing plans for initiating home health assessment reviews. CMS will, of course, review and approve our proposals and plans prior to implementation. People will be hearing more about that in the months ahead. Yeah, coming soon to a theater near you. Sounds like the DAVE team is putting a lot of thought into the process. Absolutely, Doris. Okay. Well, there's still a lot of information, important information, to cover. So let's turn now to Steve Hines, the chief statistician of the DAVE team. Analytic activities play an important role in the DAVE contract. We've been tasked by CMS with answering questions that are important to upcoming program policy decisions. And we also perform analyses to guide and evaluate the review activities that Michelle and Catherine will describe later. In the time that I have, I'm going to first describe the resources available to the DAVE analytic team. Then I'll discuss each of the three major analytic tasks we're responsible for. By the end of my segment, you should have a good understanding of some of the things that we've been learning so far on DAVE, as well as what to expect in the future from DAVE analytic activity. The first analytic handout summarizes the resources available to the DAVE analytic team. Even though we've got a very talented analytic team working on this project, we're still dependent on people and data sources to be successful. Several groups of people are very important to us. One of those is the DAVE Technical Expert Panel. We refer to them as the TEP. It consists of nationally recognized experts in clinical issues affecting SNF and home health care. It also includes people who have played a large part in developing the MDS and OASIS tools. When we wonder why these tools work as they do, we've got people who can give us the history behind any item we have a question about. Beyond these groups, the TEP also includes people with strong analytic backgrounds who have been very helpful in developing the analytic plans needed to answer our questions. The TEP also includes representatives of contractors who have developed data sources that we use and approaches to identifying facilities that may have a higher rate of MDS discrepancies than most other providers. In addition to our Technical Expert Panel, the DAVE team has also worked with an MDS analytic work group. It includes representatives from CMS who oversee the DAVE project, as well as a number of members of the TEP. It also has representatives from fiscal intermediaries, state agencies, and the CMS regional offices.
While work group participants are interested in what we're learning through our analyses, they also have been extremely helpful in identifying possible explanations for the things that we're observing. While we work closely with the TEP, the work group, and others to define and interpret our research, the other major resources available to us are some remarkable sources of data. The DAVE team has access to a remarkable data set called the SNF-Stay file, which was developed under the Datapro contract. This file puts together claims and assessments that correspond to an individual's stay. By linking these sources of information together, it becomes much easier to see how individuals change during their SNF stay and to examine how long these stays last and how much they cost. As well as the SNF-Stay file, the DAVE team also has full access to the past 18 months of national claims data and the last three years' worth of all MDS assessments in the national repository. Having national claims data allows us to examine broader trends than the fiscal intermediaries can, since they're limited to the data for their providers. And because we have all assessments, we can examine trends related to the long-term care SNF population that is primarily paid for through Medicaid. The first handout also indicates that we have access to other information from CMS, including provider information in OSCAR, deficiency information, and other data that supports our analytic work. Now that I've overviewed the resources that support our analytic work, I'd like to overview our current status on the three major analytic activities we're responsible for. These activities are analyzing patterns and trends related to SNF care, supporting the operational activities of the project, things like drawing the samples that we request medical records from, and evaluating the results of our operational activities. Let's start with an overview of a few of the many analyses we've performed to look at patterns and trends. One trend we've been monitoring relates to which RUG categories are being used most frequently at the start of Medicare-paid stays. The second handout shows the prevalence of RUG categories across time. As you can see, the RH category is by far the most common RUG category that residents are placed in on their five-day assessment. If you look closely at the graph, you can also see that there was a slight dip in RH assessments and a slight increase in RM assessments in the period between March of 2000 and March of 2001. As many of you know, that period was one where there was a 20% add-on for the RM category, and it appears that this add-on resulted in a shift from the RH to the RM category, which ended when the 20% add-on went away. Since March of 2001, the percentage of five-day assessments in the RH category has continued to increase. Beyond understanding how providers respond to changes in the payment system, tracking the prevalence of RUG use is helpful to us as we prioritize focus areas for future work. After we examine an overall trend, like the prevalence of RUG category usage, we perform follow-up analyses to see whether the trend is affected by geography, facility, or resident characteristics. One variable that consistently affects analyses is facility type. As you know, hospital-based SNFs are quite different from freestanding SNFs. You also may know that the percentage of Medicare SNF care received in hospital-based units has been dropping substantially since PPS payment was introduced.
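To make the prevalence-trend idea concrete, here is a minimal sketch, in Python, of how a RUG-category prevalence trend like the one just described could be computed from assessment records. The record layout (the ard, reason, and rug_category fields) is a hypothetical assumption for illustration, not the DAVE team's actual data structure or code.

```python
from collections import Counter, defaultdict
from datetime import date

def quarter(d: date) -> str:
    """Label a date with its calendar quarter, e.g. '2000Q2'."""
    return f"{d.year}Q{(d.month - 1) // 3 + 1}"

def rug_prevalence(assessments):
    """assessments: iterable of dicts with 'ard' (assessment reference
    date), 'reason' ('5-day', '14-day', ...), and 'rug_category'.
    Returns, per quarter, the percentage of 5-day assessments that
    fall into each RUG category."""
    counts = defaultdict(Counter)            # quarter -> Counter of RUG codes
    for a in assessments:
        if a["reason"] == "5-day":           # restrict to 5-day assessments
            counts[quarter(a["ard"])][a["rug_category"]] += 1
    # Convert raw counts to within-quarter percentages.
    return {
        q: {rug: 100 * n / sum(c.values()) for rug, n in c.items()}
        for q, c in counts.items()
    }

# Tiny usage example with made-up records:
demo = [
    {"ard": date(2000, 4, 2), "reason": "5-day", "rug_category": "RH"},
    {"ard": date(2000, 5, 9), "reason": "5-day", "rug_category": "RM"},
]
print(rug_prevalence(demo))   # {'2000Q2': {'RH': 50.0, 'RM': 50.0}}
```

The same per-quarter percentages could then be split by facility type or state to support the follow-up analyses described next.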
The next handout shows the percentage of assessments in each of the rehab categories in 1999 and 2001. The results are separated out so you can see hospital-based and freestanding unit trends separately. In the hospital-based units, you can see that the percentage of assessments in the RH category has dropped some, while the percentage in the RM category has increased. On the other hand, freestanding units have had an increase in the use of the RH category, largely due to drops in the RU and RV groups. So many things are different between hospital-based and freestanding units. The types of residents in them differ. Nurse staffing ratios and salaries are also different. And there are very large differences in the average length of stay. As a result, when we develop approaches for identifying facilities for review activity, we're looking at these provider groups separately. Another trend that's of considerable interest to CMS is the placement of residents in the RH or RM category on their five-day assessment based on ordered therapy. You may have seen the GAO report that called attention to the increased reliance on ordered therapy. We looked at this trend beginning in 1999 through the first quarter of 2002. As your fourth handout shows, in the RH category, the percentage of five-day assessments where the resident had actually received enough therapy to meet the requirements for RH has dropped. At the start of 1999, about 35% of residents in the RH category on their five-day assessment had received RH levels of therapy at the time the assessment was completed. By the end of 2001, that percentage had dropped to under 25%. Not surprisingly, unrealized therapy was most common in the RHC group, since that group has the lowest level of physical functioning. We saw a very similar pattern in the RM group. And we've also looked to see whether residents who remain in the SNF long enough to get a 14-day assessment are actually getting the therapy that was used to justify putting them in their initial rehab category. That percentage also is dropping. We know that there are many legitimate reasons why some residents don't receive the therapy that they were originally projected to require. But CMS has asked us to continue monitoring this trend. And we are looking at records from some facilities where almost no one that is put in the RH or RM categories ever gets the therapy that was ordered for them. As well as looking at trends over time, we're also looking at patterns of care across the country. The fifth handout shows the average length of stay for Medicare-paid SNF stays in calendar year 2000. As you can see, there are some states with average stays that are twice as long as other states'. The West Coast states as a group tend to have the shortest lengths of stay. There are obviously many factors affecting length of stay, ranging from the resident population to access to home health services and Medicaid coverage policies, to name a few. But we're continuing to do analyses to better understand the reasons for what are very large differences in length of stay across the country. Several facility characteristics we've examined are clearly related to length of stay. As the chart on the left side of your next handout shows, freestanding SNFs tend to have lengths of stay that are about double those in hospital-based units, a pattern that's consistent across different RUG categories.
In the chart on the right-hand side, you see that the for-profit units also tend to have higher lengths of stay across RUG categories. In themselves, longer lengths of stay are neither good nor bad. In fact, one factor that will clearly affect length of stay is where residents go after their post-acute SNF stay ends. Your seventh slide provides information about the percentage of Medicare-paid SNF stays that end with the resident remaining in a SNF. The percentages vary regionally. The West Coast has among the lowest percentages, while the Northeast has some of the highest rates. If you'll recall, the West Coast also had the shortest lengths of stay. The combination of long lengths of stay with higher rates of discharge to a long-term care facility makes it fairly clear that what we're seeing is large regional differences in where most SNF residents were located prior to their post-acute stay. It appears that many more Medicare-paid SNF stays in the Northeast were for persons who were already nursing home residents. Although many things affect facilities' and residents' lengths of stay, one of the things that we're doing is looking to see whether residents with longer lengths of stay tended to have more days denied due to lack of medical necessity. Once we have that information, we'll be in a better position to identify some long lengths of stay that may require more attention. Hopefully, this gives you a taste of what we're currently doing to better understand patterns and trends in SNF care. While this was the major analytic focus at the start of the contract, it's now only one of many analytic tasks we're involved in. The second analytic task that I'll discuss now is how we've selected facilities and residents for medical record review. There are about 17,000 SNFs in the US, which means that it will be a while before we've involved all of them in DAVE review activities. But this also means that there's a need to pick the facilities that we will either request records from for review offsite or will select for an onsite review. As you've heard, some DAVE review activities are done offsite, while others occur in the facilities themselves. Since the approaches for selecting facilities are different for offsite and onsite review, I'll explain how we're handling selection for each review type. Offsite reviews are limited to stays that are paid for by Medicare. For the offsite reviews, we're currently selecting records using one of two general approaches. One approach is to randomly select stays for review. Our current plans call for at least 150 randomly selected stays in each state where we're performing reviews. As well as randomly selecting cases, we've also used a variety of targeting protocols to pick assessments that we believe may be more likely to contain discrepancies. One of the protocols we're using looks at logical inconsistencies between pairs of five- and 14-day assessments. For example, if a five-day assessment indicates the resident had pneumonia, it's unlikely that the person would still have pneumonia on their 14-day assessment. We picked some facilities for offsite review if they had a higher percentage of these kinds of inconsistencies. Once these facilities were picked, we picked specific medical records that contained these inconsistencies. As well as these logical inconsistencies, we also selected some records for review because they were extremely long stays in the RU or RV rehabilitation categories.
Fewer than 1% of RU and RV assessments remain in those categories for over 80 days. A third approach to targeting is based on whether residents within a facility tend to show improvement in functioning between their five- and 14-day assessments. There are certainly many cases where a resident's level of functioning may worsen, but it's quite rare for the average ADL score across all residents within a facility to show no improvement, or even decline, between five and 14 days. We're reviewing records from some facilities that show this pattern to see whether there are more discrepancies on ADL items or whether there's a higher prevalence of health and safety concerns. A final targeting approach that's being used at present examines records from facilities that have a high percentage of RH and RM assessments where the projected therapy was never realized. We've requested stays from these facilities where it doesn't appear that the ordered therapy was provided. We want to learn from these reviews whether these stays are more likely to have discrepancies and other potential concerns. By reviewing the records, we'll also be able to learn how much therapy persons who are discharged prior to a 14-day assessment receive while in the facility. Beyond these protocols that we're currently using on an exploratory basis, we've also started to use actual review results to develop new approaches to targeting facilities and records for review. Looking for MDS responses that are associated with more RUG changes may be a very effective way to develop protocols that are very accurate and efficient. So far, we've used non-random ways to select facilities for onsite review. One of the two general approaches that we've used was selection based on the same targeting protocols that I've just described for offsite review. But after the offsite reviews are completed, we're also using those results to select facilities for onsite review. After the assessments from a group of facilities have been reviewed offsite, we identify the facilities with the highest discrepancy rates on MDS items, the highest RUG change rates, and the highest rates of potential health and safety concerns. We also note facilities whose medical records lack documentation required to support medical necessity and other issues required for Medicare reimbursement. Providers doing most poorly on any of those dimensions are selected for onsite review. Beyond these two general approaches, I should mention that some facilities have failed to submit records requested for offsite review. Facilities that have done this are quite likely to be chosen for onsite review as well. While there may be some future changes in the details of how facilities are selected for offsite and onsite review, this should give you a sense of what we're currently doing and the logic we expect to continue to use as the project becomes national. One of the important things to stress is that selection for either offsite or onsite DAVE review does not mean that a facility has done anything inappropriate. If we don't review records from good facilities, we'll never know how much more accurate they are than facilities that may have lots of MDS inaccuracies. The last major analytic activity we're involved in entails looking at the results of our reviews to see what we're learning. This activity is still in progress, but I'll give you a sense of some of the general conclusions that we're reaching.
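Before turning to those results, here is a minimal sketch, in Python, of two of the targeting protocols just described: flagging extremely long RU/RV rehabilitation stays, and flagging facilities whose average ADL score shows no improvement between residents' five- and 14-day assessments. The field names and flat record layout are assumptions for illustration only, not the DAVE team's actual protocol code.

```python
def long_rehab_stay(stay) -> bool:
    """stay: dict with 'rug_category' (e.g. 'RUC') and 'length_of_stay'
    in days. Fewer than 1% of RU/RV stays exceed 80 days, so these
    stays are candidates for record review."""
    return stay["rug_category"][:2] in ("RU", "RV") and stay["length_of_stay"] > 80

def facility_adl_stagnant(pairs) -> bool:
    """pairs: list of (adl_score_5day, adl_score_14day) tuples for one
    facility's residents. On the MDS, a lower ADL score means better
    functioning, so flag the facility if residents show no average
    improvement (or an average decline) between the two assessments."""
    if not pairs:
        return False
    avg_change = sum(day14 - day5 for day5, day14 in pairs) / len(pairs)
    return avg_change >= 0    # no improvement on average, or decline

# Example: three residents whose scores did not improve on average.
print(facility_adl_stagnant([(12, 12), (10, 11), (14, 13)]))  # True
```

In practice, flags like these would be tabulated per facility, and the facilities with the highest flag rates selected for offsite or onsite review, consistent with the selection logic described above.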
Your eighth analytic handout lists the 15 MDS items with the highest discrepancy rates observed in offsite review. As you can see, items in sections P and O were among those with the highest discrepancy rates. Keep in mind that these discrepancies were for post-acute assessments in just two states. But if you were interested in picking some items for more careful scrutiny, these would certainly be worth looking at. Onsite reviews were of two different types. The onsite retrospective reviews use a procedure quite similar to the offsite review procedure. However, most of the onsite reviews were for long-term care residents. And during the onsite retrospective reviews, it was possible to track down documentation that the facility didn't provide initially. The next handout lists common discrepancies in onsite retrospective reviews of Medicaid residents. While there's a good deal of overlap, it's clear that section G has some of the items where discrepancies are most common. The other type of onsite review involved a reassessment of a resident that the facility had assessed within the past 14 days. Because these reviews actually were based on observation of the resident, it's not surprising that the list of high-discrepancy items is somewhat different. One of the things you can see on your final handout is that the two pain items are ones with a high discrepancy prevalence. That's an important finding because it suggests that pain-related issues may only be detected through reassessments of residents in an onsite review. Many analyses are still being performed to better understand the types of residents, assessments, and facility characteristics that may be associated with the particular discrepancies that are being observed in offsite and onsite review. We'll report on those in the future, but this overview should give you a general understanding of what we've learned so far and what we're still trying to find out. Thanks, Steve. And next, we're going to hear from Michelle McDonnell, the Onsite Clinical Review Director from Joint Commission Resources. Michelle? Thank you, Doris. The purpose of the clinical review activity is to verify the accuracy of the MDS assessment data, capture potential health and safety concerns, and identify payment policy vulnerabilities. Clinical review activities occur in two different locations: offsite in Hanover, Maryland, and onsite at a nursing home. As Bob stated earlier, the clinical review activities include two types of reviews, a retrospective medical record review and a two-stage verification. Both the offsite and onsite teams perform the retrospective reviews. Only the onsite team performs a two-stage review. We will discuss each one of these reviews in more detail throughout the program. All right, what are the characteristics and qualifications of the DAVE Clinical Review Team? The Clinical Review Team consists of registered nurses, all of whom have extensive long-term care experience, either as a director of nursing, a nursing home administrator, or an MDS coordinator. The team is highly skilled and proficient in all aspects of the MDS. In addition, the clinical team has access to a panel of technical experts. You may remember Steve referring to them as TEP members. These TEP members include physical therapists, occupational therapists, and speech therapists. These experts are available to answer questions or clarify an issue that the team may have at any given time. Every reviewer must receive and successfully complete a CMS-approved training program.
This stringent training program includes MDS coding, claims review, Medicare regulations, coverage guidelines, and long-term care knowledge. So we now know the makeup of the Clinical Review Team. How does a facility get selected, and then what's involved with the medical record request? Well, to answer those questions, here is a video of Catherine Schulke, Director of the Offsite Review Team, who will walk us through the management of the medical record request process. The management of medical records is the cornerstone of the DAVE offsite review project. Accuracy, accountability, and confidentiality are the team's guiding principles concerning the medical records we receive. The first step in the record request process is the selection of facilities and residents for review. This selection list is generated by a team of analysts located in Hanover, Maryland. The list is transferred directly into the DAVE Medical Records Tracking System, which is password protected and accessible to DAVE team members only. Upon receipt of the list, the medical records technicians begin the process of requesting records from the selected facilities. The medical records technicians compile a request package for each facility that consists of a request letter from CMS and the Offsite Review Manager, a composite list of residents for whom records are requested, including the HIC number and the claim period, and a bar-coded tracking sheet for each record. We also send a list of exactly what documents need to be included from the medical record. The request packages are sent via FedEx priority overnight delivery with the stipulation that somebody on the receiving end must sign for the package before it can be delivered. The facility has 30 calendar days, not business days, from the receipt of our request to submit the records. The medical records technicians prepare for the receipt of records by designating shelf space for new arrivals and creating a file folder for each facility. All documentation of our contact with the facility is stored in this folder. Keeping this documentation helps us to be accountable for precisely what we have requested and when. These facility folders are stored in alphabetical order in our secure medical records room. The last step in preparation for the arrival of the records is to update the medical records report with the new facility names and the date the requests were sent. The medical records report is a detailed Excel spreadsheet that shows its viewers at a glance the status of each facility in the process of receipt, verification, and review. It is updated daily by the medical records technicians and reviewed weekly by the DAVE Administrative Coordinator and the Offsite Review Manager. As signature and delivery confirmation emails arrive from FedEx, the medical records report spreadsheet is updated accordingly to show which facilities have met the 30-day deadline and which have not. Records not received at the end of 30 days are reported to the Offsite Review Manager. She calls the facility to find out why records have not been sent. Upon arrival at the DAVE Hanover location, medical records are processed in a manner that ensures confidentiality and security. Packages from facilities responding to our request letters arrive at the front desk. They are delivered unopened to the medical records room. This room is locked at all times, and its doors can only be opened by key card. Admittance is restricted to medical records technicians, review staff, and authorized DAVE team members.
Even amongst the small number of authorized personnel, access is limited. Only medical records staff are allowed in the shelving area of stored medical records. All others must request specific records from the technicians. In this secure environment, the newly received packages are immediately stamped with the date of arrival. Records received from the facilities are verified for all requested documentation. To increase the efficiency of the review work, the medical records technicians always organize the records in the same order. If there is no missing documentation, the medical records technician files the record on the shelf in numerical order. If a record is found to be missing documentation, the specific missing documentation is noted on the daily verification log. The record is entered into the tracking system as missing documentation. The medical records are now complete, organized, and ready for review. The review staff retrieves the records from the room and logs out each record on the medical records log. The reviewer takes the record to his or her desk and begins the review process. When the review is completed, the record is returned to the records room and logged in on the medical records log. Medical records technicians enter the data from the medical records log into the tracking system throughout the day. In this way, the medical records staff can accurately account for every record at all times. The medical records report is also updated throughout the day, enabling the DAVE team to constantly track its steady progress towards the completion of all reviews. Well, that was an interesting look into the management of the medical record request process. Michelle, tell us how you complete a retrospective medical record review. Well, remember we stated earlier that this review approach is used by both the offsite and onsite review teams, and its purpose is to evaluate MDS assessments for data accuracy as it relates to payment, quality of care, and the protection of the health and safety of beneficiaries in long-term care facilities. This process consists of a medical record review that identifies discrepancies between the MDS item responses and documentation found in the medical record. But how does the offsite team gain access to a facility's MDS assessment? Well, all the assessments that a facility submits monthly are stored in a national repository. Those assessments are then imported into a data entry software tool called DAVE QC. Okay, this is the first time I've heard of the DAVE QC tool. What exactly is this tool? DAVE QC is the primary data collection tool used by all DAVE review staff. The team reviews the facility's MDS and looks for discrepancies in each of the MDS sections, utilizing the medical record and other clinical documentation. If the documentation in the medical record does not support the MDS coding, the reviewers use their clinical judgment to determine if the discrepancy was reasonable, considering the resident's condition, diagnosis, and treatment. If the reviewers identify a discrepancy, they enter this information into DAVE QC, documenting the date and location of where the information can be found. All right, can the onsite team do the same thing? No, unfortunately the onsite team has to wait until they arrive at the facility and manually input a facility's MDS assessment. Once we input the facility's MDS assessment data, we follow the same process as the offsite review team. So is that the only difference between the off- and onsite retrospective review? Just about.
The only other difference is the offsite team reviews Medicare residents only, while the onsite team reviews both Medicare and non-Medicare residents. Other than that, this concludes the retrospective medical record review process for both off- and onsite. And thanks for the description of the retrospective review. If you would, tell us now about the two-stage verification. Sure. In addition to the retrospective medical record review, the onsite review team conducts the two-stage verification review. This review is a comprehensive assessment that includes verifying the MDS data previously captured by the provider. In completing the review, the DAVE clinicians perform their own independent MDS assessment on a resident who's had an assessment completed by the facility within 14 days of the team coming on site. The 14-day timeframe is critical to minimize potential differences in the facility and review team observations that could be attributable to changes in the status of a resident requiring skilled care. In order to complete this review, the team will observe and interview residents, interview staff, and perform a medical record review. The review team does not look at or refer to the facility's MDS until they have data-entered their completed independent MDS assessment. Once the review team has the facility's MDS and their own independent MDS assessment entered in DAVE QC, they are ready to reconcile. This is where the discrepancies between the facility's MDS coding and the review team's are compared and discussed with one another. The reconciliation process provides an opportunity for the review team and the facility to discuss their reasons or rationale for coding a particular item. It is here that the review team can provide education and clarify any misconceptions or misunderstandings that the facility may have concerning a particular MDS item, thereby improving their data accuracy. A simulated reconciliation process will be demonstrated later in our broadcast. Well, Michelle, how long does the review team stay on site in order to complete the two-stage and retrospective reviews? The onsite visit lasts three days and involves two DAVE reviewers. The team arrives at the facility anywhere from 8:30 a.m. to 8:45 a.m. on day one and proceeds to an entrance conference. An exit conference is held on day three, which marks the end of the onsite review. During the exit conference, the review team shares their preliminary findings with the facility staff. Well, is the facility notified prior to the onsite DAVE team visit? Yes, all facilities receive notification three to four days prior to an onsite visit. A facility information packet is sent overnight via Federal Express, which includes an introductory letter from CMS, a letter outlining the onsite process, an onsite agenda for the three days, an information request checklist, and a DAVE information sheet. The onsite team lead will also follow up with the facility by telephone to verify receipt of the packet and answer any questions the facility may have concerning the upcoming visit. Michelle, there's been a lot of really good information. If you would, though, summarize quickly the benefits of both the onsite and the offsite reviews. Sure. Both review processes have unique characteristics. The onsite review process provides the following unique benefits. It enables the review team to interview facility staff regarding assessment practices within the facility and potential causes of data differences.
It includes the reconciliation process, in which the review team and the facility staff compare the results of their independent assessments. This serves as a powerful educational tool for increasing the facility staff's understanding of CMS policies and effective assessment practices. The onsite review provides the only opportunity to gather clinical review information for the Medicaid population and payer sources other than Medicare. It includes an entrance conference that provides an explanation of the review process as well as an outline for the visit, and an exit conference that affords the facility an opportunity to hear preliminary findings and identify ways to improve their assessment practices. It enables the review team to observe facility staff and residents and gather information that may not be documented or ascertainable in a retrospective offsite review, for example, inaccurate staging of wounds. Last but not least, it provides a first-hand opportunity to observe and gather data about facility billing practices, MDS software problems, and staffing and turnover trends. The offsite review has its own unique benefits. It includes a review of Medicare claims data and can detect discrepancies between billing data and assessment data. In the case of onsite reviews, the assessment is reviewed before the Medicare claim is submitted, and usually before the assessment itself has been sent to the MDS repository, which prevents any opportunity to review the submitted claim. The offsite review also identifies discrepancies between the assessment and the supporting medical record documentation, provides specific information fiscal intermediaries can use to ensure the accurate payment of claims, and enables a much higher volume of assessment reviews than the onsite review in the same period of time. Thanks a lot, Michelle. I look forward to hearing more about that fascinating DAVE QC tool and how it's going to bring all the reviews together. And that's going to conclude the first half of our program. Now it's time for you, the viewing audience, to ask our experts questions on what you've heard thus far. To call in, you should call us at 1-800-953-2233. If you'd like to fax in your questions, the number to fax is 410-786-0123. And our panel for this live Q&A session is going to consist of Steve Hines, Catherine Schulke, Heidi Gelzer, Jill Nicolaisen, and Michelle McDonnell. And while we're waiting for our first call, Jill, I've got a question for you. Where can we learn more about the DAVE project? Well, Doris, that's a question that we get a lot. The good news is that we now have a website that went up just this week. Right now, there's just some basic information about DAVE, but we'll be posting more information soon. The website is www.cms.hhs.gov/providers/psc/dave. All right, now you've told us where we can get general information. When will providers hear about the results of their DAVE reviews? We know that providers want to hear the results of their reviews, and we really appreciate their patience. The initial phase of the DAVE project has been one of data collection and analysis and protocol development in order to refine the DAVE approach. We believe that feedback to providers is a critical component of our educational initiatives, and we have put quite a bit of effort into developing and testing reports that will give meaningful information. We have recently piloted the reports with a few providers, and we're incorporating their feedback.
We will send the remaining reports out from both the onsite and offsite reviews before we move to the next phase of the DAVE activities, which is the implementation of a national program this fall. All right, and again, folks, this is the time where you get a chance to ask our panel of experts questions. We invite you to call the number, which is 1-800-953-2233. Again, if you're too shy to call and you'd like to fax us, dial 1-410-786-0123. And I have a little stack of questions here. Steve, let me ask you. You gave a lot of analytic information about the DAVE beta phase. What's been your focus during the transition phase? Well, we're continuing to analyze national data, looking at patterns and trends in claims data and assessment data. Beyond those things, our analytic activities are supporting the medical record requests that we're sending out in the four transition states. We're also starting to look at the results from the four transition states, but it's probably going to be several months before we've got a complete picture of what's going on there. Beyond those things, we're continuing to develop, refine, and test our targeting protocols for use in the national rollout and to do other things that are going to support the development and the improvement of the strategy that'll be used in the national rollout. Okay, thanks, Steve. Catherine, let me come to you. Your presentation outlined the offsite medical review process. If you would, tell us a little bit more about the information that you send to facilities when you request medical records. Sure. During the beta test period, we worked very closely with our providers to ensure that we could answer any questions that they had in regards to the packet that they had received. We then used their answers and their questions to revise and enhance our process for the next phase of record requests and have been able to improve our processes as we move forward. All right, thank you very much. We have a telephone call on the line. We have Laurie calling from Montana. Thank you for calling, Laurie. Please go ahead with your question. I have two questions. One would be, are there any negative repercussions for any discrepancies that may be found in the process of the reviews? And two, can facilities be proactive and invite an onsite review? Okay, who'd like to take that? Well, I think I will. If you don't mind, can you repeat the first question? She wanted to know if there were any negative repercussions for... I'm sorry, Laurie, you said negative repercussions for... Were there any negative repercussions for any of the discrepancies that may be found? Well, in terms of the discrepancies that are found, as Michelle's talked about, as part of the onsite review process there is a reconciliation process where the DAVE reviewer and the facility staff discuss differences in the items that are found, and they go back and forth and discuss that and learn more about the process. The DAVE reviewer can find out about additional information that maybe they didn't have access to. So we have, in the onsite process, an opportunity for back and forth between the facility staff and the DAVE reviewers. And in terms of that, we expect the facilities to use that information and incorporate it into their ongoing educational and quality audit practices. So it's a way to gain more information about the assessment process and learn about the tools and resources that are available to them.
So in terms of that, there's a lot of information that can be shared between the reviewer staff and the facility staff. So that's really the focus, and it's really to share information. Okay, and your second question, Laurie, was, can facilities be proactive? Yes, and invite an onsite review. Well, I think that's a welcome comment and question. I think we would certainly like that to be a part of our process. However, at this point right now, we're selecting the facilities where we're going to go. But in terms of that, as part of our efforts to get more information out on the website and more about what the DAVE is doing, we're hoping that all providers will be able to benefit from this process. I think we'd like to hear that providers would like to have a DAVE reviewer come and that it's a positive experience, and hopefully we can take that into consideration and move in that direction. All right, Laurie, thank you so much for your question. And again, we invite all of you who are watching to give us calls and faxes, if you so desire. The numbers are on your screen. Michelle, let me address this next question to you. The questioner says, I've got a question about the onsite review process. Once an onsite visit has been scheduled, can the provider request that the visit be rescheduled? Well, what happens at that point is, as I stated earlier, the facility is notified prior to our visit, and the team lead does make a phone call to them prior to coming. And during that timeframe, if there are some extenuating circumstances, we try very hard to work with the facilities. However, to date, we have not had a problem where we had to reschedule with a provider. When those questions came up to us, we were able to answer them, and we were always able to go into the facility, but we do try to work very closely with the facilities. Great, and I have another question here. Heidi, this question is for you. It says, is there any correlation between the DAVE onsite visit and the state agency surveys? Will there be a follow-up visit by the DAVE onsite team after their first visit? That's a good question, Doris, and one that we get quite often when we're telling people about the DAVE project. As Bob mentioned earlier, and as I think both Steve Pellevitz and Deb Taylor talked about, the DAVE contractor and the DAVE review process work in conjunction with and support the ongoing efforts of the state agencies and the FIs. In terms of the state agency operations, the state agency still conducts all of the surveys and inspections in order to evaluate a facility's compliance with the conditions of participation. That task and that function have not changed. However, as part of the DAVE process, we now have additional information that the DAVE reviewers can share with the state agencies so that together they can incorporate this information into their respective processes. So the state survey agency gains information from the DAVE, as well as the state survey agencies sharing information with the DAVE team as appropriate. We've also, as part of the DAVE process, established work groups with the state agencies and the FIs to help support this process so that we can create an integrated and coordinated approach. Okay, we have a fax here. This one's from Dana. Dana says, how soon will the DAVE review process come to home care agencies? Jill? I can take that. We anticipate probably sometime in early 2004. Okay, short, sweet, and to the point. Okay, we have another telephone call.
We have Judy calling from Virginia. Thank you for calling, Judy. Please go ahead. Okay, yes, I was wondering how the DAVE audits are going to interface with the local Medicaid auditors in the case-mix states. We currently have ongoing Medicaid audits. Is there going to be any interface between the two projects? Heidi? I'll take that. Again, another question that we get often, and it's another opportunity, I think, to share with you how we are trying to coordinate between Medicare and Medicaid. When we undertook this process to start this data verification contract, we realized that just within CMS, in terms of the Medicare programs, a lot of the programs were using the MDS, and we needed to have a way to avoid unnecessary duplication and audit activity in terms of payment reviews and quality of care reviews, et cetera. So we initiated the DAVE contract to have a coordinated and centralized approach. We clearly recognized and acknowledged that there were states that had their own audit programs to support the Medicaid payment systems that Steve Pellevitz referenced earlier. And basically what we're doing, as we're doing with the state agencies and the FIs, is working together with the state Medicaid agencies to learn more about their audit programs so that, again, we can coordinate our efforts, share information to support the programs, and learn more about each other's activities so we can work together and not duplicate. Again, just as with the state agencies' functions and the FI functions, the state Medicaid agency still maintains and manages that program. So, again, the DAVE is just a supplement and a support to their ongoing efforts. Thank you so much for your question. I've got one other question here. Jill, let me address this question to you. Have you taken any money back from providers as a result of the reviews you've done, and will you be doing so? Well, Doris, as I mentioned earlier, the initial phase of the project has been a developmental one. During this time, we've collected a lot of data, tested review strategies and protocols, and we're conducting analyses of our findings and results. At this point, we have not finalized our analysis of the financial impact, and we're still considering our options. Once we implement a national program, yes, we will make payment adjustments based on the DAVE reviews, and that will include underpayments that have been identified as well as overpayments, and I also get that question a lot. Okay, I'm not surprised. We'd like to say thank you all for your questions. We're going to continue with the second part of our program, and now in the next half of the broadcast, you're going to see and hear presentations on the DAVE QC tool, the PAR report, communication with providers, state agencies, and fiscal intermediaries, and hints to providers on how to improve the process. So with all that said, let's take a look at a video dramatization regarding the DAVE QC tool. Okay, Susan, this is the part of the review process where we have an informal discussion to discover the rationale for the differences between your completed MDS and the one I completed. The intent of this discussion is to be educational for both of us. I need to know why you coded an item in a particular way, and then I will share with you why I coded an item in a particular way. I want you to feel free to challenge me and to ask questions during this process, okay? Sure. As you can see, I have your completed assessment in DAVE QC, and I have my completed assessment in DAVE QC.
The software application automatically identifies discrepancies between your responses and mine with a red 'No' in the match column. Wow, this is great. Let's look at our first discrepancy. It's E1A. You coded that Mrs. Jones did not have verbal expressions of distress. Yet the social service note dated April 10th, 2003, states that Mrs. Jones made comments like, 'I wish I were dead. Why did my loving Harold have to die before me? I just wish God would take me to him in heaven.' So, based on this information, can you tell me why you assessed that E1A should be coded zero? Oh, that's easy enough. That note was written on April 10th, 2003, and my ARD is May 5th, 2003. The look-back period is 14 days, and this behavior occurred before the 14-day look-back period, so I can't capture it. So what did you code? Okay, I understand now what happened. Actually, for this item, the look-back period is the last 30 days. So, since the comments were made during the last 30-day timeframe, this should be coded a one. Let's go to the manual so you can see the guidance for this item. Here we go, page 3-61, intent: to record the frequency of indicators observed in the last 30 days, irrespective of the assumed cause of the indicator or behavior. Well, I still think zero is correct because she only said it once, and in order to code a one, doesn't she have to exhibit the behavior five days a week? Actually, the coding for a one states 'indicator of this type exhibited up to five days a week.' It does not state it needs to occur five times in a row. Let's look at page 3-63. Under the clarifications, second diamond, last sentence: if codes zero or two do not reflect the resident's status, but the behavior occurred at least once, use code one. Oh, you know, I read that section many times and just got it in my head that this is a 14-day look-back item. Wow, now that I see it says the last 30 days and up to five days a week, well, I guess I have some other assessments where I might not have done it correctly. Well, now you see what the intent actually was, and you can correct yourself. Now that we have reviewed the manual, do you agree this is a one? Yes, it's a one. Okay, so I will reconcile to the QA value and enter a reason code; I will choose the QA value, and the reason code will be that the clinician did not correctly understand the item. I will enter a few notes on where I found my information in the medical record, as well as the dates. Good, let's go to the next item where we did not match: K1A and K1B. I coded these items as none of the above. Can you tell me how you came up with your answers? Well, the dietician, who is not here today, documented on her initial assessment that the resident had a chewing and swallowing problem. So I took the information from her records and therefore coded these problems as being present. Can you show me in the medical record where that particular documentation was located? I was not able to interview the dietician today to gather further information since she only works Mondays and Thursdays. Well, actually, that assessment's not in the medical record. It's kept in the dietician's logbook, which is kept on a shelf at the nurses' station, but I have it right here. Oh, I did not realize that. Could you show me that logbook? Sure, look right here. Thank you, you're right. Here it is; I will reconcile to your answer. I was not aware that this was kept separately, thank you. Yay, one for us. Actually, two for you: K1A and K1B. Well, this is a very important tool.
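For readers who want to picture the mechanics behind the dramatization, here is a minimal sketch, in Python, of the kind of item-by-item matching and reconciliation the DAVE QC tool performs. The data structures, field names, and reason-code wording are hypothetical illustrations; the actual DAVE QC software is a proprietary data entry tool whose internals are not described in this broadcast.

```python
from dataclasses import dataclass

@dataclass
class Discrepancy:
    item: str                      # MDS item, e.g. 'E1a'
    facility_value: object         # what the facility coded
    reviewer_value: object         # what the DAVE reviewer coded
    reconciled_value: object = None
    reason_code: str = ""          # e.g. 'clinician did not correctly understand item'
    notes: str = ""                # where supporting documentation was found

def find_discrepancies(facility_mds: dict, reviewer_mds: dict):
    """Return a Discrepancy (a 'No' in the match column) for every item
    where the facility's and reviewer's responses differ."""
    return [
        Discrepancy(item, facility_mds[item], reviewer_mds.get(item))
        for item in facility_mds
        if facility_mds[item] != reviewer_mds.get(item)
    ]

# The items from the dramatization: E1a reconciled to the reviewer's value,
# K1a/K1b ultimately reconciled to the facility's values.
facility = {"E1a": 0, "K1a": 1, "K1b": 1}
reviewer = {"E1a": 1, "K1a": 0, "K1b": 0}
for d in find_discrepancies(facility, reviewer):
    print(d.item, "match: No")     # prints E1a, K1a, K1b
```

The reconciliation step would then fill in reconciled_value, reason_code, and notes for each flagged item, exactly as the two nurses do in the dialogue above.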
Okay, so what do you think? Oscar? Emmy? Our stars for this video were Dr. Susan Jocelyn, Health Insurance Specialist in the Division of Nursing Homes at CMS, and of course Michelle McDonnell, whom we heard from earlier. All right, let's move on to the Paired Assessment Accuracy Report, also known as the PAR Report, presented by Terry Moore, Senior Associate from Abt Associates. Let's take a look. The purpose of the Paired Assessment Accuracy Report, or PAR, is to furnish needed information to nursing facilities about the accuracy of their MDS data. This provider feedback mechanism was created in order to empower facilities in the ongoing process of data accuracy improvement. Rather than introduce one more, perhaps onerous, level of scrutiny, oversight, or looking over your shoulder, we wanted to give you the necessary tools to examine your own data, find your own issues, if you have any, and improve your own processes for accurately assessing resident status and care needs. We hope that the PAR Report is the first of many types of data feedback reports you'll receive in the future, and we will keep you informed of these as they're developed. The Paired Assessment Accuracy Report is generated using the national MDS data repository. An analytic program reviews one year's worth of data and identifies pairs of sequential MDS assessments that meet certain trigger conditions. We presently generate reports using five- and 14-day MDS pairs and 90-day, or quarterly, pairs. This means that if the consecutive five-day and 14-day MDS for Mrs. Jones both meet the trigger condition, those two assessments will be flagged and reported on the PAR. It's important to understand that these reports can only flag potential data inaccuracies. Since we're not there to assess the resident or to review the medical record, we can't definitively call these triggered assessments errors. But our research indicates that, for certain clinical conditions, these triggered assessments are in error about 40% of the time. Let's walk through two examples of trigger conditions that would cause a pair of sequential MDS assessments to be reported on the PAR. The first example is pneumonia, MDS item I2E. This is a common diagnosis seen in the post-acute care population, but a diagnosis that one would expect to be resolved between a five-day and a 14-day MDS assessment. The data analytic algorithm looks for sequential five- and 14-day assessments that both code yes on pneumonia and reports these. The clinical rationale for this trigger is that one would expect pneumonia to be resolved on the 14-day MDS if it was coded on the five-day MDS, in part because this condition was likely treated in the hospital prior to nursing home admission. We recognize that there are many cases in which pneumonia correctly persists on the 14-day assessment, but facilities with a large number of residents who trigger on this condition may have a problem in the coding of this MDS item. The second example of a trigger condition for the PAR is bladder continence, MDS item H1B. Incontinence is a prevalent condition among chronic care residents and would be expected on sequential assessments for long-stay residents. For this trigger condition, the data analytic algorithm looks for consecutive quarterly assessments that each report cognitively impaired residents to be continent of bladder. So, for the subset of facility residents who are cognitively impaired, we look to see who remains continent, where H1B is coded zero, from one quarterly assessment to the next.
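The pairing logic just described — flag a resident when both assessments in a sequential pair carry the same trigger value — can be sketched in a few lines of Python. This is only an illustration under assumed field names and record layout, not the actual DAVE analytic program:

```python
# Illustrative sketch of a paired-assessment trigger; the record layout
# and field names are assumptions, not the actual DAVE analytic program.
def paired_trigger(pairs, item, value, restrict=None):
    """Flag assessment pairs where `item` equals `value` on both.

    `pairs` is an iterable of (first, second) assessment dicts for one
    resident; `restrict` optionally filters pairs, e.g., to the
    cognitively impaired subset used for the H1B trigger.
    """
    flagged = []
    for first, second in pairs:
        if restrict is not None and not restrict(first):
            continue
        if first.get(item) == value and second.get(item) == value:
            flagged.append((first, second))
    return flagged

# Hypothetical quarterly pair: a cognitively impaired resident who
# remains continent of bladder (H1B = 0) across consecutive quarterlies.
q1 = {"resident": "R7", "H1B": 0, "cognitively_impaired": True}
q2 = {"resident": "R7", "H1B": 0, "cognitively_impaired": True}

hits = paired_trigger([(q1, q2)], item="H1B", value=0,
                      restrict=lambda a: a["cognitively_impaired"])
print(len(hits))  # 1 -- this pair would be reported on the PAR
```

The same function applied with item "I2E" and value 1 over five- and 14-day pairs would express the pneumonia trigger.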
The rationale for targeting this condition is that the residents at greatest risk of developing bladder incontinence are long-stay residents with cognitive impairment. When we don't see incontinence develop over time, we trigger this condition as an indicator that the facility may have a problem with the assessment or coding of this MDS item. Now again, if your facility has a great process for assessing bladder continence and for the care and treatment of residents at risk of incontinence, your data may be fine. This is simply a marker intended to indicate that a coding problem may exist. Additional triggers were developed in the research phase of this project, and we're generating and receiving new ideas for MDS items that could potentially serve as triggers. More on this later in the presentation. The Paired Assessment Accuracy Report as it currently exists has three sections: one section for identifying information and two trigger sections. The identifying front page contains facility identifiers, such as name, address, and Medicare provider number, and the dates of assessments that are included in the calculation of the triggers. The MDS trigger pages include information on trigger statistics, such as the number of residents with five- and 14-day assessments or 90-day paired assessments, the number of resident assessments with trigger conditions, the percentage at the facility, and a comparison of facility triggers with state and national percentages. The MDS trigger pages also include trigger item definitions, an explanation of the trigger conditions, and a list of triggered resident assessments by Medicare number, date of birth, MDS assessment reference date (the A3A item), and the actual code for the triggered item, for example item I2E or item H1B. The PAR is envisioned as a periodic report that you can download yourself from QIES, just as you do your quality measure reports. Once your facility downloads its report, we would expect you to complete a series of steps to ensure that the MDS assessments listed on your PAR are accurate. These steps would include reviewing the report and selecting some cases, such as those most recently completed, for review. You would then review the flagged MDS assessments against the medical record to see if the coding of the triggered condition is accurate. In the event that you discover coding errors, you would follow your usual internal procedures for correction and submission of corrected forms, and we would hope that you would use the information gleaned from your reviews and your findings to improve facility MDS coding processes. For example, you may want to do an in-service on the specific trigger condition or develop a quality improvement plan around the assessment of the particular trigger condition that you found to be problematic. At this point, we have no plans to request information from you about these reviews, so please consider these reports simply a means to assist you in data quality improvement. In order to further understand the utility of the trigger conditions and the report, we convened a focus group during the summer of 2002. Six representatives of nursing facilities who were actively involved in MDS assessments were nominated by the American Health Care Association, the American Association of Homes and Services for the Aging, and the American Association of Nurse Assessment Coordinators. The six focus group members received prototype reports via regular mail in advance of our two focus group meetings.
We asked each of them to review a sample of medical records for those residents who were reported as having met the trigger conditions in preparation for our meeting. They were provided with an instructional manual that detailed how to complete the medical record review. Focus group participants provided the project team with valuable feedback about the appearance and content of both the report and the instructional manual. They also had a great deal to say about the various trigger conditions that we had reported on. We incorporated this feedback into a revised reporting format and instructional manual and determined that we would test only two trigger conditions on a larger scale. At this time, we also gave the report its name: the Paired Assessment Accuracy Report. In order to determine whether the entire provider feedback mechanism, including the electronic delivery of the report, would work on a larger scale, we designed and implemented a pilot project this past spring. A total of 28 facilities in the states of Arkansas, Indiana, New Hampshire, Vermont, and Virginia volunteered and completed the six-week pilot project. These facilities were recruited with the assistance of the national offices of AHCA and AAHSA and their state affiliates, as well as the Hope Organization in Indiana. Participants were asked to review their Paired Assessment Accuracy Reports and to examine the medical records for a small number of cases listed on their reports to determine if the MDS coding was accurate. Most participants reviewed the medical records and were able to provide the project team with feedback about the process and about their findings. Participants found the reports to be easily accessed via CASPER, and most found the reports themselves to be helpful. Some reported that the PAR process assisted them not only in identifying MDS coding discrepancies but in understanding the source of their coding errors, so that they could avoid such problems in the future. Now that the PAR has been demonstrated to be a useful tool that's easily accessible by facilities through existing data systems, we plan to provide this report to all nursing facilities in the near future. By this fall, all facilities with access to QIES should be able to download a Paired Assessment Accuracy Report for their facility and to access useful information, such as the instructional manual, that can assist in using the PAR to conduct MDS self-audits. There is ongoing work to develop new MDS trigger conditions using the feedback obtained from both the focus group and the pilot project participants, who shared many good ideas for potential new triggers. Other types of provider feedback reports are also envisioned over time. Again, we'll keep you informed of the development and availability of such reports as we progress. Thank you for your attention. And we'd like to say thank you to Terry. Next we have Bob Goldrick back with us. His presentation is on communication with providers, state agencies, and fiscal intermediaries. Bob? Thank you, Doris. We recognized from the very outset that good communication would be absolutely essential to the success of this project. There are a lot of review activities that affect providers: state reviews, fiscal intermediary reviews of Medicare claims, state Medicaid review activities, and others. Our objective was, and is, to do our job with a minimum of disruption to the provider community and to focus on efficient and effective communication with other reviewing authorities.
Toward this end, we have taken the following actions. We established two working groups that we consult in developing and refining our plans and addressing key issues. One is the state agency work group, which includes survey staff, RAI coordinators, Medicaid representatives, and CMS regional office staff. The other is the fiscal intermediary work group, which includes a subset, or cross-section, of FIs who advise us on medical review and benefit integrity activities. We use these groups to address issues such as how to coordinate our DAVE review schedule with their processes and how best to share information with one another. Before we select providers for review, before we request medical records or schedule on-site reviews, we interact with the state agencies, FIs, and CMS regional offices to ensure that our activities won't interfere with theirs. We do all of this in a confidential manner, providing information only to those entities with the proper authority for the providers involved. Similarly, we conduct teleconferences to discuss review findings with the states and intermediaries. Again, I want to emphasize that only the appropriate entities, those with legal jurisdiction for the providers reviewed, participate in these discussions. We have found these exchanges to be very helpful in identifying areas where education can be focused to increase the accuracy of assessment data and assessment practices. All right, thanks, Bob. Before we hear from our final speaker, Michelle McDonnell, I'd like to give you the telephone numbers to call and fax in your questions during the last Q&A portion of the broadcast. The number to call is 1-800-953-2233. If you'd like to fax us, dial 1-410-786-0123. This last presentation of our program focuses on hints that providers can use to improve their MDS accuracy. Michelle, what are some of the helpful hints that we can share for improving MDS accuracy? Well, first and foremost is to have the most current Long-Term Care Resident Assessment Instrument User's Manual, which was updated December 2002 and became effective January 1st, 2003. This revised manual includes the updates, expanded clarifications, and new case studies for the processes and clinical items required for MDS resident assessments that have occurred since the last publication in 1995. The manual is a must for any provider. Now, are there any other resources that can be useful in conjunction with this manual? Absolutely. Many times CMS will determine that further clarification of an item is needed, and CMS will post new clarifications on its website. If a clarification has been posted on the official CMS website, then it can be considered policy. Providers should monitor the CMS website at www.cms.hhs.gov for these clarifications. Are there specific hints for improvement on particular sections of the MDS? Sure, let's discuss each section of the MDS and share a few of our findings. Sections AA, AB, AC, and AD: the majority of discrepancies associated with these areas are data entry errors made by the facility. For example, the date of birth has been entered incorrectly, or there is an omission or incorrect coding of where the resident was admitted from. Taking a second look at these sections could diminish the number of data entry errors made by the facility. For section A, although the assessment reference date, or ARD, is identified on the MDS, at times facility staff will either complete the assessment or document findings that occurred before or after the established ARD.
In either situation, valuable resident information is omitted or cannot be captured because the assessment was completed outside the observation period. Providers need to be sure that all members of the interdisciplinary team are aware of the established ARD and of the specific observation periods for each section of the MDS. Not all sections have a seven-day look-back period; some cover 14 or even 30 days, depending upon the MDS section. However, the ARD is the date when all MDS observation periods end. So, Michelle, it sounds as though this is a pivotal date and it has to be taken seriously. Yes, it is. Another item under section A where we have found discrepancies is advance directives. Upon admission to the facility, a resident may not have had any advance directives. However, over the course of the stay, an advance directive gets completed and placed in the chart, but the MDS does not get updated with that information. With section B, our discrepancies usually are due to the facility coding short-term memory at the resident's highest level rather than the most representative level of function. Therefore, a resident with a short-term memory problem six out of the seven days should be coded as a one. For section C, the various findings in this section are due to facilities not recognizing that C1 through C6 have a seven-day look-back period while C7 has a 90-day look-back period. Here's an example where information can be omitted or not captured accurately due to not realizing the correct look-back period. For section D, the major discrepancy is failure to capture visual appliances under D3. This is an MDS item found most frequently on two-stage reviews, since we actually observe and interview the resident. During the reconciliation, this gives us an opportunity to point out the omission to the facility clinician. For section E, as demonstrated in our clip of the reconciliation, discrepancies have been identified due to the facility clinician not realizing that the mood items can be coded even if the behavior occurred only once during the 30-day observation period. In this particular section, we will also find contradictions between the social service notes and the nurses' notes. For example, social work will code the resident as not resisting care, E4E. However, in the nursing progress notes or on the medication administration records, documentation will note that the resident was combative with care, resisted assistance with ADLs, or refused medications or treatments. If that nursing documentation falls within the observation period, the behavior should be coded. This is where we will discuss with the facility staff that the coding for this item focuses on the resident's actions, not intent. Okay, so are there other sections of the MDS where we may find contradictory information between different disciplines? Yes, section F is another one of those sections. Again, this is a section very often completed by social work, but when reviewing the activity notes, for example, we may find contradictory information about the resident's sense of involvement, relationships, and past roles. All right, then how can a facility avoid these contradictory situations? Communication among the interdisciplinary care team is essential. Whether it's a short daily team meeting or a more comprehensive weekly team meeting, it is absolutely necessary for all members to discuss and share information concerning a resident.
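Michelle's point that every look-back window ends on the ARD lends itself to a quick illustration. The sketch below checks whether a documented observation falls inside an item's look-back window; the item-to-window table is a small illustrative sample drawn from periods mentioned in this broadcast, not an authoritative reference, and the inclusive boundary handling is simplified:

```python
# Sketch: does a documented observation fall inside an item's look-back
# window? Every window ends on the assessment reference date (ARD).
# The look-back lengths below are a small illustrative sample, not a
# complete or authoritative table, and boundary handling is simplified.
from datetime import date, timedelta

LOOKBACK_DAYS = {"E1A": 30, "C1": 7, "C7": 90}

def in_lookback(item: str, observed: date, ard: date) -> bool:
    window_start = ard - timedelta(days=LOOKBACK_DAYS[item])
    return window_start <= observed <= ard

ard = date(2003, 5, 5)    # ARD from the earlier dramatization
note = date(2003, 4, 10)  # date of the social service note
print(in_lookback("E1A", note, ard))  # True: inside the 30-day window
```

Run against the dramatization's dates, the same note that falls outside a 14-day window sits comfortably inside E1A's 30-day window, which is exactly the coding error the reviewer caught.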
Section G is another section that requires communication, not only among the interdisciplinary team members but also across all three shifts. It is probably the most challenging section of the MDS to complete and the one section we spend a great deal of time on during reconciliation. Most of the discrepancies identified in section G are due to facilities coding ADL self-performance based on what the facility clinicians feel the resident is capable of doing rather than what the resident actually did do. The full seven-day look-back period is not utilized, nor are facilities capturing what the resident performed throughout the three shifts per day. A resident's ADL self-performance may vary from day to day and from shift to shift; therefore, different levels of staffing assistance may be necessary for that resident depending upon the time of day. Many times the facility is providing more staffing assistance than it takes credit for. All right, this MDS tool is fascinating. It seems to take in all the aspects of an individual resident. It truly does. That is why it's so important for the data to be accurate, in order to best meet the resident's needs. In section H, the discrepancies are usually due to misunderstanding the definition of a scheduled toileting program. The revised RAI manual has a very detailed clarification of when to code this item and what is required to do so. I encourage all facility clinicians to review this section if they have not done so already. We are now up to section I. In this section, we find facilities not coding disease diagnoses that have a relationship to the resident's current ADL status, cognitive, mood, and behavior status, medical treatments, nursing monitoring, or risk of death. Instead, facilities list inactive diagnoses or omit diagnoses that have a major effect on resident care. For section J, upon review of the medical record, the documentation reflects that the resident has complaints of pain or has had a fall, yet the MDS does not capture these areas. This is usually due to the clinician not thoroughly reviewing the medical record prior to the completion of the MDS, so these items are missed or not captured. Discrepancies associated with section K are many times due to the facility not being aware that chewing and swallowing problems are coded even when interventions have been successfully introduced. Facilities will code this item as none of the above when in fact they should continue coding the problem. Well, some of these misunderstandings or misinterpretations sound as though they can easily be corrected, is that right? Yes, it's just a matter of reviewing and referring to the RAI manual for clarification. Some sections, such as L, are very simple to correct because the issue here is usually a failure to code that the resident has dentures and/or a removable bridge. Once these areas are shown to the facility, the usual response is, "I was in a hurry and didn't take the time to look." Moving on to the next section, section M: the most significant finding in this section was the miscoding of pressure ulcers and the staging of wounds. Facility clinicians were uncertain of the specific definitions for staging ulcers and for identifying the type of ulcer. When on site, section M provided several educational opportunities for the reviewers to consult with the facility clinician and review the RAI guidelines and the most recent clarifications on skin conditions.
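Michelle's section G point — code what the resident actually did, across all three shifts and the full seven days — suggests a simple self-audit a facility could run on its own documentation. The sketch below only checks documentation coverage under a hypothetical data layout; it does not implement the RAI's actual ADL scoring rules:

```python
# Sketch: verify that ADL documentation covers all three shifts across
# the full 7-day look-back before coding section G. The data layout is
# hypothetical, and this does not implement the RAI's ADL scoring rules.
from collections import defaultdict

SHIFTS = ("day", "evening", "night")

def coverage_gaps(entries, days=7):
    """entries: list of (day_index, shift, self_performance_code)."""
    seen = defaultdict(set)
    for day, shift, _code in entries:
        seen[day].add(shift)
    return [(day, shift)
            for day in range(days)
            for shift in SHIFTS
            if shift not in seen[day]]

# Sparse hypothetical documentation: two shifts on day 0, one on day 1.
entries = [(0, "day", 2), (0, "evening", 3), (1, "day", 2)]
print(coverage_gaps(entries)[:3])
# [(0, 'night'), (1, 'evening'), (1, 'night')] -- gaps to chase down
```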
When reviewing section N, the coding on the MDS would many times conflict with the information found in the activity notes. This occurred mainly when a non-activity staff member completed the section and the notes were not reviewed prior to completing the MDS. Well, I can see where that could be a problem. For section O, the majority of discrepancies identified involve miscounting the number of medications and miscoding injections. Facility clinicians required clarification on when a purified protein derivative, or PPD, could be counted as an injection. Discrepancies noted in section P included miscalculation of therapy minutes. Some examples of those inaccuracies include rounding minutes off to the nearest hundred; choosing a RUG score for a resident and then computing the total minutes of therapy required to attain that RUG score, instead of determining the resident's need for therapy; coding nursing rehabilitation and restorative care without meeting the specific RAI guideline criteria; miscoding restraints and devices; and miscounting physician visits and orders. Facility clinicians required clarification on how and when to count a physician visit, and frequently the facility misunderstood when to count a change in orders. The tendency was to count each change in an order.
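The therapy-minutes problems Michelle lists come down to simple arithmetic: the total must be the sum of the actual minutes delivered during the look-back, not a rounded figure and not a number back-computed from a target RUG category. A minimal sketch, with hypothetical session data, shows how much distortion rounding alone can introduce:

```python
# Sketch: section P therapy minutes should be the sum of actual minutes
# delivered during the look-back period. Session data is hypothetical.
sessions = [
    ("PT", 42), ("PT", 38), ("OT", 27), ("OT", 33), ("ST", 23),
]  # (discipline, actual minutes delivered)

actual_total = sum(minutes for _discipline, minutes in sessions)
rounded_total = round(actual_total, -2)  # the "nearest hundred" error

print(f"actual:  {actual_total} minutes")   # actual:  163 minutes
print(f"rounded: {rounded_total} minutes")  # rounded: 200 minutes
```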