Article by Marina Kravtsova. This article was originally published on the Journal of AHIMA website on Sep 29, 2017 and is republished here with permission.
Over the past nine months, the clinical documentation improvement (CDI) team that I represent has been recruited to perform so-called second level reviews. All cases presented to the CDI team for second level review thus far have been discharged and billed, so when health information management (HIM) professionals present these tasks to the CDI team, they often refer to them as “retro reviews.” The goal of these second level reviews is to identify opportunities to improve the accuracy of provider documentation, ensuring that the documentation is clinically appropriate, accurately reflects the patient’s severity of illness, and meets current Centers for Medicare and Medicaid Services (CMS) standards.
The focus of the reviews has shifted over the months: first considering provider documentation from the mortality index standpoint, then focusing on diagnoses and conditions that affect hierarchical condition category (HCC) scores, and then re-reviewing charts that were sent out for billing without any co-morbid or major co-morbid conditions having been captured. Some of these assignments target specific service lines such as neurology, thoracic surgery, or bone marrow transplants. Other rounds of second level reviews zoom in on specific DRGs, such as sepsis. Some assignments instruct CDI team members to query providers when opportunities for clarification with impact are detected in the course of these reviews. Other assignments ask the team to note potential query opportunities but not to query providers. When these assignments are announced to the CDI team, each member receives about five to seven cases to review within three to 10 days. Clinical documentation specialists (CDS) are instructed not to count these reviews towards productivity, but are asked to carve out time outside the productivity calculations to complete the reviews and capture their findings in an Excel spreadsheet. These findings are then discussed during weekly CDI meetings.
One might argue that these second level reviews can be considered part of the DRG reconciliation process. However, DRG reconciliation requires collaboration between the CDI and coding teams, aligning CDI documentation clarification efforts with coding guidelines and perspectives. Because these retro review tasks are assigned to the CDI team only, could they instead fall within the realm of the clinical validation initiatives many CDI teams are engaged in now? Yet, given the post-discharge nature of the charts presented to the team, I wonder if these assignments, as described above, truly meet the definition of clinical validation. I pose this question because some of the assignments have included charts that were previously reviewed by CDS. Could these assignments then be considered internal quality audits of the CDI team? Or, perhaps, be counted towards educational initiatives within the CDI team? Because the findings of these retro reviews are discussed among the entire CDI team, a learning opportunity is certainly present. The question then becomes: what metrics should these retro assignments be subject to? Is this the best use of CDI team time? How is the time spent on these retro reviews being justified? And, most importantly, should these second level reviews be counted towards CDI team productivity?
In a phone interview earlier this year, a CDI manager at a hospital in Texas shared with me that she was hiring additional CDI staff so that she could re-assign her seasoned CDI staff to various focused second level review projects. The newly hired staff were to continue performing the regular concurrent CDI reviews; she clarified that they would be allowed to work completely remotely and would likely be hired on a temporary basis, allowing her regular staff to catch up with the focused patient population second level reviews. The fact that this Texas hospital CDI manager was able to procure additional CDI positions so that second level reviews could be undertaken concurrently with regular CDI team workflows suggests that these second level reviews hold significant value for that hospital. How is that value being capitalized on? What is the return on investment of these second level reviews?
In preparation for this discussion, I also came across a job posting for the position of a CDI Second Level Reviewer to work under the supervision of a CDI Manager and in collaboration with a CDI Educator at a hospital in North Carolina. Responsibilities of the advertised position included secondary clinical chart reviews, resolution of DRG discrepancies, and education of clinical staff regarding opportunities for diagnosis clarification, principal diagnosis accuracy, and improved capture of additional co-morbid conditions, including hospital-acquired conditions (HACs) and focused patient safety indicator (PSI) diagnoses. So, again, both the reporting structure and the described responsibilities of this job posting suggest that the greatest value of second level reviews lies in their educational component. Would you agree? Does your CDI team perform second level reviews? How are they perceived by the team? Are they regarded as a burden, or is their educational value readily recognized? Where do these reviews fall within your team’s productivity? Let’s discuss these topics.