By Howard Rodenberg, MD, MPH, CCDS for ACDIS CDI Blog
Nobody likes being proven wrong, but it’s not always a bad thing. It’s probably okay to be proven wrong when you think it’s a good idea to feed a shark by hand, or to drink a few beers and then tell your buddies you can bite the head off a rattlesnake. You generally learn something from errors like these, or you die. Either way, it makes you think.
While my CDI mistakes aren’t quite as lethal, they still occur. So props to Teresa Posadas, one of our CDI specialists here at Baptist Health, who called me out the other day. She was reviewing the case of a patient with an acute cerebrovascular accident (CVA) who failed her swallowing study. The speech therapy note indicated the patient had mild to moderate oral dysphagia and mild pharyngeal dysphagia. Teresa queried the physician for concurrence that the patient had oropharyngeal dysphagia related to the CVA.
The question that came across my desk was whether this was a reasonable query. Were we looking for too much specificity in the oropharyngeal dysphagia, and could we get the same impact with a simple note of dysphagia? I thought perhaps we were asking the doctor for more specificity than was needed. The DRG assignment wouldn’t change, and clinically it didn’t seem to make much difference, so even though it was a perfectly valid query, skipping it would be one less thing added to the physician’s workload.
Teresa didn’t give up. She gave me a great article to review that discussed the impact of dysphagia on healthcare costs and length of stay. But more importantly, she pointed out that I had erred when looking at a coding table, and oropharyngeal dysphagia did indeed elevate the case from a severity of illness (SOI) level 1 to an SOI of 2 within the assigned DRG.
Oops.
But the exchange got me thinking about the whole idea of specificity and its limits. For a while, I’ve had this idea percolating in the back of my head that we often pursue specificity for its own sake. The desire for specificity is really the impetus behind the ever-expanding corpus of ICD-10-CM, and it’s the driving force behind much of coding practice. But as a clinician, I can take almost any page of the ICD-10 codebook and find diagnoses so narrow that the terms are clinically meaningless. So how much specificity is good, and how much of it is simply spinning our wheels?
I think that the next time I’m presented with a query like Teresa’s, I’ll temper my judgment by asking whether the specificity requested fits into one of three categories. The first is what we’ll call fiscal specificity: specificity that makes a financial difference. Queries that affect DRG assignment by clarifying a principal diagnosis, establishing a secondary diagnosis that confirms a CC/MCC, or adding specificity that is reflected in an increased SOI fall into this category.
The next category would be regulatory specificity, covering queries that apply to regulatory mandates. Establishing whether a pressure sore or a catheter-associated urinary tract infection (CAUTI) was present on admission or represents a hospital-acquired condition (HAC) is a prime example. Regulatory queries may also apply to readmissions, clarifying whether a particular case falls within a readmission category; differentiating the patient with end-stage renal disease and volume overload from a case of acute heart failure fits this model. While regulatory queries may have a fiscal impact, their primary goal is to aid in assessing the quality of care.
If a query doesn’t fit into one of the prior categories, it is then evaluated for clinical specificity (perhaps better termed “clinical impact”). This is the most difficult category to evaluate, as there are no standards or metrics to guide us. We all know that in many ways the coding system is flawed; things that physicians believe have a significant impact on patient care (such as the presence of paroxysmal atrial fibrillation with the use of anticoagulants or antiplatelet agents) do not “count” as a CC/MCC. Trying to explain this dichotomy to a clinician is difficult. While one might try to review the CMS methodology for determining CCs/MCCs with them, most providers have neither the time nor the inclination to learn it.
This is where the education and experience of the CDI specialist come in. When appropriately supported by a physician advisor, no one on the administrative side of medicine is better positioned to determine what further documentation adds to the description of the case and what simply doesn’t matter. “What matters” is, of course, often a difficult call. As an emergency department (ED) physician, I look at the chart as a note for the next provider to read. If the patient bounces back to the ED tomorrow, will the next doc know what I was thinking and why I did what I did? Anything more is superfluous. If additional specificity is possible but won’t make a difference to the next clinician who reads the chart, it’s hard to justify sticking another query in the provider’s workflow unless we’re simply trying to inflate our numbers.
These three categories still cast a wide net and encompass an expansive range of query opportunities. But applying this thought process as a test before sending a query for diagnostic specificity may eliminate some of the more granular queries that have no impact outside of productivity metrics. Just because we can be more specific doesn’t mean we always should.
Public confession complete. Teresa, I stand—well, since I’m typing, I sit—corrected. Thanks.