By John Moore for SearchHealthIT.com
Health care providers over the years have raised a number of objections to electronic health records — they cost too much, disrupt practices already pressed for time and fail to mesh with the way medical offices work.
But there’s an even more fundamental digital challenge — some doctors don’t want to busy their fingers on a keyboard. Indeed, manual data entry can be a barrier to EHR acceptance. Physicians may well prefer to document patient encounters in the traditional style, dictating notes and using a transcription service.
Against this backdrop, speech recognition technology offers doctors another way to fill out a patient’s electronic chart. Speech recognition systems, which may be installed on premises or accessed remotely, translate speech into text. The technology is already well established in health care, with radiology departments at the forefront.
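For readers who want a concrete picture of that pipeline, here is a minimal sketch using the open-source Python SpeechRecognition package. The package and the specific engines are assumptions chosen for illustration; they are not the technology behind any product mentioned in this story.

```python
import speech_recognition as sr

recognizer = sr.Recognizer()

# Load a dictated clinical note from an audio file.
with sr.AudioFile("dictation.wav") as source:
    audio = recognizer.record(source)

# Remote mode: the audio is sent to a hosted service for transcription.
try:
    print(recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Speech could not be understood.")

# On-premises mode: CMU Sphinx transcribes entirely on local hardware
# (requires the pocketsphinx package).
# print(recognizer.recognize_sphinx(audio))
```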
The new twist is speech recognition technology’s potential to become a widely used front end to an EHR system.
Reid Conant, M.D., an emergency medicine physician who practices at Tri-City Medical Center in Oceanside, Calif., believes speech recognition is on the cusp of broader EHR implementation. Tri-City uses Nuance Communications Inc.’s Dragon Medical Enterprise Network Edition, which integrates with the hospital’s Cerner Corp. EHR system.
“We are still on the steep part of the curve,” Conant said of the adoption rate.
Industry experts cite three reasons why speech recognition technology’s role in EHR systems could be poised for growth.
- Accuracy has improved significantly, which means doctors spend less time cleaning up notes.
- EHR vendors are integrating voice recognition into their systems.
- The federal government’s meaningful use initiative has expanded EHR adoption beyond early adopters. These potentially less tech-savvy mass-market users may embrace voice as an alternative to the hunt-and-peck school of data entry.
That said, the technology faces a few obstacles. Voice dictation entered as unstructured text may present problems when it comes to extracting data for reporting and analysis. Vendors, however, aim to employ natural language processing to tag key clinical data for later retrieval.
Appeal of speech recognition technology: Talk, don’t type
Steven Zuckerman, M.D., a neurologist with a solo practice in Baton Rouge, La., discovered keyboarding wasn’t his forte when he adopted an EHR system. “I quickly figured out that I would not be the greatest typist in the world,” he explained.
Zuckerman began exploring voice input several years ago, working with Nuance’s Dragon 7. The initial experience proved somewhat frustrating.
“When I first started trying it out, the accuracy wasn’t at the point where it was particularly efficient,” he said, noting the many corrections that had to be made following the voice-to-text conversion.
Zuckerman retried speech recognition technology a few years later with Dragon 9. He has been using the software ever since.
Improvements in accuracy have swayed other physicians, Conant noted. He often encounters clinicians who previously tried voice input but balked at the amount of correction required. The latest generation of the technology changes minds.
“They see it and they are shocked,” Conant said. “They realize they can dictate three or four detailed paragraphs of medical decision making and it is nearly perfect.”
Keith Belton, senior director of product marketing for Nuance’s health care division, noted that Dragon 7, released in 2003, had 80% out-of-the-box accuracy — that is, before a user trains the software to recognize his or her specific speech pattern. Version 10, the product included in Network Edition, features out-of-the-box accuracy in the mid to high nineties, he added.
Gregg Malkary, managing director of Spyglass Consulting Group, a mobile health IT consulting firm, acknowledged that the technology has improved significantly compared with where it stood several years ago. But accuracy issues remain, he said. Some providers may question the actual time savings of voice recognition if they still have to dive back into a document to check for errors.
As Malkary put it, “Is 90% good enough, or do I really need 99.9%?”
Speech recognition technology on board within EHR systems
Such concerns don’t seem to have limited adoption at Tri-City. Use of voice in clinical documentation began in the emergency department in 2007 and has continued to spread. Wound care and workers’ compensation doctors started using speech recognition technology about six months ago, Conant noted. Tri-City’s hospitalists and subspecialty doctors will go live with voice in October.
The experience of earlier users encouraged more doctors to try voice. “They are seeing their colleagues using Dragon and are requesting the application,” Conant said.
But doctors don’t necessarily have to ask for speech recognition technology to have it at their disposal, as it is increasingly becoming a built-in feature of EHR systems. Greenway Medical Technologies Inc., for example, has agreed to integrate M*Modal’s cloud-based speech recognition technology into its EHR.
Similar deals may follow. Don Fallati, senior vice president of marketing at M*Modal maker Multimodal Technologies Inc., said other EHR vendors have contacted M*Modal to discuss integration. He sees a precedent for this type of link-up in radiology, where speech is already deeply embedded in picture archiving and communications systems (PACS) and radiology information systems (RIS).
Epocrates Inc., meanwhile, plans to integrate Nuance speech recognition technology into its EHR system. Dr. Thomas Giannulli, chief medical information officer at Epocrates, said the product will feature speech alongside other data entry options such as point-and-click menus.
The arrival of voice as a standard EHR feature coincides with the government’s push for wider EHR adoption. The federal meaningful use program, which runs through 2015, offers financial incentives to doctors and hospitals deploying EHR systems.
Raj Dharampuriya, M.D., chief medical officer and co-founder of EHR vendor eClinicalWorks LLC, said Washington’s incentives have pushed the EHR market into more of a mass adoption phase.
“We’re seeing more physicians come on board that are not as computer savvy,” Dharampuriya said. “Voice provides a very nice phasing into EHRs.”
Data mining as next wave of speech recognition technology
Doctors may find voice recognition useful as an EHR input tool, but vendors aim to push the technology further. When physicians compile text narratives via voice, they end up with unstructured data that is hard to tap for meaningful nuggets of information. Companies such as M*Modal and Nuance are working to address this issue through natural language processing.
Pairing speech with an EHR marks a stage-one deployment of speech recognition technology, Fallati said. He said M*Modal’s “speech understanding” technology takes the voice-entered narrative and translates it into a searchable document. The document can then be mined for purposes such as quality reporting.
Nuance, for its part, pursues “clinical language understanding” — an offshoot of natural language processing. The idea is to mine structured data from free-form text and tag the key clinical elements such as medications and health problems.
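A toy sketch can make that idea concrete. The hard-coded word lists below are hypothetical stand-ins for the clinical vocabularies (for example, RxNorm for medications and SNOMED CT for problems) that a production system would draw on, and nothing here reflects how Nuance or M*Modal actually implement their technology.

```python
import re

# Hypothetical mini-lexicons standing in for real clinical vocabularies.
MEDICATIONS = {"lisinopril", "metformin", "atorvastatin"}
PROBLEMS = {"hypertension", "diabetes", "hyperlipidemia"}

def tag_clinical_elements(note: str) -> dict:
    """Tag medications and problems found in a free-text narrative."""
    words = set(re.findall(r"[a-z]+", note.lower()))
    return {
        "medications": sorted(words & MEDICATIONS),
        "problems": sorted(words & PROBLEMS),
    }

note = ("Patient with hypertension and type 2 diabetes. "
        "Continue lisinopril and metformin.")
print(tag_clinical_elements(note))
# {'medications': ['lisinopril', 'metformin'],
#  'problems': ['diabetes', 'hypertension']}
```

Once tagged this way, the elements can be stored as discrete fields and queried later, supporting uses such as the quality reporting Fallati describes above.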
Zuckerman, the Baton Rouge neurologist, believes current developments in speech will eventually lead to the self-documenting office visit. He envisions exam rooms set up to selectively record the relevant details as doctor and patient verbally interact.
“We’re not close to that yet, but that would be great,” he said.