Could voice control of the EHR improve physician efficiency and satisfaction?
Farhad Manjoo, the New York Times technology columnist, wrote a story this week about the Amazon Echo, a home device that uses the Amazon Alexa voice recognition service to allow natural-feeling interactions with web searches and web-connected home devices. My immediate reaction was that this will be the model for human-computer interaction with the future EHR.
Where we now hunt endlessly through complicated menus with mouse and keyboard, in 5-10 years we will instead be able to say aloud, “Order Mrs. Jones a metabolic panel, lipid panel, and A1C to be done today and again in 6 months. Send the order to the Quest lab. She also needs a bone density scan ordered to screen for osteoporosis. Make sure she has this scheduled and completed by the end of the year. Make a referral to Dr. Smithson in Cardiology for management of coronary artery disease. Send in a one-year refill of her metformin. At the end of our visit today, please send a letter with my full note from today’s visit to her primary care physician.”
As the physician speaks these orders, an order “shopping cart” will build on-screen, allowing verification that the system selected the right items. What now takes several minutes and immense cognitive effort could instead be completed with natural speech and minimal effort in far less time.
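To make the idea concrete, here is a minimal sketch of how transcribed speech might be turned into that verification cart. Everything here is hypothetical: it assumes speech-to-text has already produced plain text, and the keyword patterns and order types are illustrative stand-ins for a real natural language understanding service or EHR ordering API.

```python
# Hypothetical sketch: turning a transcribed clinician utterance into a
# structured "shopping cart" of orders for on-screen verification.
# The order categories and keyword rules below are illustrative only.
import re
from dataclasses import dataclass


@dataclass
class Order:
    kind: str    # e.g. "lab", "imaging", "referral", "refill"
    detail: str  # the original phrase, shown for clinician verification


def parse_orders(transcript: str) -> list[Order]:
    """Very naive keyword matcher standing in for a real NLU service."""
    cart = []
    for sentence in re.split(r"\.\s*", transcript):
        text = sentence.strip()
        lowered = text.lower()
        if not text:
            continue
        if "panel" in lowered or "a1c" in lowered:
            cart.append(Order("lab", text))
        elif "scan" in lowered:
            cart.append(Order("imaging", text))
        elif "referral" in lowered:
            cart.append(Order("referral", text))
        elif "refill" in lowered:
            cart.append(Order("refill", text))
    return cart


cart = parse_orders(
    "Order a metabolic panel, lipid panel, and A1C. "
    "She also needs a bone density scan. "
    "Make a referral to Cardiology. "
    "Send in a one-year refill of her metformin."
)
for item in cart:
    print(item.kind, "->", item.detail)
```

A production system would of course need real speech recognition, clinical vocabulary, and integration with the EHR's order catalog; the point of the sketch is only that spoken sentences can map to discrete, reviewable order items before anything is signed.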
I hear from colleagues all the time that they are overwhelmed by EHRs with too many buttons and menus and too much clicking around. They feel disconnected from the patient sitting in their office, and they feel that the computer screen has intruded on that relationship. Perhaps a natural language voice recognition system like Alexa is one step toward a more satisfying and connected experience for everyone.