Giving patients the chance to speak
Growing understanding of how to categorize patient symptoms for efficient diagnosis has led to structured patient interviews and diagnostic flowcharts that can improve diagnostic accuracy and save valuable physician time. But the rigidity of predefined questions and a controlled vocabulary for answers can leave patients feeling over-constrained, as if the doctor (or computer system) is not really attending to them. I'm Listening is a system for automated questioning that respects the voice of the patient and makes the task of information elicitation more enjoyable and educational.
I'm Listening does not replace a human doctor, but can be used before an office visit to prepare the patient, deliver educational materials, triage care, and preorder appropriate tests, making better use of both doctor and patient time. It uses an on-screen avatar and natural language processing to (partially) understand the patient's responses. At its core is a commonsense reasoning system that lets patients express themselves in unconstrained natural language, even using metaphor, and maps that language to medically relevant categories.
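The mapping from free patient language to medical categories can be sketched as a walk over a concept graph. The toy edges, phrases, and category names below are illustrative stand-ins for the kind of relations a resource like ConceptNet provides; they are not the project's actual data or code.

```python
from collections import deque

# Hand-made illustrative concept graph. A real system would draw
# relations like these (RelatedTo, IsA, ...) from ConceptNet rather
# than a hard-coded dictionary.
CONCEPT_EDGES = {
    "hammering in my head": ["pain", "head"],
    "pain": ["symptom"],
    "head": ["headache"],
    "headache": ["neurological"],
    "fire in my chest": ["burning", "chest"],
    "burning": ["pain"],
    "chest": ["heartburn"],
    "heartburn": ["gastrointestinal"],
}

# Hypothetical target categories for triage.
MEDICAL_CATEGORIES = {"neurological", "gastrointestinal", "cardiac"}

def categorize(phrase: str) -> set:
    """Breadth-first walk from a patient's phrase (possibly a metaphor)
    to any reachable medically relevant category."""
    seen, queue, hits = {phrase}, deque([phrase]), set()
    while queue:
        node = queue.popleft()
        if node in MEDICAL_CATEGORIES:
            hits.add(node)
            continue
        for nxt in CONCEPT_EDGES.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return hits

print(categorize("hammering in my head"))  # {'neurological'}
```

Even this tiny sketch shows the appeal of the approach: the metaphorical "hammering in my head" never mentions a symptom name, yet graph traversal still lands on a clinically useful category.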
The avatar animation framework used in I'm Listening is powered by Oddcast.com. The commonsense reasoning is powered by Open Mind Commons and ConceptNet from the Software Agents group at the MIT Media Lab.