The Good, the Bad, and the Facts: Multimodal Representation of Medical Conversations for Patient Understanding
Medical patients face significant challenges in managing their health information. Cancer patients in particular have a uniquely difficult experience: they must endure the physical and emotional effects of their illness while simultaneously navigating overwhelming amounts of medical information. In this thesis, I focus on the challenge of reviewing and extracting information from medical appointments for cancer patients. First, I propose a novel multimodal interface that helps patients review and understand information from conversations with their doctors. The interface captures medical conversations as text and audio, with important positive and negative information highlighted. Results from user studies with 25 participants show that the interface is helpful for reviewing conversations. Second, I propose a machine learning model that automatically classifies positive and negative information in medical conversations based on analysis of the text and of prosody in speech. The best-performing model on my dataset achieved an accuracy of 90.6% and an F1-score of 0.888.