Medical researchers rely on chart reviews, in which a user manually goes through a large number of electronic medical records (EMRs), to search for evidence to answer a specific medical question. Unfortunately, scrolling through vast amounts of clinical text to produce labels is time-consuming and expensive. For example, at Vanderbilt University Medical Center, it currently costs $109 per hour for a service that pays a nurse to review patient charts and produce labels. Therefore, methods are needed to i) reduce the cost of chart reviews and ii) help medical researchers identify relevant text within medical notes more efficiently.
First, to reduce the cost of chart reviews, we developed the VBOSSA crowdsourcing platform, which protects patient privacy and maintains a professional clinical crowd including medical students, nursing students, and faculty from Vanderbilt University Medical Center. With the support of VBOSSA, medical researchers have saved over 700 hours of manual chart review with relatively accurate results (average accuracy of 86%) at an average cost of around $20 per hour.
Second, to boost the efficiency of crowd workers in retrieving information from unstructured medical notes, we developed a Google-style EMR search engine, which provides high-quality query recommendations and automatically refines queries while the user searches and reviews documents. Underpinning the EMR search engine are three novel approaches to:
(1) Extract clinically similar terms from multiple EMR-based word embeddings;
(2) Represent the medical contexts of clinical terms in a usage vector space and then leverage the usage vector space to better learn the users’ preferred similar terms;
(3) Propose two novel ranking metrics, negative guarantee ratio (NGR) and critical document, based on user experience analysis in chart reviews.
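To make approach (1) concrete, the following is a minimal sketch of embedding-based similar-term retrieval: terms near a query term in an embedding's vector space (by cosine similarity) are proposed as candidates. The vocabulary and vectors here are invented for illustration; the dissertation's actual method combines multiple EMR-trained embeddings and is not reproduced here.

```python
import numpy as np

# Toy embedding table standing in for an EMR-trained word-embedding model.
# These vectors and terms are illustrative only, not from the dissertation.
embeddings = {
    "hypertension": np.array([0.90, 0.10, 0.00]),
    "htn":          np.array([0.85, 0.15, 0.05]),
    "diabetes":     np.array([0.10, 0.90, 0.20]),
    "metformin":    np.array([0.20, 0.80, 0.30]),
}

def similar_terms(query, k=2):
    """Rank vocabulary terms by cosine similarity to the query term."""
    q = embeddings[query]
    scores = []
    for term, v in embeddings.items():
        if term == query:
            continue  # skip the query term itself
        cos = float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
        scores.append((term, cos))
    # Highest cosine similarity first
    return sorted(scores, key=lambda t: t[1], reverse=True)[:k]

# "htn" (a common clinical abbreviation) ranks closest to "hypertension"
print(similar_terms("hypertension"))
```

In practice such candidates would then be filtered by the usage-vector-space model of approach (2) to match each user's preferred terminology.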
The EMR search engine was systematically evaluated and achieved high performance in different information retrieval tasks, user studies, timing studies, and query recommendation tasks. We also evaluated different ranking and learning-to-rank methods using the NGR and critical document ranking metrics and discussed future directions for developing high-quality ranking methods to support chart reviews.
Committee: Malin, Bradley; Vorobeychik, Yevgeniy; Kunda, Maithilee; Chen, You
School Location: United States -- Tennessee
Source: DAI-B 82/9(E), Dissertation Abstracts International
Subjects: Computer science, Information Technology, Bioinformatics
Keywords: Electronic Medical Records, Medical usage contexts, Query expansion, Search engines, Vector space model, Clinical chart