Dissertation/Thesis Abstract

The author has requested that access to this graduate work be delayed until 2019-05-30. After this date, this graduate work will be available on an open access basis.
Multimodal Deep Learning to Enhance the Practice of Radiology
by Badgeley, Marcus Alexander, Ph.D., Icahn School of Medicine at Mount Sinai, 2019, 236; 10974774
Abstract (Summary)

Machine learning and deep learning have demonstrated extraordinary potential in many disciplines and medical research studies. Deep learning algorithms for image recognition have rapidly evolved over the past decade, and there is interest in applying these new algorithms to pathology and radiology. This thesis addresses the translational gap for deep learning in clinical radiology. I provide practical tools for collecting more data and testing human enhancement, and I demonstrate symbiotic implementations for clinical practice. I then raise new theoretical issues concerning the eventual deployment of these technologies: specifically, how to incorporate them with existing medical information and how to train models that are suitable for widespread deployment.

Deep learning requires an abundance of data, and algorithms are commonly tested in isolation. The lack of clinical validation beclouds the question of when and how to deploy these technologies. The Computer Aided Note and Diagnosis Interface (CANDI) is a browser-based platform for distributed generation of annotation data and for randomized controlled trials of Computer Aided Diagnosis (CAD) utilities: 1) similar-image search, 2) diagnosis prediction, and 3) pathology localization.
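The randomized-trial design described above can be sketched in a few lines: each case is randomly assigned to a control condition or to one of the CAD utilities before the reader sees it. This is a minimal illustration only; the condition names and function are hypothetical, not CANDI's actual interface.

```python
import random

# Hypothetical sketch of CANDI-style randomization: assign each case to a
# control arm or one of the three CAD utilities. Names are illustrative.
CONDITIONS = ["control", "similar_image_search",
              "diagnosis_prediction", "pathology_localization"]

def assign_conditions(case_ids, seed=0):
    """Randomly assign each case to one study condition (reproducible via seed)."""
    rng = random.Random(seed)
    return {case_id: rng.choice(CONDITIONS) for case_id in case_ids}

assignments = assign_conditions(["case-001", "case-002", "case-003"])
```

Fixing the seed makes the assignment reproducible, which matters when reader performance across arms is later compared.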

The most salient strength of computers is their rapid processing speed, which is particularly useful in the context of managing acute neurologic diseases. The second study develops a Convolutional Neural Network (CNN) that scans CT images for evidence of critical pathology with below-human accuracy but at much greater speed. This CNN can triage studies to expedite expert interpretation of time-sensitive cases.
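The triage idea above amounts to scoring each study with the model and reordering the radiologist's worklist so likely-critical cases surface first. The sketch below is hypothetical: the scoring function is a stub standing in for a trained CNN's forward pass, and the field names are invented.

```python
# Hypothetical sketch of CNN-based triage: score each study and reorder the
# worklist by predicted urgency. `predict_critical_probability` stands in
# for a real model; here it just reads a precomputed stub score.

def predict_critical_probability(study):
    # Placeholder for a CNN forward pass over the CT images.
    return study["stub_score"]

def triage(worklist):
    """Return the worklist sorted by descending predicted urgency."""
    return sorted(worklist, key=predict_critical_probability, reverse=True)

studies = [{"id": "A", "stub_score": 0.10},
           {"id": "B", "stub_score": 0.92},
           {"id": "C", "stub_score": 0.45}]
ordered = triage(studies)  # B first, then C, then A
```

Even a below-human-accuracy score can shorten time-to-read for true positives, because the cost of a misranked case is delay rather than a missed diagnosis: every study is still read by an expert.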

Finally, I use statistical approaches to investigate how covariates are encoded by deep learning's "black box" and question the impact of this behavior. Using a single-site dataset with extensive metadata descriptors, I show how models can be improved by encoding biological and hospital process factors associated with hip fracture, at least when tested in isolation. But simulations of clinicians combining image-model predictions with overlapping patient data reveal limitations for CAD applications. Using multi-site dataset combinations, we show that heterogeneous patient populations and technical differences in image acquisition can benefit a model's performance on internal test data but subvert the model's external validity. We use clinical trial design to train a model that performs less impressively on internal testing but is more robust for worldwide deployment.
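The multi-site concern above can be illustrated with a toy calculation: when disease prevalence differs between sites, a model that merely recognizes the acquisition site can gain internal-test accuracy without learning any pathology. The data and function below are invented for illustration, not results from the thesis.

```python
# Hypothetical illustration of site confounding: prevalence differs by site,
# so site identity alone is predictive of the label. All data are invented.

records = [
    {"site": "A", "label": 1}, {"site": "A", "label": 1},
    {"site": "A", "label": 0}, {"site": "B", "label": 0},
    {"site": "B", "label": 0}, {"site": "B", "label": 1},
]

def prevalence_by_site(rows):
    """Fraction of positive labels per site."""
    totals, positives = {}, {}
    for r in rows:
        totals[r["site"]] = totals.get(r["site"], 0) + 1
        positives[r["site"]] = positives.get(r["site"], 0) + r["label"]
    return {site: positives[site] / totals[site] for site in totals}

prev = prevalence_by_site(records)  # {"A": 2/3, "B": 1/3}
# A "model" that predicts each site's majority label scores well internally
# but carries no external validity at a new site with different prevalence.
```

Balancing prevalence across sites during training, as a clinical-trial-style design does, removes this shortcut at the cost of some internal-test performance.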

Indexing (document details)
Advisor: Dudley, Joel T., Snyder, Thomas M.
Committee: Chen, Ben, Ma'ayan, Avi, Pandey, Gaurav, Sharp, Andrew, Tatonetti, Nick
School: Icahn School of Medicine at Mount Sinai
Department: Genetics and Genomic Sciences
School Location: United States -- New York
Source: DAI-B 80/04(E), Dissertation Abstracts International
Source Type: DISSERTATION
Subjects: Medical imaging, Computer science
Keywords: Computer vision, Encoding process factors, Multi-site dataset combinations, Translational research
Publication Number: 10974774
ISBN: 9780438706361
Copyright © 2019 ProQuest LLC. All rights reserved.