Dissertation/Thesis Abstract

Hybrid Model for Interpretable Time Series Analysis
by Rosales, Ruben, M.S., California State University, Long Beach, 2020, 28; 27835218
Abstract (Summary)

In this thesis, we study interpretable machine learning as applied to complex-valued time-series data. Researchers have studied several machine learning methods, such as Convolutional Neural Networks, Recurrent Neural Networks, and Support Vector Machines, for time-series classification. These methods, however, do not let users visualize the patterns they learn, which can be a necessity for adoption in highly scrutinized industries.

To address this issue, we propose an interpretable hybrid model that can be extended to any time-series dataset. In the majority of existing work on interpretability, black-box models consistently outperform white-box models. Instead of relying on either method alone, we propose a collaboration between the two, which achieves the performance of a black-box model while providing a more precise visualization of the dataset.

To accomplish this, we created a framework composed of two models: an ensemble of classifiers that functions as the black-box model, and a white-box model that encodes time series as images. The white-box model attempts to classify these images with a neural network, and the framework outputs all images correctly classified by both the black-box and white-box models.
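The framework described above can be sketched as follows. The abstract does not specify the image encoding or the classifiers, so the Gramian Angular Field encoding and the placeholder classifier functions below are illustrative assumptions, not the thesis's exact method; the agreement filter (keep only samples both models classify correctly) follows the description in the paragraph above.

```python
import numpy as np

def gramian_angular_field(series):
    """Encode a 1-D time series as a 2-D image (assumed encoding:
    Gramian Angular Summation Field). Assumes a non-constant series."""
    x = np.asarray(series, dtype=float)
    # Rescale to [-1, 1] so arccos is defined.
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    # G[i, j] = cos(phi_i + phi_j): a symmetric image of the series.
    return np.cos(phi[:, None] + phi[None, :])

def hybrid_agreement(samples, labels, black_box, white_box):
    """Return the encoded images that BOTH models classify correctly.

    black_box: callable mapping a raw time series to a predicted label
               (stands in for the ensemble of classifiers).
    white_box: callable mapping an encoded image to a predicted label
               (stands in for the image-classifying neural network).
    """
    kept = []
    for ts, y in zip(samples, labels):
        img = gramian_angular_field(ts)
        if black_box(ts) == y and white_box(img) == y:
            kept.append((img, y))
    return kept
```

In this sketch the user inspects the returned images directly: the black-box ensemble supplies the accuracy, while the image encoding gives a visual representation of each correctly classified series.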

Indexing (document details)
Advisor: Moon, Ju Cheol
Committee: Tankelevich, Roman; Kwon, Sean (Seok-Chul)
School: California State University, Long Beach
Department: Computer Engineering and Computer Science
School Location: United States -- California
Source: MAI 82/4(E), Masters Abstracts International
Subjects: Computer science, Artificial intelligence, Information science
Keywords: Machine learning, Interpretable time, Time-series data
Publication Number: 27835218
ISBN: 9798684675010
Copyright © 2021 ProQuest LLC. All rights reserved.