Dissertation/Thesis Abstract

A binary classifier for test case feasibility applied to automatically generated tests of event-driven software
by Robbins, Bryan Thomas, III, Ph.D., University of Maryland, College Park, 2016, 135; 10128641
Abstract (Summary)

Testing modern software applications, such as those driven by graphical user interfaces (GUIs) or built on event-driven architectures more generally, requires careful attention to context. Model-based testing (MBT) approaches first acquire a model of an application, then use the model to construct test cases covering relevant contexts. A major shortcoming of state-of-the-art automated model-based testing is that many test cases proposed by the model are not actually executable. These infeasible test cases threaten the integrity of the entire model-based suite and undermine whatever context coverage the suite aims to provide.

In this research, I develop and evaluate a novel approach for classifying the feasibility of test cases. I identify a set of pertinent features for the classifier, and develop novel methods for extracting these features from the outputs of MBT tools. I use a supervised logistic regression approach to obtain a model of test case feasibility from a randomly selected training suite of test cases. I evaluate this approach with a set of experiments.
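The following is a minimal sketch of the supervised setup described above, not the thesis's actual implementation: a logistic regression classifier trained on a randomly selected training suite and evaluated on held-out test cases. The feature matrix, labels, and split fraction here are placeholder assumptions; in the study, features are extracted from MBT tool outputs and labels come from attempting to execute the tests.

```python
# Sketch only: placeholder data stands in for extracted MBT features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# X: one row per test case, one column per extracted feature
# y: 1 = feasible, 0 = infeasible (labels obtained by running the tests)
rng = np.random.default_rng(0)
X = rng.random((500, 20))          # placeholder feature matrix
y = rng.integers(0, 2, size=500)   # placeholder feasibility labels

# Randomly selected training suite, as in the abstract
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.8, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Report overall error, false positive, and false negative rates
tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
print("overall error:", (fp + fn) / (tn + fp + fn + tp))
print("false positive rate:", fp / (fp + tn))
print("false negative rate:", fn / (fn + tp))
```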

The outcomes of this investigation are as follows: I confirm that infeasibility is prevalent in MBT, even for test suites designed to cover a relatively small number of unique contexts. I confirm that the frequency of infeasibility varies widely across applications. I develop and train a binary classifier for feasibility with average overall error, false positive, and false negative rates under 5%. I find that unique event IDs are key features of the feasibility classifier, while model-specific event types are not. I construct three types of features from the event IDs associated with test cases, and evaluate the relative effectiveness of each within the classifier.
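To make the event-ID feature construction concrete, here is a hypothetical illustration of deriving classifier features from the sequence of event IDs in a test case. The abstract does not name the three feature types it evaluates; the three shown here (event presence, event count, and consecutive event pairs) are an assumption for illustration only.

```python
# Hypothetical feature extraction from a test case's event-ID sequence.
from collections import Counter
from itertools import pairwise  # requires Python 3.10+

def event_id_features(test_case: list[str]) -> dict[str, float]:
    feats: dict[str, float] = {}
    counts = Counter(test_case)
    for event_id, n in counts.items():
        feats[f"has:{event_id}"] = 1.0         # presence of the event
        feats[f"count:{event_id}"] = float(n)  # occurrence count
    for a, b in pairwise(test_case):
        feats[f"pair:{a}>{b}"] = 1.0           # consecutive event pair
    return feats

# Example: a test case expressed as a sequence of unique event IDs
print(event_id_features(["e17", "e4", "e17", "e9"]))
```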

To support this study, I also develop a number of tools and infrastructure components for scalable execution of automated jobs, which use state-of-the-art container and continuous integration technologies to enable parallel test execution and the persistence of all experimental artifacts.
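As a rough sketch of container-based parallel test execution of the kind described above: the image name "gui-test-runner", the artifact paths, and the one-container-per-test design below are all assumptions, since the abstract does not specify the actual infrastructure.

```python
# Sketch: run each test case in its own disposable container, in parallel.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_test(test_id: str) -> int:
    # Artifacts are written to a mounted volume so they persist after
    # the container ("gui-test-runner" is a hypothetical image) exits.
    result = subprocess.run(
        ["docker", "run", "--rm",
         "-v", "/data/artifacts:/artifacts",
         "gui-test-runner", "--test-id", test_id],
        capture_output=True)
    return result.returncode

test_ids = [f"tc-{i}" for i in range(100)]  # placeholder test case IDs
with ThreadPoolExecutor(max_workers=8) as pool:
    exit_codes = list(pool.map(run_test, test_ids))
print(sum(c == 0 for c in exit_codes), "of", len(test_ids), "tests passed")
```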

Indexing (document details)
Advisor: Memon, Atif
Committee: Daume, Hal, Goodman, Jordan, Reggia, James, Sussman, Alan
School: University of Maryland, College Park
Department: Computer Science
School Location: United States -- Maryland
Source: DAI-B 77/10(E), Dissertation Abstracts International
Source Type: DISSERTATION
Subjects: Computer science
Keywords: Automation, Graphical user interfaces, Logistic regression, Software engineering, Software testing
Publication Number: 10128641
ISBN: 9781339866420