
Dissertation/Thesis Abstract

DataVenom: A Genetic Algorithm Based Poisoning Attack Technique against Numeric Datasets
by Sloan, Joshua James, M.S., University of Wyoming, 2020, 93; 28257614
Abstract (Summary)

As an increasing number of systems rely on machine learning for producing automated decisions, attackers gain incentives to manipulate results generated by predictive models. One method to carry out such an attack on a machine learning algorithm is through the use of poisoning attacks, a method in which an attacker deliberately influences a model's training data to manipulate the results of the predictive model. As our scope of knowledge in the machine learning domain expands, we have access to a considerable number of powerful computational tools and algorithms to aid in the overarching goal of improving automated model prediction capabilities. One example that has greatly optimized search methods can be seen through genetic algorithms, a class of algorithms which serves as an interdisciplinary tool for complex optimization problems by modeling evolutionary strategy. However, while these methods are available to the public for benevolent use, malicious practices are just as easily enabled through them. Given that the focus is primarily placed on the accuracy and run-time of such models, existing countermeasures against direct attacks on the models themselves are often minimal or lacking in practical feasibility. This research explores an adversarial approach to highlight security deficiencies in the poisoning defense literature by providing a novel poisoning attack method that incorporates the strengths of a genetic algorithm's search capabilities to generate substantial volumes of poisoned data for attacks in near real-time, while also avoiding detection from existing poisoning defense methods. Thus, this attack framework endeavors to highlight a present danger in machine learning, incentivizing further research into defense mechanisms against such attacks.
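To illustrate the general idea the abstract describes, the following is a minimal sketch of how a genetic algorithm can evolve a poisoned data point for a numeric dataset. The fitness function (shifting the dataset mean while staying inside a fixed value range that stands in for an outlier-detection defense), the dataset, and all parameters are illustrative assumptions, not the dissertation's actual method.

```python
import random

random.seed(0)

# Toy clean dataset and a range-based "defense": points outside
# [LOW, HIGH] are assumed to be flagged and rejected. (Assumption
# for illustration only.)
CLEAN_DATA = [1.0, 1.2, 0.9, 1.1, 1.05, 0.95]
LOW, HIGH = 0.0, 2.0
MUT_STEP = 0.1  # mutation step size

def fitness(point):
    """Reward candidate points that shift the mean yet evade the defense."""
    if not (LOW <= point <= HIGH):
        return -1.0  # detected by the toy defense: worthless candidate
    poisoned_mean = (sum(CLEAN_DATA) + point) / (len(CLEAN_DATA) + 1)
    clean_mean = sum(CLEAN_DATA) / len(CLEAN_DATA)
    return abs(poisoned_mean - clean_mean)

def evolve(pop_size=20, generations=50):
    """Standard GA loop: selection, crossover, mutation."""
    pop = [random.uniform(LOW, HIGH) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2                 # crossover: averaging
            child += random.uniform(-MUT_STEP, MUT_STEP)  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the fitness function penalizes out-of-range candidates, the population converges toward the in-range value that shifts the mean the most, which is the sense in which the attack "avoids detection" from the stand-in defense. A full attack of the kind the abstract describes would replace this toy objective with the target model's loss and a real defense's detection criterion.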

Indexing (document details)
Advisor: Borowczak, Mike
Committee: Shader, Bryan L., Shukla, Diksha
School: University of Wyoming
Department: Computer Science
School Location: United States -- Wyoming
Source: MAI 82/7(E), Masters Abstracts International
Subjects: Computer science, Artificial intelligence
Keywords: Adversarial machine learning, Genetic algorithm, Numeric data, Poisoning attack
Publication Number: 28257614
ISBN: 9798557059961
Copyright © 2021 ProQuest LLC. All rights reserved.