As an increasing number of systems rely on machine learning to produce automated decisions, attackers gain incentives to manipulate the results generated by predictive models. One way to attack a machine learning algorithm is through a poisoning attack, in which an attacker deliberately influences a model’s training data in order to manipulate the model’s predictions. As our knowledge of the machine learning domain expands, we gain access to a considerable number of powerful computational tools and algorithms that serve the overarching goal of improving automated prediction. One example that has greatly improved search methods is the genetic algorithm, a class of algorithms that serves as an interdisciplinary tool for complex optimization problems by modeling evolutionary strategies. However, while these methods are publicly available for benevolent use, they just as easily enable malicious practices. Because the focus is placed primarily on the accuracy and run-time of such models, existing countermeasures against direct attacks on the models themselves are often minimal or lack practical feasibility. This research takes an adversarial approach to highlight security deficiencies in the poisoning-defense literature by presenting a novel poisoning attack method that harnesses a genetic algorithm’s search capabilities to generate substantial volumes of poisoned data in near real time while evading existing poisoning defenses. This attack framework thus highlights a present danger in machine learning and motivates further research into defense mechanisms against such attacks.
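The abstract does not describe the thesis's concrete attack, but the core mechanism it names, a genetic algorithm searching a space of candidate data points, can be sketched generically. The following toy Python example evolves 2-D points toward a hypothetical target region; the fitness function, bounds, and GA parameters are illustrative assumptions, not the thesis's method (an actual poisoning attack would score candidates by their effect on a victim model and by detectability).

```python
import random

# Toy illustration only: evolve 2-D candidate points toward a hypothetical
# target region. All parameters here are illustrative assumptions.
TARGET = (3.0, 4.0)  # hypothetical region the attacker wants points near

def fitness(point):
    """Higher is better: negative squared distance to the target."""
    return -((point[0] - TARGET[0]) ** 2 + (point[1] - TARGET[1]) ** 2)

def crossover(a, b):
    """Uniform crossover: each coordinate taken from either parent."""
    return tuple(random.choice(pair) for pair in zip(a, b))

def mutate(point, scale=0.5):
    """Gaussian perturbation of each coordinate."""
    return tuple(x + random.gauss(0.0, scale) for x in point)

def evolve(pop_size=30, generations=50, elite=5, seed=0):
    random.seed(seed)
    pop = [(random.uniform(-10, 10), random.uniform(-10, 10))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]  # elitism: the fittest survive unchanged
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - elite)]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the elite individuals are carried over unmutated, the best fitness in the population never decreases between generations, which is what makes this simple scheme converge reliably on smooth objectives.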
|Committee:||Shader, Bryan L., Shukla, Diksha|
|School:||University of Wyoming|
|School Location:||United States -- Wyoming|
|Source:||MAI 82/7(E), Masters Abstracts International|
|Subjects:||Computer science, Artificial intelligence|
|Keywords:||Adversarial machine learning, Genetic algorithm, Numeric data, Poisoning attack|
Copyright in each Dissertation and Thesis is retained by the author. All Rights Reserved