Dissertation/Thesis Abstract

Towards the Exploration and Improvement of Generative Adversarial Attacks
by Leone, Stephen, M.E., The Cooper Union for the Advancement of Science and Art, 2020, 130; 27994576
Abstract (Summary)

Adversarial examples represent a major security threat for emerging technologies that use machine learning models to make important decisions. Adversarial examples are typically crafted by performing gradient descent on the target model, but recently, researchers have begun exploring methods that generate adversarial examples without requiring access to the target model at decision time. This thesis investigates the role of generative attacks in the machine learning security threat landscape and the improvements that can be made to existing attacks. By evaluating attacks and defenses on image classification problems, we find that generative attackers are most relevant in black-box settings where query access is given to the attacker before test time. We evaluate generative methods in these settings as standalone attacks, and we give examples of how generative attacks can reduce the number of queries or the amount of time required to produce a successful adversarial example by narrowing the search space of existing optimization algorithms.
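To illustrate the gradient-based crafting the abstract refers to, the following is a minimal sketch (not from the thesis) of the fast gradient sign method, a standard white-box attack that perturbs an input along the sign of the loss gradient. It assumes a PyTorch classifier; the names fgsm_attack, model, x, y, and epsilon are illustrative, not taken from this work.

    # Illustrative sketch of white-box, gradient-based adversarial crafting
    # (fast gradient sign method). Assumes a PyTorch image classifier.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Craft an adversarial example from input x with true label y."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step in the direction that increases the loss, then clamp to the
        # valid pixel range.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

The black-box generative attacks the thesis studies differ from this sketch in that the attacker cannot compute such gradients directly and must instead rely on queries made before test time.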

Indexing (document details)
Advisor: Fontaine, Fred L.
Committee: Shoop, Barry L.
School: The Cooper Union for the Advancement of Science and Art
Department: Electrical Engineering
School Location: United States -- New York
Source: MAI 82/4(E), Masters Abstracts International
Source Type: DISSERTATION
Subjects: Artificial intelligence, Electrical engineering, Computer science
Keywords: Adversarial Examples, Computer Security, Deep Learning, Generative Attacks, Machine Learning
Publication Number: 27994576
ISBN: 9798684653148
Copyright © 2021 ProQuest LLC. All rights reserved.