Dissertation/Thesis Abstract

Background and Occlusion Defenses Against Adversarial Examples and Adversarial Patches
by McCoyd, Michael J., Ph.D., University of California, Berkeley, 2020, 75; 28090541
Abstract (Summary)

Machine learning is increasingly used to make sense of our world, in areas ranging from spam detection and recommendation systems to image classification. In each of these areas, however, it is vulnerable to adversarial manipulation. Within adversarial machine learning, we examine attacks on and defenses for image classification. We construct spoofs of face detection, and we create defenses against two attacks on image classification: standard adversarial examples and adversarial patch attacks.

We examine the Viola-Jones 2D face detection algorithm to study whether images can be created that the algorithm detects as faces but that humans do not notice as faces. We show that it is possible to construct images that Viola-Jones recognizes as containing faces yet that no human would consider faces. Moreover, we show that such images continue to fool face detection even after they are printed and then photographed.
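
For context on what "detects as a face" means operationally, the following is a minimal sketch that queries OpenCV's Haar-cascade implementation of Viola-Jones on a candidate image. The detector parameters, helper function, and file name are illustrative assumptions; the construction of the spoof images themselves is not shown here and is not part of this sketch.

```python
import cv2

# OpenCV ships Viola-Jones-style Haar cascades; this is the stock frontal-face model.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detected_as_face(image_path):
    """Return True if the Viola-Jones detector reports at least one face."""
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if image is None:
        raise FileNotFoundError(image_path)
    faces = detector.detectMultiScale(image, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

# Example: check whether a crafted, non-face-looking image is still detected.
# print(detected_as_face("candidate_spoof.png"))  # hypothetical file name
```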

Adversarial examples are crafted attacks against deep neural network classification of images: the attack changes the classifier's output for an image without changing how humans classify it.
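
As a concrete illustration of this class of attack, here is a minimal PyTorch sketch of the standard fast gradient sign method (FGSM). It is a generic textbook attack, not the specific attacks studied in the dissertation, and the function name and epsilon value are illustrative.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image` (pixels assumed in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction of the loss gradient's sign, then clip to valid pixels.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```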

We propose a defense of expanding the training set with a single, large, and diverse class of background images, striving to ‘fill’ the space around the classification boundary. We find that our defense aids the detection of simple attacks on EMNIST, but not of advanced attacks. We discuss several limitations of our examination.
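
A minimal PyTorch sketch of the general idea, under the assumption that the background images are folded in as one extra class label; the dataset wrapper, label layout, and rejection rule below are illustrative, not the dissertation's actual implementation.

```python
from torch.utils.data import ConcatDataset, DataLoader, Dataset

class BackgroundDataset(Dataset):
    """Assigns a single extra 'background' label to a large, diverse image pool."""

    def __init__(self, images, background_label):
        self.images = images                      # any indexable image collection
        self.background_label = background_label  # e.g. K for a K-class task

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        return self.images[idx], self.background_label

# Illustrative wiring for a 10-class task (background becomes class index 10):
# train_set = ConcatDataset([emnist_train, BackgroundDataset(background_images, 10)])
# loader = DataLoader(train_set, batch_size=128, shuffle=True)
# At test time, inputs predicted as class 10 are flagged as suspected attacks.
```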

An attacker limited to changing just a small patch of an image can still deceive deep learning image classification. We propose a defense against such patch attacks based on multiple partial occlusions of the image, chosen so that a few of the occlusions each completely hide the patch. We provide certified accuracy for CIFAR-10, Fashion-MNIST, and MNIST, with a tunable tradeoff between the false-positive rate and certified accuracy. For CIFAR-10 and a 5 × 5 patch, we provide certified accuracy for 43.8% of images, at a cost in clean-image accuracy of only 1.6% relative to the architecture we defend, or 0.1% relative to our own training of that architecture, with a 0.2% false-positive rate.
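
A hedged PyTorch sketch of the occlusion idea: classify many partially occluded copies of the input, with the mask placed so that a small patch is fully hidden by some of the occlusions, and treat the prediction as certified only when the occluded votes agree. The mask size, stride, zero-fill masking, and unanimity rule are illustrative choices, not the dissertation's exact certification procedure.

```python
import torch

def occlusion_votes(model, image, mask_size=10, stride=5):
    """Slide a square occluding mask over one image (shape 1xCxHxW) and collect
    the model's prediction for each occluded copy."""
    _, _, height, width = image.shape
    votes = []
    with torch.no_grad():
        for top in range(0, height - mask_size + 1, stride):
            for left in range(0, width - mask_size + 1, stride):
                occluded = image.clone()
                occluded[:, :, top:top + mask_size, left:left + mask_size] = 0.0
                votes.append(model(occluded).argmax(dim=1).item())
    return votes

def certified_prediction(model, image):
    """Return (label, certified), where certified means every occluded vote agrees."""
    votes = occlusion_votes(model, image)
    majority = max(set(votes), key=votes.count)
    return majority, votes.count(majority) == len(votes)
```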

Indexing (document details)
Advisor: Wagner, David A.
Committee: Joseph, Anthony D., Bamman, David
School: University of California, Berkeley
Department: Electrical Engineering & Computer Sciences
School Location: United States -- California
Source: DAI-A 82/5(E), Dissertation Abstracts International
Source Type: DISSERTATION
Subjects: Computer science, Information Technology, Management, Artificial intelligence
Keywords: Adversarial examples, Adversarial machine learning, Adversarial patch, Image classification, Partial occlusions ensemble defense, Machine learning, Spam detection
Publication Number: 28090541
ISBN: 9798691237362