Detailed Information

Improving Adversarial Robustness Using Pixel Intensity Encryption : 픽셀 강도 암호화를 통한 적대적 강건성 강화

DC Field: Value
dc.contributor.advisor: Wonjong Rhee
dc.contributor.author: 이윤아 (Yoonah Lee)
dc.date.accessioned: 2022-04-05T06:46:50Z
dc.date.available: 2022-04-05T06:46:50Z
dc.date.issued: 2021
dc.identifier.other: 000000166852
dc.identifier.uri: https://hdl.handle.net/10371/177827
dc.identifier.uri: https://dcollection.snu.ac.kr/common/orgView/000000166852 (ko_KR)
dc.description: Thesis (Master's) -- Seoul National University Graduate School: Graduate School of Convergence Science and Technology, Department of Intelligence and Information Convergence, 2021.8. Yoonah Lee.
dc.description.abstract: Neural networks are known to be vulnerable to gradient-based adversarial examples, which are crafted by leveraging input gradients to induce misclassification. Because of these attacks, adversarial defense has become a topic of significant interest in recent years. The most empirically successful approach to defending against such adversarial examples is adversarial training, which incorporates a strong self-attack during training. However, this approach is computationally expensive and therefore hard to scale. As a result, a series of studies has been undertaken to develop gradient masking methods. One such method hides the gradient using encryption, achieved by transforming the locations of pixels. However, no studies have examined how pixel-intensity encryption could serve as an adversarial defense.
This study proposes a new defense method that uses pixel-intensity encryption to defend against gradient-based attacks. Furthermore, a new adaptive attack setup for encryption methods is presented to evaluate their effectiveness as an adversarial defense. The experiments show that the proposed defense is more robust under adaptive attack than the defenses of previous studies. Moreover, the correlation coefficient of an image is found to play a key role in the learnability of the model.
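The affine encryption named in the table of contents (§4.1.1) can be sketched as a per-pixel affine cipher modulo 256, one common form of pixel-intensity encryption. This is a minimal illustrative sketch, not the thesis's actual scheme; the key parameters `a` and `b` are arbitrary, with `a` chosen odd so it is coprime with 256 and the mapping stays invertible.

```python
import numpy as np

def affine_encrypt(img, a=7, b=13):
    # Per-pixel affine cipher: y = (a * x + b) mod 256.
    # a must be coprime with 256 (i.e. odd) for the map to be invertible.
    return (a * img.astype(np.int64) + b) % 256

def affine_decrypt(enc, a=7, b=13):
    # Invert via the modular inverse of a modulo 256.
    a_inv = pow(a, -1, 256)
    return (a_inv * (enc.astype(np.int64) - b)) % 256

img = np.random.randint(0, 256, size=(4, 4))
assert np.array_equal(affine_decrypt(affine_encrypt(img)), img)
```

In the defense setting described by the abstract, such a fixed secret mapping scrambles pixel intensities before the network sees them, so an attacker without the key cannot compute useful input gradients directly.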
dc.description.tableofcontents:
I. Introduction 1
1.1 Terminology 3
II. Related Work 5
2.1 Gradient-based Attack 5
2.2 Gradient Masking Defense
2.2.1 Obfuscated Gradients 6
2.2.2 Adversarial Encryption Defense 8
III. Research Questions 10
IV. Proposed Method 11
4.1 Pixel-Intensity Encryption
4.1.1 Affine Encryption 12
4.1.2 Pixel-Intensity Shuffling 13
4.2 Upgraded Encryption 15
4.3 Adaptive Attack Framework for Adversarial Encryption Defense 18
V. Experiments 21
5.1 Setup 22
5.2 Learnability 23
5.2.1 Experiment Design 23
5.2.2 Experiment Results 27
5.3 Adversarial Robustness
5.3.1 Experiment Design 34
5.3.2 Experiment Results 34
VI. Discussion 39
6.1 Discussion 39
6.2 Limitations and Future Work 41
VII. Conclusion 43
References 45
dc.format.extent: VII, 47
dc.language.iso: eng
dc.publisher: 서울대학교 대학원 (Seoul National University Graduate School)
dc.subject: Adversarial Examples
dc.subject: adversarial attack
dc.subject: adversarial defense
dc.subject: perceptual image encryption
dc.subject.ddc: 006.3
dc.title: Improving Adversarial Robustness Using Pixel Intensity Encryption
dc.title.alternative: 픽셀 강도 암호화를 통한 적대적 강건성 강화
dc.type: Thesis
dc.type: Dissertation
dc.contributor.AlternativeAuthor: Yoonah Lee
dc.contributor.department: 융합과학기술대학원 지능정보융합학과 (Graduate School of Convergence Science and Technology, Department of Intelligence and Information Convergence)
dc.description.degree: 석사 (Master's)
dc.date.awarded: 2021-08
dc.identifier.uci: I804:11032-000000166852
dc.identifier.holdings: 000000000046▲000000000053▲000000166852▲