arXiv:1803.05787

Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples

Published on Mar 14, 2018
AI-generated summary

A JPEG-based defensive compression framework using feature distillation effectively defends against adversarial examples while maintaining benign-image classification accuracy through frequency-domain filtering and DNN-oriented quantization refinement.

Abstract

Image compression-based approaches for defending against adversarial-example attacks, which threaten the safe use of deep neural networks (DNNs), have been investigated recently. However, prior works mainly rely on directly tuning parameters such as the compression rate to blindly reduce image features, and thus guarantee neither defense efficiency (i.e., accuracy on polluted images) nor classification accuracy on benign images after the defense is applied. To overcome these limitations, we propose a JPEG-based defensive compression framework, namely "feature distillation", to effectively rectify adversarial examples without impacting classification accuracy on benign data. Our framework significantly escalates the defense efficiency with marginal accuracy reduction using a two-step method: First, we maximize the filtering of malicious features from adversarial input perturbations by developing defensive quantization in the frequency domain of JPEG compression/decompression, guided by a semi-analytical method; Second, we suppress the distortion of benign features to restore classification accuracy through a DNN-oriented quantization refinement process. Our experimental results show that the proposed "feature distillation" can significantly surpass the latest input-transformation-based mitigations such as Quilting and TV Minimization in three aspects: defense efficiency (improving classification accuracy from ~20% to ~90% on adversarial examples), accuracy of benign images after defense (≤1% accuracy degradation), and processing time per image (~259× speedup). Moreover, our solution also provides the best defense efficiency (~60% accuracy) against the recent adaptive attack with the least accuracy reduction (~1%) on benign images when compared with other input-transformation-based defense methods.
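The first step above (defensive quantization in the JPEG frequency domain) can be sketched as blockwise 8×8 DCT quantization with coarser steps on higher-frequency bands, where adversarial perturbations are assumed to concentrate. This is a minimal NumPy sketch under illustrative assumptions: the two-band step table (`q_low`, `q_high`, `band`) is hypothetical, whereas the paper derives its quantization steps semi-analytically and then refines them per DNN.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (the per-block transform JPEG uses)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)  # DC row has a different normalization
    return C

def defensive_quantize(img, q_low=20, q_high=50, band=3):
    """Quantize/dequantize each 8x8 DCT block of a grayscale image.

    Coefficients with frequency index u + v >= `band` get the coarser
    step `q_high`, crushing the high-frequency content where adversarial
    perturbations are assumed to live. Steps here are illustrative only.
    """
    C = dct_matrix(8)
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    steps = np.where(u + v < band, q_low, q_high).astype(np.float64)
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(0, h - h % 8, 8):
        for j in range(0, w - w % 8, 8):
            block = img[i:i + 8, j:j + 8].astype(np.float64) - 128.0
            coef = C @ block @ C.T                          # forward 2-D DCT
            coef = np.round(coef / steps) * steps           # quantize + dequantize
            out[i:i + 8, j:j + 8] = C.T @ coef @ C + 128.0  # inverse 2-D DCT
    return np.clip(np.round(out), 0, 255).astype(np.uint8)
```

The paper's second step would then tune the per-coefficient steps so that frequencies the target DNN relies on for benign classification are quantized more finely, which this sketch does not attempt.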

