Northwestern Polytechnical University
With the wide application of deep neural network models in various computer vision tasks, adversarial example generation strategies aimed at deeply probing model security have proliferated. However, existing adversarial training defense models, which rely on single or limited types of attacks under a one-time learning process, struggle to adapt to the dynamic and evolving nature of attack methods. Therefore, to achieve sustained defense performance improvements for models in long-term applications, we propose a novel Sustainable Self-Evolution Adversarial Training (SSEAT) framework. Specifically, we introduce a continual adversarial defense pipeline that learns from various kinds of adversarial examples across multiple stages. In addition, to address the catastrophic forgetting caused by continually learning from novel attacks, we propose an adversarial data replay module that selects diverse and representative relearning data. Furthermore, we design a consistency regularization strategy that encourages the current defense model to learn more from previously trained ones, guiding it to retain more past knowledge and maintain accuracy on clean samples. Extensive experiments verify the efficacy of the proposed SSEAT defense method, which achieves superior defense performance and classification accuracy compared to competitors.
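To make the multi-stage training loop concrete, the sketch below shows one plausible way to implement a single continual defense stage in PyTorch: new adversarial examples are generated on the fly (PGD is used here only as a stand-in for whatever attack arrives at that stage), mixed with replayed examples from earlier stages, and regularized toward the previous stage's predictions with a KL term. All names (`pgd_attack`, `train_stage`, `replay_data`, the weight `lam`) are illustrative assumptions, not the released SSEAT code.

```python
import copy
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """L_inf PGD, used here as a stand-in for the attack arriving at a given stage."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
    return x_adv


def train_stage(model, prev_model, loader, replay_data, optimizer, lam=1.0):
    """One continual defense stage: adversarial training on the new attack,
    replay of stored past adversarial examples, and a consistency term that
    keeps the current model close to the previous stage's predictions."""
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)
        if replay_data is not None:                      # adversarial data replay
            x_r, y_r = replay_data
            x_adv, y = torch.cat([x_adv, x_r]), torch.cat([y, y_r])
        logits = model(x_adv)
        loss = F.cross_entropy(logits, y)
        if prev_model is not None:                       # consistency regularization
            with torch.no_grad():
                prev_logits = prev_model(x_adv)
            loss = loss + lam * F.kl_div(
                F.log_softmax(logits, dim=1),
                F.softmax(prev_logits, dim=1),
                reduction="batchmean",
            )
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return copy.deepcopy(model).eval()                   # frozen copy for the next stage
```

In this reading, `lam` trades off plasticity on the newly arrived attack against stability toward earlier stages: a larger consistency weight preserves more past knowledge and clean accuracy at some cost to adaptation speed.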
When confronted with continually generated new adversarial examples in complex, long-term multimedia applications, existing adversarial training methods struggle to adapt to iteratively updated attacks. In contrast, our SSEAT model achieves sustainable defense performance improvements by continuously absorbing new adversarial knowledge.
A conceptual overview of SSEAT.
Existing adversarial training models struggle to resist diverse adversarial examples. It is essential for deep models to achieve sustainable improvement in defense performance for long-term application scenarios.
(a) Illustration of our Continual Adversarial Defense (CAD) pipeline. CAD helps the model continually learn from new kinds of attacks across multiple stages. (b) Illustration of our Adversarial Data Replay (ADR) module. ADR guides the model in selecting diverse and representative replay data to alleviate catastrophic forgetting. (c) Illustration of our Consistency Regularization Strategy (CRS) component. CRS encourages the model to learn more from historically trained models to maintain classification accuracy.
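As one concrete reading of the replay selection in (b), the snippet below keeps, for each class, adversarial examples that span the model's full confidence range instead of only the easiest ones. This is a heuristic sketch: the function name `select_replay_data`, the `per_class` budget, and the confidence-spread criterion are assumptions for illustration rather than the exact ADR rule.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def select_replay_data(model, x_adv, y, per_class=20):
    """Class-balanced, diversity-oriented replay selection (illustrative heuristic):
    within each class, keep samples spread across the model's confidence range."""
    model.eval()
    probs = F.softmax(model(x_adv), dim=1)
    conf = probs.gather(1, y.unsqueeze(1)).squeeze(1)   # confidence on the true label
    kept = []
    for c in y.unique():
        idx = (y == c).nonzero(as_tuple=True)[0]
        order = idx[conf[idx].argsort()]                # hardest -> easiest
        step = max(len(order) // per_class, 1)
        kept.append(order[::step][:per_class])          # evenly spaced picks
    kept = torch.cat(kept)
    return x_adv[kept], y[kept]
```

Selecting across the difficulty spectrum, rather than only high-loss or only high-confidence samples, is one simple way to keep the replay buffer both representative and diverse.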
SSEAT framework
Sustainable Self-Evolution Adversarial Training
Wenxuan Wang, Chenglei Wang, Huihui Qi, Menghao Ye, Xuelin Qian*, Peng Wang, Yanning Zhang [Paper] [Code]
Visualization results
Acknowledgements
The website is modified from this template.