
Two Sides of the Same Coin: White-box and Black-box Attacks for Transfer Learning (2008.11089v1)

Published 25 Aug 2020 in cs.LG and stat.ML

Abstract: Transfer learning has become a common practice for training deep learning models with limited labeled data in a target domain. On the other hand, deep models are vulnerable to adversarial attacks. Although transfer learning has been widely applied, its effect on model robustness is unclear. To investigate this question, we conduct extensive empirical evaluations and show that fine-tuning effectively enhances model robustness under white-box FGSM attacks. We also propose a black-box attack method for transfer learning models that attacks the target model with adversarial examples produced by its source model. To systematically measure the effect of both white-box and black-box attacks, we propose a new metric that evaluates how transferable the adversarial examples produced by a source model are to a target model. Empirical results show that adversarial examples are more transferable when fine-tuning is used than when the two networks are trained independently.
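To make the two attack settings concrete, below is a minimal, self-contained sketch. The `fgsm_attack` function implements the standard FGSM perturbation (x + ε·sign(∇ₓL)) on a toy logistic-regression "model"; the weights and data are illustrative, not from the paper. The paper's proposed transferability metric is not defined in the abstract, so `transfer_rate` shows only a common stand-in: the fraction of source-crafted adversarial examples that also fool the target model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """FGSM on a binary logistic model p = sigmoid(w @ x + b).

    Returns x + eps * sign(dL/dx), where L is binary cross-entropy;
    for this model the input gradient reduces to (p - y) * w.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

def transfer_rate(target_preds_adv, true_labels):
    """Fraction of adversarial examples misclassified by the target model
    (an illustrative transferability measure, not the paper's metric)."""
    target_preds_adv = np.asarray(target_preds_adv)
    true_labels = np.asarray(true_labels)
    return float(np.mean(target_preds_adv != true_labels))

# Toy white-box example: a point correctly classified as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0
x_adv = fgsm_attack(x, y, w, b, eps=0.5)  # each coordinate shifts by +/- eps

# Toy black-box bookkeeping: 3 of 5 source-crafted examples fool the target.
rate = transfer_rate([0, 1, 0, 0, 1], [1, 1, 1, 0, 0])  # -> 0.6
```

In the black-box setting studied by the paper, the adversarial examples would be crafted on the source model (white-box access) and then evaluated on the fine-tuned target model, with the transfer rate summarizing how often they carry over.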

Authors (5)
  1. Yinghua Zhang (8 papers)
  2. Yangqiu Song (196 papers)
  3. Jian Liang (162 papers)
  4. Kun Bai (24 papers)
  5. Qiang Yang (202 papers)
Citations (27)
