
Abstract

Deep neural networks (DNNs) with state-of-the-art performance have emerged as a viable and lucrative business service. However, that impressive performance requires a large amount of computational resources, which comes at a high cost to model creators. The need to protect DNN models from illegal reproduction and distribution is now salient. Recently, trigger-set watermarking, which removes the white-box restriction by adversarially training a model to assign pre-defined (incorrect) labels to crafted inputs and subsequently using those inputs to verify model authenticity, has become a main approach to DNN ownership verification. While these methods have successfully demonstrated robustness against removal attacks, few are effective against tampering attacks, in which a competitor forges a fake watermark to contest the true owner's claim. In this paper, we put forth a new trigger-set watermarking framework that embeds a unique Serial Number (unrelated to the original labels) into a deep neural network for model ownership identification; the scheme is both robust to model pruning and resistant to tampering attacks. Experimental results demonstrate that the DNN Serial Number incurs only a slight degradation of the original accuracy and is valid for ownership verification.
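To make the trigger-set idea concrete, here is a minimal, hypothetical PyTorch sketch of the general scheme the abstract describes, not the paper's actual construction: a secret set of crafted inputs is assigned key labels that encode a serial number, the model is fine-tuned to memorize them, and ownership is later verified by checking how many of those labels the model reproduces. The serial number, toy classifier, and training details below are all illustrative assumptions.

```python
# Hypothetical sketch of trigger-set watermarking with a serial number.
# Assumptions: random-noise triggers, a toy MNIST-sized classifier, and
# an 8-digit serial number; the paper's actual method may differ.
import torch
import torch.nn as nn

torch.manual_seed(0)

NUM_CLASSES = 10
SERIAL_NUMBER = [3, 1, 4, 1, 5, 9, 2, 6]  # hypothetical: one digit per trigger input

# Toy classifier standing in for the watermarked DNN.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 64),
    nn.ReLU(),
    nn.Linear(64, NUM_CLASSES),
)

# Secret trigger set: crafted (here, random-noise) images whose key labels
# encode the serial number and are unrelated to any real class semantics.
triggers = torch.rand(len(SERIAL_NUMBER), 1, 28, 28)
trigger_labels = torch.tensor(SERIAL_NUMBER)

# Embedding: train the model on the trigger set (in practice this would be
# mixed with the original training data to preserve main-task accuracy).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(triggers), trigger_labels)
    loss.backward()
    optimizer.step()

# Verification: the claimed owner presents the secret triggers; a high match
# rate against the serial number supports the ownership claim.
with torch.no_grad():
    predictions = model(triggers).argmax(dim=1)
match_rate = (predictions == trigger_labels).float().mean().item()
print(f"recovered serial: {predictions.tolist()}, match rate: {match_rate:.2f}")
```

Encoding the watermark as a serial number with labels unrelated to the original task, rather than reusing ordinary class labels, is what the abstract credits with resisting tampering: a competitor cannot plausibly forge a matching serial after the fact.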
