
Adversarial Attack and Defense on Point Sets

(arXiv:1902.10899)
Published Feb 28, 2019 in cs.CV, cs.AI, cs.CR, and cs.LG

Abstract

The emergence of 3D point cloud data in safety-critical vision tasks (e.g., ADAS) urges researchers to pay more attention to the robustness of 3D representations and deep networks. To this end, we develop an attack and defense scheme dedicated to 3D point cloud data, aimed both at preventing 3D point clouds from being manipulated and at pursuing noise-tolerant 3D representations. A set of novel 3D point cloud attack operations is proposed via pointwise gradient perturbation and adversarial point attachment/detachment. We then develop a flexible perturbation-measurement scheme for 3D point cloud data to detect potential attack data or noisy sensing data. Notably, the proposed defense methods are effective even at detecting adversarial point clouds generated by a proof-of-concept attack that directly targets the defense. We also address the transferability of adversarial attacks between several point cloud networks and propose a momentum-enhanced pointwise gradient to improve attack transferability. We further analyze the transferability from adversarial point clouds to grid CNNs and vice versa. Extensive experimental results on common point cloud benchmarks demonstrate the validity of the proposed 3D attack and defense framework.
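To make the attack idea concrete, below is a minimal PyTorch-style sketch of an iterative pointwise gradient perturbation with momentum accumulation, in the spirit of the attack and the momentum-enhanced gradient mentioned in the abstract. The function name, hyperparameters, gradient normalization, and the per-coordinate clipping are illustrative assumptions, not the paper's exact formulation; the point attachment/detachment operations and the perturbation-measurement defense are not shown.

```python
import torch
import torch.nn.functional as F

def pointwise_gradient_attack(model, points, labels,
                              epsilon=0.02, steps=10, momentum=1.0):
    """Sketch of an iterative pointwise gradient attack on point clouds.

    points: (B, N, 3) tensor of point clouds; labels: (B,) ground-truth classes.
    All hyperparameters here are illustrative, not the paper's settings.
    """
    adv = points.clone().detach()
    grad_buffer = torch.zeros_like(adv)   # momentum accumulator
    step_size = epsilon / steps

    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]

        # Normalize per point cloud so momentum accumulation is scale-free,
        # then accumulate the gradient direction across iterations.
        grad = grad / (grad.abs().mean(dim=(1, 2), keepdim=True) + 1e-12)
        grad_buffer = momentum * grad_buffer + grad

        # Perturb every point along the accumulated gradient direction and
        # keep the total displacement within an epsilon box around the original.
        adv = adv.detach() + step_size * grad_buffer.sign()
        adv = points + (adv - points).clamp(-epsilon, epsilon)

    return adv.detach()
```

With momentum set to 0 this reduces to a plain iterative pointwise gradient perturbation; a positive momentum value stabilizes the update direction across steps, which is the general mechanism the abstract credits for improved attack transferability between point cloud networks.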
