Persuasion-based Robust Sensor Design Against Attackers with Unknown Control Objectives

(arXiv:1901.10618)
Published Jan 30, 2019 in eess.SY, cs.CR, and cs.SY

Abstract

In this paper, we introduce a robust sensor design framework to provide "persuasion-based" defense in stochastic control systems against an attacker of unknown type, where each type has its own control objective. For effective control, such an attacker's actions depend on its belief about the underlying state of the system. We design a robust "linear-plus-noise" signaling strategy that encodes sensor outputs so as to shape the attacker's belief strategically and, correspondingly, persuade the attacker to take actions that cause minimal damage with respect to the system's objective. The specific model we adopt is a Gauss-Markov process driven by a controller with a (partially) "unknown" malicious/benign control objective. We seek to defend robustly against the worst possible distribution over control objectives, under the solution concept of Stackelberg equilibrium with the sensor as the leader. We show that a necessary and sufficient condition on the covariance matrix of the posterior belief is a certain linear matrix inequality, and we provide a closed-form solution for the associated signaling strategy. This enables us to formulate an equivalent tractable problem, indeed a semi-definite program, to compute the robust sensor design strategies "globally," even though the original optimization problem is non-convex and highly nonlinear. We also extend this result to scenarios where the sensor makes noisy or partial measurements. Finally, we analyze the ensuing performance numerically for various scenarios.
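
To give a concrete feel for the semi-definite-program reduction described in the abstract, here is a minimal Python sketch using cvxpy. Everything in it is an assumption for illustration, not the paper's exact formulation: the weight matrix V, the problem dimensions, a cost that is linear in the attacker's posterior covariance S, and the simplified constraint 0 ⪯ S ⪯ Σ_x standing in for the paper's linear-matrix-inequality characterization of attainable posterior covariances. The paper's actual multi-stage objective and closed-form signaling strategy are more involved.

```python
# Toy single-stage analogue of the SDP reformulation (illustrative only).
# Assumptions (not from the paper): state x ~ N(0, Sigma_x); the sensor's
# cost is linear in the attacker's posterior covariance S, i.e. tr(V @ S);
# and attainable posteriors are abstracted as the LMI 0 <= S <= Sigma_x.
import numpy as np
import cvxpy as cp

n = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
Sigma_x = A @ A.T + np.eye(n)        # prior covariance of the state (PSD)
V = np.diag([1.0, -0.5, 2.0])        # hypothetical cost weight on the posterior

S = cp.Variable((n, n), symmetric=True)
constraints = [S >> 0, Sigma_x - S >> 0]   # LMI on the posterior covariance
prob = cp.Problem(cp.Minimize(cp.trace(V @ S)), constraints)
prob.solve(solver=cp.SCS)
print("optimal value:", prob.value)

# Sanity check on the "linear" part of a linear-plus-noise strategy: a
# noiseless signal y = L.T @ x induces, by standard Gaussian conditioning,
# the posterior covariance
#   Sigma_x - Sigma_x @ L @ inv(L.T @ Sigma_x @ L) @ L.T @ Sigma_x.
# The paper derives which L (plus noise) attains the SDP-optimal S.
L = rng.standard_normal((n, 1))
post = Sigma_x - Sigma_x @ L @ np.linalg.inv(L.T @ Sigma_x @ L) @ L.T @ Sigma_x
print("posterior covariance from a random linear signal:\n", post)
```

The point of the sketch is only that optimizing over posterior covariances subject to LMI constraints is a standard semi-definite program, which is what makes the otherwise non-convex signaling design tractable; the paper's contribution is proving the equivalence and giving the closed-form strategy that realizes the optimal covariance.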