Abstract

In this work, we study policy poisoning through state manipulation, also known as sensor spoofing, focusing on an agent that forms a control policy through batch learning in a linear-quadratic (LQ) system. In this setting, an attacker aims to trick the learner into implementing a targeted malicious policy by manipulating the batch data before the agent begins learning. The attack model is crafted to carry out the poisoning strategically, modifying the batch data as little as possible to avoid detection by the learner. We establish an optimization framework to guide the design of such policy poisoning attacks. The optimization problem contains bilinear constraints, which make it nonconvex and call for a computationally efficient solution method. We therefore develop an iterative scheme based on the Alternating Direction Method of Multipliers (ADMM) that returns approximately optimal solutions. Several case studies demonstrate the effectiveness of the algorithm in carrying out the sensor-based attack on the batch-learning agent in LQ control systems.
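To make the ADMM idea concrete, here is a minimal Python sketch, not the paper's formulation: it applies ADMM-style alternating minimization to a toy problem with a single bilinear constraint, in which a data-like matrix X must stay close to its clean value X0 while the bilinear product X y hits an attacker-chosen target b. All symbols (X0, y, b, rho) and the objective are illustrative assumptions; in the paper the constraints instead couple the poisoned batch to the learner's estimated LQ model and the target policy.

```python
# Illustrative sketch only (assumed toy problem, not the paper's model):
#
#   minimize   0.5 * ||X - X0||_F^2      (perturb the clean data X0 as little as possible)
#   subject to X @ y = b                 (bilinear coupling between the two variables X, y)
#
# ADMM-style splitting: with y fixed the constraint is linear in X, and vice
# versa, so each subproblem of the augmented Lagrangian is solvable in closed form.

import numpy as np

def admm_bilinear(X0, b, rho=1.0, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X0.shape
    X = X0.copy()
    y = rng.standard_normal(n)
    lam = np.zeros(m)  # scaled dual variable for the constraint X @ y = b

    for _ in range(iters):
        # X-step: min_X 0.5||X - X0||_F^2 + (rho/2)||X y - b + lam||^2.
        # Row-wise optimality gives (I + rho * y y^T) x_i = x0_i + rho*(b - lam)_i * y.
        M = np.eye(n) + rho * np.outer(y, y)
        rhs = X0 + rho * np.outer(b - lam, y)
        X = np.linalg.solve(M, rhs.T).T

        # y-step: min_y (rho/2)||X y - b + lam||^2 is an ordinary least-squares problem.
        y, *_ = np.linalg.lstsq(X, b - lam, rcond=None)

        # Dual ascent on the bilinear coupling constraint.
        lam = lam + X @ y - b

    return X, y

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X0 = rng.standard_normal((8, 4))  # stand-in for the clean batch data
    b = rng.standard_normal(8)        # stand-in for the attacker's target
    X, y = admm_bilinear(X0, b)
    print("constraint residual:", np.linalg.norm(X @ y - b))
    print("perturbation size:  ", np.linalg.norm(X - X0))
```

Because the coupling constraint is bilinear, the overall problem is nonconvex and ADMM carries no general convergence guarantee here; consistent with the abstract, such a scheme should be read as returning approximately optimal solutions rather than certified optima.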
