
Abstract

In this paper, we propose an effective and easily deployable approach to detecting stealthy sensor attacks in industrial control systems (ICS), where (legacy) control devices critically rely on accurate (and usually unencrypted) sensor readings. Specifically, we focus on stealthy attacks that crash a sensor and then immediately impersonate it by sending out fake readings. We consider attackers who aim to stay hidden in the system for a prolonged period. To detect such attacks, our approach relies on continuous injection of "micro-distortions" into the original sensor's readings. In particular, the injected distortion is kept strictly within a small magnitude (e.g., $0.5\%$ of the possible operating value range) to ensure it does not affect the normal functioning of the ICS. Our approach uses a secret sequence, pre-shared between a sensor and the defender, to generate the micro-distortions. One key challenge is that the injected micro-distortions are often much smaller than the sensor's actual readings, and hence can easily be overwhelmed by them. To overcome this, we leverage the observation that sensor readings in many ICS (and power grids in particular) change gradually for a significant fraction of the time (i.e., with small differences between consecutive time slots). We devise a simple yet effective algorithm that detects stealthy attackers in a highly accurate and fast manner (i.e., using fewer than 100 samples). We demonstrate the effectiveness of our defense using real-world sensor-reading traces from two different smart-grid systems.

