Evaluation Framework for Performance Limitation of Autonomous Systems under Sensor Attack

arXiv:2103.07118
Published Mar 12, 2021 in cs.CR, cs.SY, and eess.SY

Abstract

Autonomous systems such as self-driving cars rely on sensors to perceive the surrounding world, so measures must be taken against sensor attacks, which have been a hot research topic in recent years. To that end, one must first evaluate how sensor attacks affect the system: which parts of the system fail, and which remain safe, when some of the built-in sensors are compromised. Among the relevant safety standards, ISO/PAS 21448 addresses the safety of road vehicles with respect to the performance limitations of sensors, but leaves security aspects out of scope. Conversely, ISO/SAE 21434 addresses security during the development of vehicular systems, but not specific threats such as sensor attacks. As a result, the safety of autonomous systems under sensor attack remains unaddressed. In this paper we propose a framework that combines safety analysis for scenario identification with scenario-based simulation in which sensor attack models are embedded. Given an autonomous system model, we identify hazard scenarios caused by sensor attacks and evaluate the system's performance limitations in those scenarios. We report on a prototype simulator for autonomous vehicles equipped with radar, cameras, and LiDAR, together with attack models against these sensors. Our experiments show that the framework can evaluate how system safety changes as the parameters of the attacks and the sensors vary.
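To illustrate the kind of evaluation the abstract describes, here is a minimal, hypothetical sketch of a scenario-based simulation with an embedded sensor attack model. All names and parameters (`spoofed_reading`, the range-offset attack, the braking threshold) are illustrative assumptions, not the paper's actual models: a LiDAR range measurement is shifted by an attacker-controlled offset, and a scenario is flagged as hazardous when the attacked measurement prevents the vehicle from braking in time.

```python
import random

def lidar_reading(true_range: float) -> float:
    """Simulated LiDAR range measurement (metres) with small Gaussian noise."""
    return true_range + random.gauss(0.0, 0.02)

def spoofed_reading(true_range: float, offset: float) -> float:
    """Hypothetical spoofing attack: shifts the measured range by `offset` metres.

    A positive offset makes an obstacle appear farther away than it is.
    """
    return lidar_reading(true_range) + offset

def braking_distance(speed: float, decel: float = 6.0) -> float:
    """Distance (m) needed to stop from `speed` (m/s) at constant deceleration."""
    return speed ** 2 / (2.0 * decel)

def is_hazard(true_gap: float, speed: float, offset: float,
              margin: float = 5.0) -> bool:
    """Evaluate one scenario under attack.

    The controller brakes when the *perceived* gap falls below the braking
    distance plus a safety margin. The scenario is hazardous when the true
    gap is within that threshold but the attacked measurement suppresses
    braking.
    """
    threshold = braking_distance(speed) + margin
    perceived = spoofed_reading(true_gap, offset)
    brakes = perceived < threshold
    collision_risk = true_gap < threshold
    return collision_risk and not brakes

if __name__ == "__main__":
    random.seed(0)
    # Sweep the attack parameter, as in a parameter-variation experiment.
    for offset in [0.0, 5.0, 10.0, 20.0]:
        print(offset, is_hazard(true_gap=35.0, speed=20.0, offset=offset))
```

Sweeping `offset` (or, analogously, a sensor parameter such as noise level) over a grid of values gives a coarse picture of where the system transitions from safe to hazardous, which is the shape of the evaluation the framework automates.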
