Abstract

The popularity of smart-home assistant systems such as Amazon Alexa and Google Home has led to a booming third-party application market (over 70,000 applications across the two stores). While existing works have revealed security issues in these systems, it is not well understood how to help application developers enforce security requirements. In this paper, we perform a preliminary case study to examine the security vetting mechanisms adopted by the Amazon Alexa and Google Home app stores. Focusing on the authentication mechanisms between the Alexa/Google cloud and third-party application servers (i.e., endpoints), we show that the current security vetting is insufficient: developer mistakes cannot be effectively detected or reported to developers. Weak authentication would allow attackers to spoof the cloud to insert data into, or retrieve data from, the application endpoints. We validate the attack through ethical proof-of-concept experiments. To confirm that vulnerable applications have indeed passed the security vetting and entered the markets, we develop a heuristic-based search method. We find 219 real-world Alexa endpoints that carry this vulnerability, many of which belong to critical applications that control smart-home devices and electric cars. We have notified Amazon and Google about our findings and offered suggestions to mitigate the issue.
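The core issue described above is an endpoint accepting requests without verifying they originate from the assistant's cloud. As a minimal sketch of the principle, the snippet below shows an endpoint-side check that authenticates a request body before processing it. The shared-secret HMAC scheme, the secret value, and the function names here are all illustrative assumptions for this sketch; the abstract does not specify the platforms' actual authentication protocol (Alexa, for instance, uses certificate-based request signatures rather than a shared secret).

```python
import hmac
import hashlib

# Hypothetical shared secret for illustration only; real platforms use
# stronger mechanisms such as certificate-based request signatures.
SHARED_SECRET = b"example-endpoint-secret"

def sign(body: bytes) -> str:
    """Compute the signature the cloud would attach to a request."""
    return hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()

def verify(body: bytes, signature: str) -> bool:
    """Endpoint-side check: accept only requests bearing a valid signature.

    compare_digest gives a constant-time comparison, avoiding timing leaks.
    """
    return hmac.compare_digest(sign(body), signature)

# A genuine cloud-originated request carries a valid signature ...
genuine_body = b'{"request": {"type": "IntentRequest"}}'
tag = sign(genuine_body)
assert verify(genuine_body, tag)

# ... while a spoofed request from an attacker who lacks the secret is rejected.
assert not verify(b'{"request": {"type": "SpoofedRequest"}}', tag)
```

An endpoint that skips the `verify` step (the developer mistake the vetting process fails to catch) would treat the spoofed request exactly like the genuine one.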
