Emergent Mind

Evaluating the Security and Privacy Risk Postures of Virtual Assistants

(2312.14633)
Published Dec 22, 2023 in cs.CR and cs.SE

Abstract

Virtual assistants (VAs) have seen increased use in recent years due to their ease of use for daily tasks. Despite their growing prevalence, their security and privacy implications are still not well understood. To address this gap, we conducted a study to evaluate the security and privacy postures of eight widely used voice assistants: Alexa, Braina, Cortana, Google Assistant, Kalliope, Mycroft, Hound, and Extreme. We used three vulnerability testing tools, AndroBugs, RiskInDroid, and MobSF, to assess the security and privacy of these VAs. Our analysis focused on five areas: code, access control, tracking, binary analysis, and sensitive data confidentiality. The results revealed that these VAs are vulnerable to a range of security threats, including not validating SSL certificates, executing raw SQL queries, and using a weak mode of the AES algorithm. These vulnerabilities could allow malicious actors to gain unauthorized access to users' personal information. This study is a first step toward understanding the risks associated with these technologies and provides a foundation for future research to develop more secure and privacy-respecting VAs.
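One of the flagged weaknesses, executing raw SQL queries, can be illustrated with a minimal sketch (not taken from the paper; the table, column, and helper names are hypothetical). It contrasts string-built SQL, which is open to injection, with a parameterized query using Python's built-in sqlite3 module:

```python
# Illustrative sketch of the "raw SQL query" risk class flagged by the
# scanners; schema and helpers are invented for demonstration only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # Vulnerable: user input is concatenated directly into the SQL string.
    return conn.execute(
        "SELECT secret FROM users WHERE name = '%s'" % name
    ).fetchall()

def lookup_safe(name):
    # Safer: a parameterized query lets the driver bind the value.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

# A classic injection payload dumps every row through the raw query...
payload = "' OR '1'='1"
print(lookup_unsafe(payload))  # [('s3cret',)]
# ...while the parameterized query treats it as a literal (nonexistent) name.
print(lookup_safe(payload))    # []
```

In an app shipping raw queries, any code path that splices user- or network-controlled strings into SQL exposes locally stored data this way, which is why static analyzers such as MobSF report it.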
