Enabling Developers, Protecting Users: Investigating Harassment and Safety in VR

(arXiv:2403.05499)

Published Mar 8, 2024 in cs.HC, cs.CY, cs.ET, and cs.CR

Abstract

Virtual Reality (VR) has witnessed a rising issue of harassment, prompting the integration of safety controls like muting and blocking in VR applications. However, the lack of standardized safety measures across VR applications hinders their universal effectiveness, especially across contexts like socializing, gaming, and streaming. While prior research has studied safety controls in social VR applications, our user study (n = 27) takes a multi-perspective approach, examining both users' perceptions of safety control usability and effectiveness as well as the challenges that developers face in designing and deploying VR safety controls. We identify challenges VR users face while employing safety controls, such as finding users in crowded virtual spaces to block them. VR users also find controls ineffective in addressing harassment; for instance, they fail to eliminate the harassers' presence from the environment. Further, VR users find the current methods of submitting evidence for reports time-consuming and cumbersome. Improvements desired by users include live moderation and behavior tracking across VR apps; however, developers cite technological, financial, and legal obstacles to implementing such solutions, often due to a lack of awareness and high development costs. We emphasize the importance of establishing technical and legal guidelines to enhance user safety in virtual environments.

Overview

  • The paper investigates harassment in VR, focusing on user experiences with safety controls and developer perspectives on safety implementation challenges.

  • Harassment in VR includes verbal abuse, sexual harassment, and physical intimidation, with existing safety controls like muting and blocking being only partially effective.

  • Users face challenges in using safety tools effectively, citing a lack of awareness and cumbersome processes, while reporting harassment is seen as time-consuming.

  • Recommendations for improving VR safety include enhancing awareness and usability of controls, developing standardized safety protocols, leveraging AI for moderation, and balancing privacy with safety.

Investigating the Efficacy of Safety Controls in VR: A Multi-perspective User and Developer Study

Introduction

Virtual Reality (VR) technology has increasingly become a medium for immersive communication and interaction. Despite its potential for innovative social connections and entertainment, VR experiences are marred by instances of harassment that threaten user safety. Addressing the issue of harassment in VR is crucial to foster a safe and inclusive environment. This paper presents a comprehensive examination of harassment in VR, focusing on the user experiences with existing safety controls and the perspectives of developers on the implementation challenges and solutions for enhancing user safety.

User Experiences with Safety Controls

Our study included interviews with 18 individuals who have faced harassment in VR. The analysis revealed that harassment in VR takes various forms, including verbal abuse, sexual harassment, and physical intimidation, mirroring the complexity of online abuse while exploiting the immersive nature of VR. Despite the availability of safety controls like muting, blocking, and reporting mechanisms, participants experienced numerous challenges in using these tools effectively. Key findings include:

  • Awareness and Usage: While some users were familiar with the basic safety controls available in VR applications, a lack of awareness or the cumbersome processes involved deterred the consistent use of these controls.
  • Effectiveness and Limitations: Participants found safety controls only partially effective, with limited real-time efficacy and little feedback to the harasser. For instance, muting or blocking does not remove the harasser from the environment or stop others from witnessing the harassment (a sketch of why this happens follows this list).
  • Reporting Challenges: The process of reporting harassment incidents is perceived as time-consuming and complex, often requiring evidence that users may not have readily available. This, coupled with unclear feedback on the actions taken against reported misconduct, discourages users from utilizing reporting mechanisms.
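
To make the limitation above concrete, the following is a minimal sketch, not taken from the paper or any specific VR platform, of how muting and blocking are typically implemented as client-side filters. All class and field names here are illustrative assumptions; the point is that the block list only changes what the local client renders and plays back, so the blocked harasser remains in the shared session for everyone else.

```python
"""Sketch: client-side block/mute filtering (illustrative, not a real VR API)."""

from dataclasses import dataclass, field


@dataclass
class Avatar:
    user_id: str
    audio_stream: bytes = b""


@dataclass
class LocalSafetyControls:
    # Hypothetical per-client state; the shared session is not modified.
    blocked: set = field(default_factory=set)
    muted: set = field(default_factory=set)

    def block(self, user_id: str) -> None:
        self.blocked.add(user_id)

    def mute(self, user_id: str) -> None:
        self.muted.add(user_id)

    def visible_avatars(self, session_avatars: list) -> list:
        # Blocking hides the avatar locally; the server-side roster and
        # every other client's view are untouched.
        return [a for a in session_avatars if a.user_id not in self.blocked]

    def audible_streams(self, session_avatars: list) -> list:
        # Muting drops audio locally only.
        return [
            a.audio_stream
            for a in session_avatars
            if a.user_id not in self.muted and a.user_id not in self.blocked
        ]


if __name__ == "__main__":
    session = [Avatar("alice"), Avatar("harasser_42")]
    controls = LocalSafetyControls()
    controls.block("harasser_42")
    # The local user no longer sees or hears the harasser...
    assert [a.user_id for a in controls.visible_avatars(session)] == ["alice"]
    # ...but the session itself still contains them for everyone else.
    assert [a.user_id for a in session] == ["alice", "harasser_42"]
```

This local-only design is what participants describe as a limitation: the control protects the individual's view without changing the harasser's presence or behavior in the shared space.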

Developer Perspectives on VR Safety

Interviews with nine VR developers provided insights into the challenges and considerations in designing and deploying safety controls. Developers acknowledged the significance of user safety but highlighted several obstacles, including:

  • Resource Limitations and Prioritization: Small development teams and prioritization of other features over safety controls are common, with a consensus that safety is often addressed reactively rather than proactively.
  • Technical and Legal Hurdles: The implementation of sophisticated safety features, such as live moderation and behavior tracking, faces technical, financial, and legal constraints. Developers also expressed concerns regarding privacy issues associated with pervasive user monitoring.
  • Lack of Industry-wide Standards: The absence of universal guidelines or standards for VR safety controls complicates the development process and makes it difficult to ensure a consistent user experience across different VR platforms.

Towards Improved Safety in VR

The paper identifies several recommendations for improving safety in VR environments based on user and developer perspectives. These include:

  • Enhancing Awareness and Usability: Simplifying the process of activating and using safety controls can encourage more users to take advantage of these features. Additionally, developers and platform owners should invest in user education and awareness campaigns about safety controls.
  • Developing Standardized Safety Protocols: Establishing industry-wide safety standards and guidelines can help unify the approach to user safety across VR platforms and applications.
  • Innovative Use of Technology for Moderation: Leveraging AI and machine learning for real-time abuse detection and moderation can offer scalable solutions while minimizing privacy concerns (a rough sketch follows this list). Incorporating community-driven moderation approaches can also enhance the effectiveness of safety controls.
  • Balancing Privacy with Safety: Addressing safety concerns must not come at the expense of user privacy. Developing privacy-preserving measures for identity verification and behavior monitoring is crucial.
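
As a rough illustration of the automated-moderation recommendation above, the sketch below shows one possible real-time moderation hook: utterances are scored by a pluggable toxicity classifier and escalating actions (warn, mute, report) are applied. This is not the paper's proposal or any platform's API; the classifier, thresholds, and action names are assumptions, and the scoring function is a trivial keyword heuristic so the example stays self-contained.

```python
"""Sketch: escalating real-time moderation pipeline (illustrative only)."""

from dataclasses import dataclass, field
from typing import Callable


@dataclass
class ModerationPolicy:
    # Hypothetical thresholds on a 0-1 toxicity score.
    warn_threshold: float = 0.5
    mute_threshold: float = 0.8
    report_threshold: float = 0.95


@dataclass
class RealTimeModerator:
    # score_fn stands in for any toxicity model or service.
    score_fn: Callable[[str], float]
    policy: ModerationPolicy = field(default_factory=ModerationPolicy)
    actions_log: list = field(default_factory=list)

    def handle_utterance(self, user_id: str, text: str) -> str:
        score = self.score_fn(text)
        if score >= self.policy.report_threshold:
            action = "auto_report"   # escalate to human moderators
        elif score >= self.policy.mute_threshold:
            action = "temp_mute"     # server-side mute, visible to all
        elif score >= self.policy.warn_threshold:
            action = "warn"          # private nudge to the speaker
        else:
            action = "none"
        self.actions_log.append((user_id, action))
        return action


def toy_toxicity_score(text: str) -> float:
    # Placeholder heuristic; a real system would call an ML model here.
    flagged_terms = {"insult", "threat"}
    hits = sum(word in text.lower() for word in flagged_terms)
    return min(1.0, 0.5 * hits)


if __name__ == "__main__":
    moderator = RealTimeModerator(score_fn=toy_toxicity_score)
    print(moderator.handle_utterance("user_7", "hello there"))          # none
    print(moderator.handle_utterance("user_7", "that is an insult"))    # warn
    print(moderator.handle_utterance("user_7", "insult and threat"))    # auto_report
```

Keeping the classifier behind a single function boundary is one way to address the privacy tension the paper raises: the pipeline can operate on transient, locally processed transcripts rather than persistent cross-application behavior logs.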

Conclusion

The juxtaposition of user experiences with developer insights in this study underscores the complexity of addressing harassment in VR. Ensuring a safe VR environment necessitates a multi-faceted approach that combines technological innovation, community involvement, and industry collaboration. By prioritizing user safety and implementing effective and user-friendly safety controls, VR developers and platform owners can create more inclusive and welcoming virtual spaces for all users.
