XR Input Error Mediation for Hand-Based Input: Task and Context Influences a User's Preference (2309.10899v1)

Published 19 Sep 2023 in cs.HC

Abstract: Many XR devices use bare-hand gestures to reduce the need for handheld controllers. Such gestures, however, lead to false-positive and false-negative recognition errors, which detract from the user experience. While mediation techniques enable users to overcome recognition errors by clarifying their intentions via UI elements, little research has explored how mediation techniques should be designed in XR and how a user's task and context may affect their design preferences. This research presents empirical studies on the impact of user-perceived error costs on users' preferences for three mediation technique designs, under different simulated scenarios inspired by real-life tasks. Based on a large-scale crowd-sourced survey and an immersive VR-based user study, our results suggest that the varying contexts within each task type can impact users' perceived error costs, leading to different preferred mediation techniques. We further discuss the implications of these results for future XR interaction design.
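To make the core idea concrete: a mediation technique sits between the gesture recognizer and the action it triggers, routing uncertain recognitions through a clarifying UI step rather than committing or discarding them outright. The sketch below is purely illustrative and not from the paper; the function names, confidence thresholds, and three-way routing are all assumptions chosen to show the pattern.

```python
# Illustrative sketch (not the paper's implementation): route low-confidence
# gesture recognitions through a mediation step instead of committing them
# directly. All names and threshold values here are hypothetical.

from dataclasses import dataclass


@dataclass
class Recognition:
    gesture: str       # e.g. "pinch_select"
    confidence: float  # recognizer score in [0, 1]


def dispatch(rec: Recognition, threshold: float = 0.8) -> str:
    """Commit confident recognitions; defer uncertain ones to a UI mediator.

    Returns which path was taken so the caller can show the matching UI:
      "commit"  - act immediately (risking a false positive)
      "mediate" - show a confirmation / disambiguation element first
      "reject"  - treat the input as noise (risking a false negative)
    """
    if rec.confidence >= threshold:
        return "commit"
    if rec.confidence >= threshold / 2:
        return "mediate"
    return "reject"


print(dispatch(Recognition("pinch_select", 0.95)))  # -> commit
print(dispatch(Recognition("pinch_select", 0.55)))  # -> mediate
print(dispatch(Recognition("pinch_select", 0.10)))  # -> reject
```

The paper's finding that preferred mediation designs vary with perceived error costs would correspond here to tuning the thresholds, and the style of the "mediate" UI, per task and context rather than fixing them globally.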
