Imagine a future in which people comfortably wear augmented reality (AR) displays all day: how do we design interfaces that adapt to contextual changes as people move around? In current operating systems, most AR content stays at a fixed location by default until users manually move it. This approach places the burden of user interface (UI) transitions solely on users. In this paper, we first ran a bodystorming design workshop to capture the limitations of existing manual UI transition approaches in spatially diverse tasks. We then addressed these limitations by designing and evaluating three UI transition mechanisms with different levels of automation and controllability (low-effort manual, semi-automated, and fully automated). Furthermore, we simulated imperfect contextual awareness by introducing prediction errors with different costs to correct them. Our results offer valuable lessons about the trade-offs between UI automation levels, controllability, user agency, and the impact of prediction errors.
This work was done during my summer internship at Reality Labs Research, Meta in 2021. The paper has been conditionally accepted at ACM CHI '22 in New Orleans, LA, USA.
Shout out to my internship mentors/managers Yan Xu and Sophie Kim, plus all the awesome collaborators at Reality Labs Research, including Mei Gao, Hiroshi Horii, Mark Parent, Peiqi Tang, Missie Smith, Michael Shvartsman, Nicci Yin, Ryan Gadz, Lia Martinez, Jonas Schmidtler, and many more. This work would not have been possible without their continuous guidance, feedback, and support. I am grateful for my time there; it was truly an amazing experience.
More details to come.