Evaluating Body-Referenced Graphical Menus in Virtual Environments

I conducted a user study with 24 participants to evaluate a variety of graphical menu placements (spatial, arm, hand, waist) in a virtual environment, as well as menu shapes (linear and radial) and selection techniques (ray-casting with a controller device, head gaze, and eye gaze).

Interactive Systems & User Experience Research Cluster
September 2019 — December 2019

/

Problem Statement

In virtual environments, body-referenced graphical menus can be anchored to various body parts, such as the user's hands, arms, upper legs, or abdomen. Such menus are still emerging in the field of virtual reality, and there is a lack of usability studies providing design guidelines for developers to follow. This project aimed to close this gap by providing a comprehensive evaluation of body-referenced graphical menus in virtual environments.

Example of a body-referenced graphical menu. Source: Leap Motion

/

Methods

We conducted a user study with 24 participants (20 males and 4 females, 18-30 years old) to evaluate a variety of graphical menu placements (spatial, arm, hand, waist) in a virtual environment, as well as menu shapes (linear and radial) and selection techniques (ray-casting with a controller device, head gaze, and eye gaze).
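To make the design concrete: the three factors yield 4 placements × 2 shapes × 3 selection techniques = 24 combinations. The sketch below is illustrative only (not the study's actual code) and shows one way to enumerate the conditions and randomize their presentation order per participant with a reproducible seed.

```python
# Illustrative sketch of the factorial design (hypothetical names,
# not the study's actual code).
import itertools
import random

PLACEMENTS = ["spatial", "arm", "hand", "waist"]
SHAPES = ["linear", "radial"]
TECHNIQUES = ["ray-casting", "head gaze", "eye gaze"]

def condition_order(participant_id: int) -> list[tuple[str, str, str]]:
    """Return the 4 x 2 x 3 = 24 conditions in a participant-specific order."""
    conditions = list(itertools.product(PLACEMENTS, SHAPES, TECHNIQUES))
    # Seed with the participant ID so each participant's order is reproducible.
    random.Random(participant_id).shuffle(conditions)
    return conditions

for placement, shape, technique in condition_order(participant_id=1):
    print(placement, shape, technique)
```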

/

Study Procedure

First, each participant was given a demographic survey that collected general information (age, gender, ethnicity, dexterity).

After that, the participant was guided through positioning the headset and adjusting the interpupillary distance (aligning the lenses with the distance between the participant's pupils) for the best visual experience.

Next, the participant completed a 5-10 minute training session to get familiar with the virtual environment and the different types of graphical menus, including menu placements and selection techniques. In each trial, the system displayed a randomly selected book number from 1 to 6, and the participant had to select the corresponding item from the menu.
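The trial logic itself is simple. Below is a minimal sketch of it in Python, with hypothetical function and field names rather than the study's actual implementation: pick a random book number, prompt the participant, and time the menu selection.

```python
# Illustrative sketch of the training-trial logic (hypothetical names,
# not the study's actual implementation).
import random
import time

def run_trial(get_menu_selection) -> dict:
    """Show a random target book (1-6) and time the menu selection."""
    target = random.randint(1, 6)      # book number shown to the participant
    print(f"Select book {target}")
    start = time.monotonic()
    selected = get_menu_selection()    # blocks until a menu item is chosen
    return {
        "target": target,
        "selected": selected,
        "correct": selected == target,
        "selection_time_s": time.monotonic() - start,
    }

# Example usage with a stubbed selection function:
print(run_trial(lambda: 3))
```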

Once the training session was completed, the participant started the study session, in which they evaluated the various combinations of menu placements, shapes, and selection techniques.

At the end, the participant filled out a post-questionnaire, rating each menu placement on a 7-point Likert scale and ranking the placements by overall preference, mental and physical demand, frustration, and ease of use.
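Such questionnaire data can be analyzed in several ways, and this write-up does not name the exact procedure. As a hedged illustration, the snippet below uses a Friedman test (a common choice for repeated-measures ordinal ratings) to compare 7-point Likert ratings of the four placements; the ratings shown are made-up placeholders, not the study's data.

```python
# Hypothetical analysis sketch: comparing Likert ratings of the four menu
# placements with a Friedman test. Placeholder data, not the study's results.
from scipy.stats import friedmanchisquare

# One rating per participant for each placement (placeholder values).
spatial = [7, 6, 7, 5, 6, 7]
arm     = [3, 2, 4, 3, 2, 3]
hand    = [6, 5, 6, 6, 5, 7]
waist   = [5, 5, 6, 4, 5, 6]

stat, p = friedmanchisquare(spatial, arm, hand, waist)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
```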

Sample screenshots from the virtual environment. We compared spatial, arm, hand, and waist menu placements.

/

Deliverables

Our results indicated that the spatial, hand, and waist menus were significantly faster than the arm menus, and that the eye gaze selection technique was more error-prone, with a significantly higher number of target re-entries than the other selection techniques. Additionally, a significantly higher number of participants ranked the spatial menu as their favorite placement and the arm menu as their least favorite.
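For readers unfamiliar with the re-entry metric: a target re-entry occurs each time the pointer (here, the gaze ray) leaves the target and then enters it again during a selection. The sketch below shows one plausible way to compute it from per-frame on-target flags; it is an assumption about how the metric can be derived, not the study's actual code.

```python
# Illustrative computation of target re-entries from per-frame flags
# (an assumed derivation, not the study's actual code).
def count_reentries(on_target: list[bool]) -> int:
    """on_target: per-frame flags for whether the pointer is on the target."""
    entries = 0
    previously_on = False
    for on in on_target:
        if on and not previously_on:
            entries += 1
        previously_on = on
    return max(0, entries - 1)  # re-entries exclude the first entry

# Example: pointer enters, leaves, then enters again -> 1 re-entry.
print(count_reentries([False, True, True, False, False, True, True]))  # 1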

We also provided design recommendations for implementing body-referenced graphical menus in virtual environments:

Placing a graphical menu in the virtual world was the most favored option among participants. We suggest using the spatial menu with the ray-casting selection technique and either menu shape.

When placing a graphical menu on the hand, we suggest combining it with the radial shape and the ray-casting selection technique.

Attaching a graphical menu to the waist was significantly faster than the arm menu. Based on participants' preferences, the best combination for the waist menu is the linear shape with the ray-casting selection technique.

Placing a graphical menu on the arm took significantly longer to complete the menu task than the spatial, hand, and waist placements, and it was the least favored among participants. If used, the arm menu felt most intuitive when combined with either the radial or linear shape and the eye gaze selection technique.

/

Final Thoughts

This was the first VR project on which I collaborated with other researchers from the Interactive Computing Experiences Research Cluster at the University of Central Florida. During the project, I learned how to conduct virtual reality studies and how to work with eye-tracking and body-tracking technology and integrate it into virtual reality. At the end of the project, I published the findings at the Graphics Interface 2020 conference.