Augmented Reality Map Navigation with Freehand Gestures

Kadek Ananta Satriadi, Barrett Ens, Maxime Cordeil, Tobias Czauderna, Wesley Willett, Bernhard Jenny

IEEE Virtual Reality and 3D User Interfaces. Osaka, Japan. 2019.

Mid-air hand gesture interaction has long been proposed as a ‘natural’ input method for Augmented Reality (AR) applications, yet has been little explored for intensive applications like multiscale navigation. In multiscale navigation, such as digital map navigation, pan and zoom are the predominant interactions. A position-based input mapping (e.g. a grabbing metaphor) is intuitive for such interactions, but is prone to arm fatigue. This work focuses on improving digital map navigation in AR with mid-air hand gestures, using a horizontal intangible map display. First, we conducted a user study to explore the effects of handedness (unimanual and bimanual) and input mapping (position-based and rate-based). From these findings we designed DiveZoom and TerraceZoom, two novel hybrid techniques that smoothly transition between position- and rate-based mappings. A second user study evaluated these designs. Our results indicate that the introduced input-mapping transitions can reduce perceived arm fatigue with limited impact on performance.

DOI: https://doi.org/10.1109/VR.2019.8798340

Preprint: download

Presentation: watch

Slides: download

Tags: augmented reality, gesture, ieeevr, maps

Motivation

The recent commercialisation of wearable Augmented Reality (AR) hardware provides new opportunities to support human data understanding in applications such as immersive map browsing. However, there are currently no standard interaction methods for wearable AR, especially for navigation-intensive applications such as map browsing. The predominant interactions for digital map browsing are panning and zooming, so it is essential to develop efficient multiscale navigation methods for this platform.

This work focuses on improving multiscale navigation in the context of horizontally situated interactive maps, viewed using wearable AR. With the aim of later extending these developments to collaborative interaction with 3D maps, we begin this exploration with single-user interaction with 2D maps, using a wide field-of-view, video see-through AR platform. While in-air gestures have been explored for large wall displays and mobile devices, these previous developments use large arm motions or small finger movements that do not necessarily transfer directly to wearable AR and horizontal maps.

The figure above illustrates our motivation to position maps horizontally. A horizontal layout allows for an intimate collaborative setting, where users can gather around a map in a face-to-face discussion. Unlike large wall displays, a horizontal orientation allows users to reorient their viewpoint by walking around a floor or table layout. Furthermore, when 3D terrain or structures are shown, a horizontal layout reveals these details in their correct orientation. This requirement is particularly important for geospatial data exploration, where the third dimension can be used to show geospatial phenomena, such as atmospheric data, groundwater modelling, city modelling, and spatio-temporal data visualisation.

Multiscale navigation with panning and zooming on a 2D desktop with mouse input causes little fatigue, because it relies on the small muscle groups of the hand and fingers.

Input Mapping

In an augmented reality setting with freehand gesture input, arm movement is mapped to panning and zooming actions. We identify two types of input mapping for unimanual hand gestures: indirect grab and joystick. Indirect grab is akin to pinching the map and then moving the hand vertically (zooming) or horizontally (panning). This input mapping is intuitive, but can cause fatigue due to clutching.
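To make the mapping concrete, below is a minimal Python sketch of a position-based (indirect grab) update. The MapState class, axis conventions, and gain values are illustrative assumptions, not the implementation from the paper.

    class MapState:
        def __init__(self):
            self.center_x = 0.0  # map-space pan offset (east-west)
            self.center_y = 0.0  # map-space pan offset (north-south)
            self.scale = 1.0     # zoom factor

    def indirect_grab_update(map_state, hand_delta, pan_gain=1.0, zoom_gain=2.0):
        # Position-based mapping: the pinching hand's displacement since
        # the previous frame (in metres) is applied directly to the map,
        # as if the map were attached to the hand.
        dx, dy, dz = hand_delta
        map_state.center_x -= pan_gain * dx / map_state.scale  # drag horizontally
        map_state.center_y -= pan_gain * dz / map_state.scale
        map_state.scale *= 2.0 ** (zoom_gain * dy)              # lift to zoom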

Joystick input mapping is a rate-based approach that allows continuous panning or zooming with minimal clutching. Although this technique is expected to reduce arm fatigue, it can be slow and hard to learn.
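For contrast, here is a sketch of the rate-based (joystick) mapping, reusing the MapState above; the dead zone and rate constants are again assumed values.

    def joystick_update(map_state, hand_offset, dt, dead_zone=0.02,
                        pan_rate=0.5, zoom_rate=1.0):
        # Rate-based mapping: the hand's offset from the pinch-start
        # position (in metres) sets a velocity, so the map keeps moving
        # while the hand is held away from the origin -- no clutching.
        dx, dy, dz = hand_offset
        if abs(dx) > dead_zone:
            map_state.center_x -= pan_rate * dx * dt / map_state.scale
        if abs(dz) > dead_zone:
            map_state.center_y -= pan_rate * dz * dt / map_state.scale
        if abs(dy) > dead_zone:
            map_state.scale *= 2.0 ** (zoom_rate * dy * dt)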

Hybrid Input Mapping

We designed a freehand gesture technique that supports both short- and long-distance navigation with minimal fatigue. Our approach combines the indirect grab and joystick input mappings using an ellipsoid metaphor: arm movement inside the ellipsoid is mapped to indirect grab, allowing fast short-distance navigation, while moving the arm outside the ellipsoid volume triggers the joystick mapping for long-distance navigation.
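A sketch of how such a hybrid can dispatch between the two mappings, building on the functions above; the ellipsoid semi-axes are illustrative values, and hand_offset is measured from the pinch-start position.

    def inside_ellipsoid(hand_offset, rx=0.10, ry=0.08, rz=0.10):
        # True while the hand stays inside an ellipsoid centred on the
        # pinch-start position (semi-axes in metres).
        dx, dy, dz = hand_offset
        return (dx / rx) ** 2 + (dy / ry) ** 2 + (dz / rz) ** 2 <= 1.0

    def hybrid_update(map_state, hand_offset, hand_delta, dt):
        if inside_ellipsoid(hand_offset):
            indirect_grab_update(map_state, hand_delta)  # position-based
        else:
            joystick_update(map_state, hand_offset, dt)  # rate-based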

We also introduce a dual-cone volume that separates panning-only actions from combined panning + zooming actions. This minimises unintended zooming, which arises because the arm tends to follow an oblique, rather than perfectly horizontal, trajectory when panning.
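One way to express the dual-cone test as a sketch: two cones share an apex at the pinch-start position, opening upward and downward around the vertical axis; the half-angle is an assumed value.

    import math

    def in_zoom_cone(hand_offset, half_angle_deg=30.0):
        # Inside either cone the offset has a dominant vertical component,
        # so the gesture is treated as panning + zooming; outside the
        # cones, vertical drift is ignored and the gesture pans only.
        dx, dy, dz = hand_offset
        horizontal = math.hypot(dx, dz)
        if dy == 0.0:
            return False  # purely horizontal offset: pan only
        angle_from_vertical = math.degrees(math.atan2(horizontal, abs(dy)))
        return angle_from_vertical <= half_angle_deg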

DiveZoom

Our most successful hybrid technique is DiveZoom. This technique allows a smooth transition from indirect grab to an integrated, continuous joystick panning + zooming. The video below illustrates how zooming in is performed with the DiveZoom technique. First, the user starts pinching to initiate the action, and the ellipsoid is created around the hand.

The user then moves the hand down until the joystick input mapping is triggered. While continuously zooming in, the user can steer the direction by moving the hand horizontally.
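As a sketch of this step (gains and sign conventions assumed, as before): once the hand leaves the ellipsoid downward, the downward offset drives a continuous zoom-in rate while the horizontal offset steers the view.

    def dive_zoom_update(map_state, hand_offset, dt,
                         zoom_rate=1.0, steer_rate=0.5):
        # Integrated rate-based pan + zoom: the downward offset past the
        # ellipsoid zooms in continuously; the horizontal offset steers
        # the view towards the hand (sign convention assumed).
        dx, dy, dz = hand_offset
        map_state.scale *= 2.0 ** (-zoom_rate * dy * dt)  # down = zoom in
        map_state.center_x += steer_rate * dx * dt / map_state.scale
        map_state.center_y += steer_rate * dz * dt / map_state.scale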

Read our paper to learn more about the design process and evaluation.

Satriadi, K. A., Ens, B., Cordeil, M., Czauderna, T., Willett, W., & Jenny, B. (2019, March). Augmented reality map navigation with freehand gestures. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR) (pp. 593-603). IEEE. doi:10.1109/VR.2019.8798340