Localisation and Interaction for Augmented Maps (with Ethan Eade and Tom Drummond)

Paper-based cartographic maps provide highly detailed information visualization with unrivaled fidelity and information density. Moreover, the physical properties of paper afford simple interactions for browsing a map or focusing on individual details, support concurrent access by multiple users, and are generally malleable. However, printed maps are static displays, and while computer-based map displays can support dynamic information, they lack the desirable properties of real maps identified above. We address these shortcomings by presenting a system that augments printed maps with digital graphical information and user interface components. These augmentations complement the properties of the printed information: they are dynamic, permit layer selection, and provide complex computer-mediated interactions with geographically embedded information and user interface controls. Two methods are presented that exploit the benefits of using tangible artifacts for such interactions.

[ISMAR 2005] [video] [presentation]

(I am reposting old content to keep it online)

Image-based X-ray visualization techniques for spatial understanding in Outdoor Augmented Reality (with Stefanie Zollmann, Raphael Grasset and Tobias Langlotz)

This paper evaluates different state-of-the-art approaches for implementing an X-ray view in Augmented Reality (AR). Our focus is on approaches that support a better sense of depth order between physical and digital objects. One of the main goals of this work is to provide effective X-ray visualization techniques that work in unprepared outdoor environments. Here, we focus on methods that automatically extract depth cues from video images. The extracted depth cues are combined into ghosting maps, which assign each video image pixel a transparency value to control the overlay in the AR view. In this study, we analyze three different types of ghosting maps: 1) alpha-blending, which uses a uniform alpha value within the ghosting map; 2) edge-based ghosting, which is based on edge extraction; and 3) image-based ghosting, which incorporates perceptual grouping, saliency information, edges and texture details. Our study results demonstrate that the latter technique helps the user understand the subsurface location of virtual objects better than alpha-blending or edge-based ghosting.

[OZCHI 2014]
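
To make the idea concrete, here is a small sketch (my own illustration, not the authors' pipeline) that combines an edge cue with a crude local-contrast stand-in for saliency into a per-pixel ghosting map and then blends a virtual layer over the video frame. The cue weights and the contrast-based saliency proxy are assumptions of this example.

```python
# Hedged sketch, not the authors' implementation: combine simple image cues
# (edges and a crude saliency proxy) into a ghosting map, i.e. a per-pixel
# alpha used to blend the virtual (subsurface) layer over the video frame.
import cv2
import numpy as np

def ghosting_map(frame_bgr, w_edge=0.6, w_saliency=0.4, base_alpha=0.3):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Edge cue: strong edges of the occluding surface stay more opaque.
    edges = cv2.Canny(gray, 50, 150).astype(np.float32) / 255.0
    edges = cv2.GaussianBlur(edges, (7, 7), 0)

    # Saliency proxy (assumption): local contrast via Laplacian magnitude.
    contrast = np.abs(cv2.Laplacian(gray.astype(np.float32), cv2.CV_32F))
    contrast = contrast / (contrast.max() + 1e-6)

    # Preserve pixels with strong cues; everything else becomes transparent.
    preserve = np.clip(w_edge * edges + w_saliency * contrast, 0.0, 1.0)
    alpha = np.clip(base_alpha + (1.0 - base_alpha) * preserve, 0.0, 1.0)
    return alpha  # 1 = keep video pixel, 0 = show virtual content

def compose(frame_bgr, virtual_bgr, alpha):
    a = alpha[..., None]
    return (a * frame_bgr + (1.0 - a) * virtual_bgr).astype(np.uint8)
```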

A Minimal Solution to the Generalized Pose-and-Scale Problem (with Jonathan Ventura, Clemens Arth and Dieter Schmalstieg)

We propose a novel solution to the generalized camera pose problem which includes the internal scale of the generalized camera as an unknown parameter. While a well-calibrated camera rig has a fixed and known scale, camera trajectories produced by monocular motion estimation necessarily lack a scale estimate. Thus, when performing loop closure in monocular visual odometry, or registering separate structure-from-motion reconstructions, we must estimate a seven degree-of-freedom similarity transform from corresponding observations. Our approach handles general configurations of rays and points and directly estimates the full similarity transformation from the 2D-3D correspondences. The minimal solver can be used in a hypothesize-and-test architecture for robust transformation estimation and produces a least-squares estimate in the overdetermined case.
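
As a rough illustration of the hypothesize-and-test use of such a solver (a generic RANSAC sketch, not the paper's algorithm), the code below scores candidate similarity transforms (s, R, t) by the distance of the transformed 3D points to the generalized camera's rays. The minimal solver itself is passed in as a callable, since its derivation is the paper's contribution, and the sample size and inlier threshold are placeholders.

```python
# Hedged sketch: hypothesize-and-test loop around a minimal pose-and-scale
# solver. `solve_minimal` is a placeholder for the paper's solver and is
# assumed to return candidate (s, R, t) hypotheses for a small sample of
# ray/point correspondences.
import numpy as np

def point_to_ray_distance(X, centers, directions, s, R, t):
    """Distance of transformed points s*R*X + t to the generalized-camera rays.
    `directions` are assumed to be unit vectors."""
    Y = (s * (R @ X.T)).T + t                  # points mapped into the rig frame
    V = Y - centers                            # vectors from the ray origins
    proj = np.sum(V * directions, axis=1, keepdims=True) * directions
    return np.linalg.norm(V - proj, axis=1)    # orthogonal distance to each ray

def ransac_pose_and_scale(X, centers, directions, solve_minimal,
                          sample_size=4, iters=1000, thresh=0.05, seed=None):
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, None
    for _ in range(iters):
        idx = rng.choice(len(X), size=sample_size, replace=False)
        for s, R, t in solve_minimal(X[idx], centers[idx], directions[idx]):
            d = point_to_ray_distance(X, centers, directions, s, R, t)
            inliers = d < thresh
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_model, best_inliers = (s, R, t), inliers
    return best_model, best_inliers
```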

FlyAR: Augmented Reality Supported Micro Aerial Vehicle Navigation (with Stefanie Zollmann, Christof Hoppe and Tobias Langlotz)

Micro aerial vehicles equipped with high-resolution cameras can be used to create aerial reconstructions of construction sites and similar areas of interest. While automatic flight path planning and autonomous flying are often used, they cannot fully replace a human in the loop for avoiding collisions and obstacles. In this paper, we present Augmented Reality-supported navigation and flight planning for micro aerial vehicles, which augments the user's view with flight plan information and live feedback for flight supervision.

Global Localization from Monocular SLAM on a Mobile Phone (with Jonathan Ventura, Clemens Arth and Dieter Schmalstieg)

This paper proposes to combine a keyframe-based monocular SLAM system with a global localization method. The end result is a 6DoF tracking and mapping system which provides globally registered tracking in real-time on a mobile device, overcomes the difficulties of localization with a narrow field-of-view mobile phone camera, and is not limited to tracking only in areas covered by the offline reconstruction.
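
A minimal sketch of what "globally registered tracking" amounts to, assuming the global localization step yields a similarity transform (scale s, rotation R_reg, translation t_reg) from the local SLAM map into the global map: each locally tracked camera pose is then mapped into global coordinates by composing it with that registration. The pose conventions used here are assumptions, not taken from the paper.

```python
# Hedged sketch (conventions are assumptions, not the paper's formulation):
# turn locally tracked SLAM poses into globally registered poses using a
# similarity transform obtained from a global localization module.
import numpy as np

def register_pose(R_wc_local, C_local, s, R_reg, t_reg):
    """Compose a local camera-to-world pose (rotation R_wc_local, camera
    center C_local) with a Sim(3) registration X_global = s * R_reg @ X_local + t_reg."""
    C_global = s * (R_reg @ C_local) + t_reg     # camera center in the global map
    R_wc_global = R_reg @ R_wc_local             # orientation; the scale cancels
    return R_wc_global, C_global

# Example: identity local pose at the local-map origin, a registration that
# scales by 2 and shifts by (10, 0, 0); the values are made up.
R_g, C_g = register_pose(np.eye(3), np.zeros(3), 2.0, np.eye(3),
                         np.array([10.0, 0.0, 0.0]))
```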

Augmented Reality for Construction Site Monitoring and Documentation (with Stefanie Zollmann, Christof Hoppe, Stefan Kluckner, Christian Poglitsch and Horst Bischof)

In this paper, we describe how to use AR to support monitoring and documentation of construction site progress. We present an approach that uses aerial 3-D reconstruction to automatically capture progress information and a mobile AR client for on-site visualisation. We describe in detail how to capture the 3-D data, how to register the AR system within the physical outdoor environment, how to visualise progress information in a comprehensible way in an AR overlay, and how to interact with this kind of information. By implementing such an AR system, we are able to provide an overview of the possibilities and future applications of AR in the construction industry.

Calibrating Setups with a Single-Point Laser Range Finder and a Camera (with Thanh Nguyen)

In this work, we propose two accurate calibration methods for determining the position and direction of the laser range finder with respect to the camera. Notably, we can determine the full calibration even without observing the laser range finder's measurement point in the camera image. We evaluate both methods on synthetic and real data, demonstrating their efficiency and good behavior under noise.

[IROS 2013 paper]
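
One common way to formulate such a calibration (a hedged sketch under assumptions, not necessarily either of the paper's two methods) is to fire the laser at a calibration plane whose pose is known from the camera, e.g. a checkerboard: each range reading r_i then constrains the unknown laser origin o and unit direction d, both expressed in camera coordinates, via n_i · (o + r_i d) = c_i, where n_i · x = c_i is the plane in the camera frame. Stacking these constraints gives a linear system:

```python
# Hedged sketch (an assumed formulation, not the paper's methods): linear
# estimation of the laser origin o and direction d in camera coordinates from
# range readings taken against calibration planes known from the camera.
import numpy as np

def calibrate_laser(plane_normals, plane_offsets, ranges):
    """Solve n_i . (o + r_i * d) = c_i for o and d via stacked least squares.

    plane_normals: (N, 3) unit normals n_i of the target plane in camera frame
    plane_offsets: (N,)   offsets c_i with plane equation n_i . x = c_i
    ranges:        (N,)   laser range readings r_i
    Needs N >= 6 observations with varied plane poses and ranges.
    """
    N = np.asarray(plane_normals, dtype=float)
    c = np.asarray(plane_offsets, dtype=float)
    r = np.asarray(ranges, dtype=float)

    A = np.hstack([N, r[:, None] * N])          # rows: [n_i^T, r_i * n_i^T]
    x, *_ = np.linalg.lstsq(A, c, rcond=None)   # x stacks [o; d]
    o, d = x[:3], x[3:]
    d = d / np.linalg.norm(d)                   # enforce a unit direction
                                                # (approximate; a nonlinear
                                                # refinement could follow)
    return o, d
```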

Mobile Interactive Hologram Verification (with Andreas Hartl, Jens Grubert and Dieter Schmalstieg)

We present an interactive application for mobile devices that integrates the recognition of documents with the interactive verification of view-dependent security elements such as holograms or watermarks. The system recognizes and tracks the paper document, provides user guidance for view alignment, and presents a stored image of the element's appearance matching the current view of the document, while also recording user decisions. We describe how to model and capture the underlying spatially varying BRDF representation of view-dependent elements.

[ISMAR 2013 paper]
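
As a rough illustration of the view-dependent part (an assumption-laden sketch, not the paper's system): given the tracked document pose, one can compute the current viewing direction in document coordinates, select the stored reference appearance captured from the closest direction, and report the remaining angular error as alignment guidance for the user.

```python
# Hedged sketch (not the paper's implementation): pick the stored reference
# appearance of a view-dependent element that best matches the current viewing
# direction, and report the remaining angular error as alignment guidance.
import numpy as np

def viewing_direction(camera_center, element_pos):
    """Unit direction from the element (in document coordinates) to the camera."""
    v = camera_center - element_pos
    return v / np.linalg.norm(v)

def select_reference(current_dir, reference_dirs):
    """reference_dirs: (K, 3) unit capture directions of the stored appearances."""
    cosines = reference_dirs @ current_dir
    best = int(np.argmax(cosines))
    angle_deg = float(np.degrees(np.arccos(np.clip(cosines[best], -1.0, 1.0))))
    return best, angle_deg   # index of the stored image + misalignment in degrees
```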

CONSTRUCT

The CONSTRUCT project is a collaboration between TU Graz and Siemens Corporation and is funded by the Austrian Research Promotion Agency (FFG). Its goal is to develop methods for modeling and surveying large construction sites. The project will make use of unmanned aerial vehicles and existing stationary or pan-tilt-zoom cameras at the construction site to provide accurate 3D models of the whole site on a regular basis, yielding a 4D data set (3D + time). This data can then be used for documentation, for visualization (e.g., overlaying the plan or a model of the building with a mobile augmented reality system), and for measurement (e.g., how much material has been transported).

[link to project page]