Paper maps are much higher resolution than computerised maps and can be more readily manipulated. However, because they are physically printed they can only show static information. This work makes use of a camera-projector system to allow overlay of dynamic information on paper maps placed on a table surface. Tangible user interface tools are supported in a manner which allows multiple concurrent users to interact with the same map.
[2005 ISMAR Paper]
Paper-based cartographic maps provide highly detailed information visualization with unrivaled fidelity and information density. Moreover, the physical properties of paper afford simple interactions for browsing a map or focusing on individual details, managing concurrent access for multiple users and general malleability. However, printed maps are static displays and while computer-based map displays can support dynamic information, they lack the nice properties of real maps identified above. We address these shortcomings by presenting a system to augment printed maps with digital graphical information and user interface components. These augmentations complement the properties of the printed information in that they are dynamic, permit layer selection and provide complex computer mediated interactions with geographically embedded information and user interface controls. Two methods are presented which exploit the benefits of using tangible artifacts for such interactions.
Overview
The overall system centers around a table top environment where users work with maps (see Fig. 3). One or more maps are spread out on a table or any other planar surface. A camera mounted above the table tracks the maps' locations on the surface and registers interaction devices placed on them. A projector augments the maps with information projected from overhead. Two localisation subsystems deal with localising the maps and the interaction devices on the table surface. Both require information about the distortion of the camera's view of the table surface, which is provided by an additional calibration step at startup.
Localisation
The system stores a template image for each map. This set of template images is preprocessed to compute a set of features for each image. At multiple scales of the image, Harris corners are extracted and a gradient histogram feature descriptor (similar to SIFT features) is computed and stored. The resulting descriptors are rotationally invariant and robust against lighting changes. The feature descriptors are stored in an approximate-nearest-neighbors (ANN) data structure, so that template features resembling a feature detected at runtime can be retrieved quickly.
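The descriptor computation can be sketched as below. This is a toy, numpy-only version of a SIFT-like gradient histogram; the patch size, bin count and normalisation are illustrative choices rather than the paper's exact parameters. Shifting angles by the dominant orientation gives rotational invariance, and normalising the histogram gives some robustness to lighting changes.

```python
import numpy as np

def gradient_histogram_descriptor(patch, n_bins=8):
    """Toy SIFT-like descriptor: a histogram of gradient orientations,
    weighted by gradient magnitude, computed over an image patch."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)
    # subtract the dominant orientation -> rotational invariance
    ang = (ang - ang.flat[np.argmax(mag)]) % (2 * np.pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 2 * np.pi), weights=mag)
    norm = np.linalg.norm(hist)
    # unit-normalising gives some robustness to lighting changes
    return hist / norm if norm > 0 else hist
```

In the full system one such descriptor would be stored per Harris corner and per scale, gathered into the ANN structure for lookup at runtime.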
When a camera frame is processed at runtime, it is first rectified based on the system calibration. Then features are computed as before. For each such runtime feature, the k approximate nearest neighbors are chosen from all template features of the same scale. Each correspondence is assigned a confidence value derived from the Euclidean distance between the feature descriptors in the correspondence.
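A minimal sketch of this matching step, with a brute-force search standing in for the ANN lookup; the exp(-d) confidence mapping is an assumed example, not the function used in the paper:

```python
import numpy as np

def match_features(runtime_descs, template_descs, k=2):
    """For each runtime descriptor, keep the k nearest template descriptors
    (of the same scale, in the full system) as candidate correspondences,
    each scored by its Euclidean descriptor distance."""
    matches = []
    for i, d in enumerate(runtime_descs):
        dists = np.linalg.norm(template_descs - d, axis=1)
        for j in np.argsort(dists)[:k]:
            # exp(-d) maps distance to a confidence in (0, 1]; illustrative only
            matches.append((i, int(j), float(np.exp(-dists[j]))))
    return matches
```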
A multi-stage process eliminates outliers from the set of correspondences. These stages include tests based on rotation, translation and RANSAC fitting of a common homography. After outlier elimination is complete, a homography is robustly fit to the remaining correspondences using reweighed least squares. If there are too few remaining correspondences to fit a homography, or the orthogonality of the resulting homography is too low, the image match is rejected. Otherwise, the runtime-to-template match, and its homography and correspondence set, is recorded for use by the application.
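The RANSAC stage alone can be sketched as below (the earlier rotation and translation tests and the final reweighted least-squares refinement are omitted); `fit_homography` is a standard direct linear transform, not code from the paper:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform (DLT) from point correspondences (needs >= 4)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the null-space vector of A (last right-singular vector) is the homography
    return np.linalg.svd(np.asarray(A))[2][-1].reshape(3, 3)

def ransac_homography(src, dst, iters=200, thresh=2.0, seed=0):
    """Repeatedly fit a homography to a minimal sample of 4 correspondences
    and keep the model with the largest inlier set."""
    rng = np.random.default_rng(seed)
    n = len(src)
    hom = np.column_stack([src, np.ones(n)])  # homogeneous source points
    best_H, best_inliers = None, np.array([], dtype=int)
    for _ in range(iters):
        idx = rng.choice(n, 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        proj = hom @ H.T
        err = np.linalg.norm(proj[:, :2] / proj[:, 2:3] - dst, axis=1)
        inliers = np.flatnonzero(err < thresh)
        if len(inliers) > len(best_inliers):
            best_H, best_inliers = H, inliers
    return best_H, best_inliers

# demo: a pure translation by (5, -3) with five gross outliers
rng = np.random.default_rng(42)
src = rng.uniform(0, 100, (25, 2))
dst = src + np.array([5.0, -3.0])
dst[:5] += 60.0
H, inliers = ransac_homography(src, dst)
```

In the full system the winning model would then be refined with reweighted least squares over all inliers, and rejected if the correspondence count or the homography's orthogonality is too low.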
Localisation of interaction devices uses a Hough transform to find rectangles of known size in the image. Pixels with strong gradient information contribute to lines in Hough space. A search through the strongest lines then finds candidate rectangles by checking for parallelism, distance, orthogonality and support in the image. Candidate rectangles are projected back into the image and refined against actual edge pixel locations to obtain the closest fit. The rectangles have peaks mounted on one side to allow pixel-accurate positioning of the hot spot on the map. For any candidate rectangle, the possible peak regions are searched in the image to estimate the location of the peak. Altogether, the following information is returned for each device: the rectangle bounding the device, the peak location, and the orientation on the table.
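The geometric pairing step can be sketched as follows, assuming Hough lines in (rho, theta) form with theta in [0, pi); gradient voting, peak detection and the image-support test are omitted, and the tolerances are illustrative values:

```python
import numpy as np
from itertools import combinations

def angle_diff(t1, t2):
    """Smallest difference between two undirected line angles (mod pi)."""
    d = (t1 - t2) % np.pi
    return min(d, np.pi - d)

def find_rectangles(lines, size, angle_tol=np.deg2rad(3), side_tol=5.0):
    """Pair Hough lines (rho, theta) into rectangle candidates: two parallel
    pairs, mutually orthogonal, whose separations match the known device
    size. |rho1 - rho2| as the pair separation assumes near-equal thetas."""
    pairs = [(a, b) for a, b in combinations(lines, 2)
             if angle_diff(a[1], b[1]) < angle_tol]
    rects = []
    for (a, b), (c, d) in combinations(pairs, 2):
        if angle_diff(a[1], c[1]) < np.pi / 2 - angle_tol:
            continue  # the two parallel pairs must be orthogonal
        w, h = abs(a[0] - b[0]), abs(c[0] - d[0])
        if (abs(w - size[0]) < side_tol and abs(h - size[1]) < side_tol) or \
           (abs(w - size[1]) < side_tol and abs(h - size[0]) < side_tol):
            rects.append((a, b, c, d))
    return rects
```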
Interaction
The first interaction device lets the user quickly browse images that are associated with locations on the map. A rectangular image browser prop consisting of a white piece of cardboard with a black border is placed on the map. A pointer in the middle of one side of the rectangle is used to denote a specific location and orientation on the map. The white area on the prop itself is used to project the retrieved image. Both location and direction of the pointer influence the displayed image. A hysteresis function avoids flickering of images at locations which are close to two or more reference points. The direct display of the images enables seamless operation because both the query and the result are visible to the user within the same area. Users do not need to look to other screens beyond the table surface to see the hidden information.
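A minimal sketch of such a hysteresis rule; the class name and the 1.25 switching margin are assumptions for illustration:

```python
import math

class HysteresisSelector:
    """Keep showing the current image until another reference point is
    clearly closer, so the display does not flicker when the pointer sits
    near-equidistant between two locations."""
    def __init__(self, ref_points, margin=1.25):
        self.ref_points = ref_points  # {image_id: (x, y)} locations on the map
        self.margin = margin          # a new point must be margin times closer
        self.current = None
    def update(self, pointer_pos):
        dist = {k: math.dist(pointer_pos, p) for k, p in self.ref_points.items()}
        best = min(dist, key=dist.get)
        if self.current is None or dist[best] * self.margin < dist[self.current]:
            self.current = best
        return self.current
```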
A second interaction device provides control over entities referenced to map locations. A Windows CE based PDA device is located using the screen rectangle which appears almost black in the video image. Again a pointer is present on the top of the device to accurately determine a location. An active entity referenced to a location presents a dedicated user interface on the PDA using PAWS. Typically the user interface is persistent on the PDA until it is replaced by a new one. Therefore users can pick up the PDA from the table surface again and operate it in a more comfortable hand-held manner.
PAWS - Python Activated Windows System
We developed the Python Activated Windows System (PAWS) to enable the simple creation of remote user interfaces. The PDA is equipped with wireless networking and communicates with the main system. The device runs an instance of PAWS, a CORBA service implemented in Python that accepts and executes Python scripts sent to it by other processes. PAWS provides three basic functions, either via a simple socket-based interface or as CORBA services: upload of Python code modules as zip files, execution of Python scripts, and execution of single Python commands. The latter allows interactive debugging of scripts running on the PDA using basic tools such as telnet.
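The script-execution path can be sketched as a plain TCP round trip; this is an illustrative stand-in, not the real PAWS protocol, and it omits the CORBA services and the zip-module upload:

```python
import socket
import threading

def serve_one_script(srv):
    """Server side: accept one connection, exec() the received script in a
    fresh namespace, and reply with repr() of its 'result' variable."""
    conn, _ = srv.accept()
    with conn:
        script = b""
        while chunk := conn.recv(4096):
            script += chunk
        ns = {}
        exec(script.decode(), ns)  # run the uploaded script
        conn.sendall(repr(ns.get("result")).encode())

def run_remote(addr, script):
    """Client side: upload a script, half-close to mark its end, read the reply."""
    with socket.create_connection(addr) as s:
        s.sendall(script.encode())
        s.shutdown(socket.SHUT_WR)
        reply = b""
        while chunk := s.recv(4096):
            reply += chunk
    return reply.decode()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=serve_one_script, args=(srv,), daemon=True).start()
reply = run_remote(srv.getsockname(), "result = 6 * 7")
```

Executing arbitrary uploaded code, as here, is only reasonable on a trusted local network such as the one linking the table system and the PDA.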
Typically, an application creates a CORBA object providing services that represent the state and possible actions of an entity. This object is registered with the CORBA Naming service for simple lookup. When notified that the PDA is close, the entity sends the required code modules to it and executes a Python script that instantiates the user interface objects on the PDA. Such scripts create user interface elements and further CORBA clients and services (for registering callbacks) to communicate with the service provided by the application.
Results
We have implemented a flood control application for the city of Cambridge (UK) to demonstrate possible features of augmented maps. The River Cam, which runs close to the town center of Cambridge, regularly floods the surrounding areas, a number of which lie below the water level of the river. In the event of a real flood, the water line needs to be monitored, threatened areas identified and response units managed. Information provided by local personnel helps to assess the situation. An augmented map provides an ideal frame for presenting and controlling all the relevant information in one place.
A map of the area of interest is augmented with an overlay representing the land flooded at a given water level. Fig. 6 shows details of a map of Cambridge overlaid with the current extent of the River Cam. The overlay changes dynamically with the water level, which is controlled by an operator on the PDA device. When the water level reaches a critical level, certain endangered compounds are highlighted in red with an animated texture.
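The overlay logic amounts to thresholding an elevation model against the operator-controlled water level; a sketch, with an assumed danger margin for the red highlighting:

```python
import numpy as np

def flood_overlay(elevation, water_level, danger_margin=0.5):
    """Cells at or below the water level are flooded; dry cells within
    danger_margin above it are flagged as endangered (to be highlighted
    in red). The margin value is illustrative, not from the paper."""
    flooded = elevation <= water_level
    endangered = ~flooded & (elevation <= water_level + danger_margin)
    return flooded, endangered

# a toy elevation grid (metres); the operator's PDA slider sets water_level
elev = np.array([[1.0, 2.0],
                 [2.5, 4.0]])
flooded, endangered = flood_overlay(elev, water_level=2.2)
```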
Other information sources include images provided by ground personnel at various locations. Green icons represent the locations and directions of these images. Using the image browsing prop, an operator can view an image and assess the local situation immediately. An emergency unit, represented as a helicopter, is visible on the map as well. Placing the PDA next to it brings up a corresponding graphical user interface that presents more status information and lets the operator give orders to the unit (see Fig. 5); here its direction and speed can be controlled. Another function of the PDA interface accesses web pages associated with relevant places on the map. Purple circles mark the corresponding locations, and placing the PDA next to one presents the associated web page on the device.
Publications
- Gerhard Reitmayr, Ethan Eade and Tom Drummond. Localisation and Interaction for Augmented Maps. In Proc. IEEE ISMAR'05, October 5-8, 2005, Vienna, Austria. [BIBTEX]
- Slides from the presentation at ISMAR'05
Media
A short video showing localisation of maps and the use of the different tools.
News items
- Augmented reality brings maps to life in New Scientist from 19 July 2005.
- Paper has its future mapped out in PC Pro Magazine from October 2005, p. 35.