I am the Melbourne Connect Chair of Digital Innovation for Society in the School of Computing and Information Systems at the University of Melbourne.

email: tom.drummond@unimelb.edu.au


Research Topics:

High-speed feature matching (with Simon Taylor and Ed Rosten)

This work presents a novel local-feature matching method designed with a focus on runtime speed, enabling frame-rate localisation of known targets on low-powered devices such as mobile phones. It won the Best Demo prize at CVPR 2009.

[2009 CVPR Workshop on Feature Detectors and Descriptors Paper]
[2009 BMVC Paper]
[Video showing operation]
[Video showing target with few features]
[Video showing multiple targets]

Vision-based augmented reality applications require a mechanism to localise known targets. We propose a local-feature-based approach in which runtime performance is prioritised, enabling frame-rate operation on low-powered mobile devices such as smartphones.
A training phase is employed to identify the most repeatably detected interest points from a particular range of viewpoints. Runtime matching is based on very heavily quantised patches taken from around the interest point. The training phase allows us to learn a model for the variation of the quantised patches around each feature. Our feature model uses independent histograms of quantised intensity for each pixel in the patch. The histogram bin frequencies are then quantised to a single bit, which allows the representation of the feature model to be stored in just 40 bytes and permits a very fast error score to be computed between database features and query patches using bitwise operations.
Independent sets of features are learnt from different viewpoints to enable localisation over a large range of input views. A simple indexing scheme reduces the number of comparisons required at runtime.
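To make the bit-mask representation concrete, the following is a minimal C sketch of how such a feature might be stored and scored using only bitwise operations. The patch size (8x8), the number of intensity bins (5) and the exact layout are assumptions chosen to match the roughly 40-byte figure above, not necessarily the format used in the papers.

#include <stdint.h>

#define PATCH_PIXELS 64   /* assumption: an 8x8 patch around the interest point */
#define NUM_BINS      5   /* assumption: 5 quantised intensity bins per pixel   */

/* One stored feature: for each intensity bin, a 64-bit mask with a bit set for
 * every patch pixel that rarely fell into that bin during training.
 * 5 masks x 8 bytes = 40 bytes per feature.                                    */
typedef struct {
    uint64_t rare[NUM_BINS];
} hip_feature;

/* A query patch is quantised the same way: one 64-bit mask per bin, with a bit
 * set for every pixel whose intensity falls into that bin.                     */
typedef struct {
    uint64_t bin[NUM_BINS];
} hip_query;

/* Error score: the number of pixels whose observed bin was rarely seen during
 * training -- a handful of ANDs/ORs and one population count, no per-pixel loop. */
static inline int hip_error(const hip_feature *f, const hip_query *q)
{
    uint64_t bad = 0;
    for (int b = 0; b < NUM_BINS; ++b)
        bad |= f->rare[b] & q->bin[b];
    return __builtin_popcountll(bad);   /* GCC/Clang builtin */
}

Because the whole comparison reduces to a few word-wide operations and a single population count, scoring a query patch against thousands of database features fits comfortably within a real-time budget.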
This work is sponsored by The Boeing Company.
My contact details can be found on my home page.

CVPR Best Demo 2009

The demo presented at CVPR 2009, Robust Feature Matching in 2.3µs, received one of the "Best Demo" awards at the conference awards session!

Publications (most recent first)

Multiple Target Localisation at over 100 FPS

Authors: Simon Taylor and Tom Drummond
Date: September 2009
Publication: British Machine Vision Conference

Bibtex

@inproceedings{taylor_2009_multiple,
    title       =    "Multiple Target Localisation at over 100 FPS",
    author      =    "Simon Taylor and Tom Drummond",
    booktitle   =    "British Machine Vision Conference",
    year        =    "2009",
    month       =    "September",
    url         =    "http://mi.eng.cam.ac.uk/~sjt59/papers/taylor_2009_multiple.pdf",
    note        =    "Oral presentation",
}

Abstract

This paper presents a method for fast feature-based matching which enables 7 independent targets to be localised in a video sequence with an average total processing time of 7.46ms per frame. We extend recent work on fast matching using Histogrammed Intensity Patches (HIPs) by adding a rotation-invariant framework and a tree-based lookup scheme. Compared to state-of-the-art fast localisation schemes we achieve better matching robustness in under a quarter of the computation time while requiring 5-10 times less memory.
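As a rough illustration of what a rotation-invariant front end for patch-based features can look like, the C sketch below estimates a dominant orientation from the intensity centroid around the interest point and samples the patch in that rotated frame. This is a hypothetical sketch of the general idea only, not the specific rotation-invariant framework or tree-based lookup used in the paper.

#include <math.h>
#include <stdint.h>

/* Hypothetical: estimate a dominant orientation at (cx, cy) from the intensity
 * centroid of a circular region around the interest point.                     */
static double patch_orientation(const uint8_t *img, int stride,
                                int cx, int cy, int radius)
{
    double mx = 0.0, my = 0.0;
    for (int dy = -radius; dy <= radius; ++dy)
        for (int dx = -radius; dx <= radius; ++dx)
            if (dx * dx + dy * dy <= radius * radius) {
                double v = img[(cy + dy) * stride + (cx + dx)];
                mx += dx * v;
                my += dy * v;
            }
    return atan2(my, mx);   /* angle of the intensity centroid */
}

/* Sample an 8x8 patch along the estimated orientation so that the descriptor
 * built from it is rotation-normalised.  The caller must ensure the rotated
 * sample grid stays inside the image.                                          */
static void sample_rotated_patch(const uint8_t *img, int stride,
                                 int cx, int cy, double angle,
                                 int spacing, uint8_t out[8][8])
{
    double c = cos(angle), s = sin(angle);
    for (int v = 0; v < 8; ++v)
        for (int u = 0; u < 8; ++u) {
            double x = (u - 3.5) * spacing, y = (v - 3.5) * spacing;
            int px = cx + (int)lround(c * x - s * y);
            int py = cy + (int)lround(s * x + c * y);
            out[v][u] = img[py * stride + px];   /* nearest-neighbour sample */
        }
}

The rotation-normalised patch would then be quantised and scored in the same way as the bit-mask sketch earlier on this page.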

Downloads

Paper: PDF | PS.GZ | PS.BZ2
Extended Abstract: PDF | PS.GZ | PS.BZ2
Video (.mp4 - works with VLC): Multiple Target Localisation

Robust Feature Matching in 2.3µs

Authors: Simon Taylor, Edward Rosten and Tom Drummond
Date: June 2009
Publication: IEEE CVPR Workshop on Feature Detectors and Descriptors: The State Of The Art and Beyond

Bibtex

@inproceedings{taylor_2009_robust,
    title       =    "Robust feature matching in 2.3$\mu$s",
    author      =    "Simon Taylor and Edward Rosten and Tom Drummond",
    booktitle   =    "IEEE CVPR Workshop on Feature Detectors and Descriptors: The State Of The Art and Beyond",
    year        =    "2009",
    month       =    "June",
    url         =    "http://mi.eng.cam.ac.uk/~sjt59/papers/taylor_2009_robust.pdf",
}

Abstract

In this paper we present a robust feature matching scheme in which features can be matched in 2.3µs. For a typical task involving 150 features per image, this results in a processing time of 500µs for feature extraction and matching. In order to achieve very fast matching we use simple features based on histograms of pixel intensities and an indexing scheme based on their joint distribution. The features are stored with a novel bit mask representation which requires only 44 bytes of memory per feature and allows computation of a dissimilarity score in 20ns. A training phase gives the patch-based features invariance to small viewpoint variations. Larger viewpoint variations are handled by training entirely independent sets of features from different viewpoints.
A complete system is presented where a database of around 13,000 features is used to robustly localise a single planar target in just over a millisecond, including all steps from feature detection to model fitting. The resulting system shows comparable robustness to SIFT and Ferns while using a tiny fraction of the processing time, and in the latter case a fraction of the memory as well.
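The indexing scheme mentioned above is what keeps the number of full comparisons manageable for a database of around 13,000 features. Purely as an illustration (the key construction below is an assumption, not the joint-distribution index described in the paper), a query patch can be reduced to a small integer key so that only the database features filed under that key need the full bitwise error score:

#include <stdint.h>

#define INDEX_PIXELS 5      /* assumption: key built from the bins of 5 pixels */
#define NUM_BINS     5
#define NUM_BUCKETS  3125   /* 5^5 possible keys                               */

/* Hypothetical index key: a base-5 encoding of the quantised intensity bins of
 * a fixed subset of patch pixels.  Database features are filed into buckets by
 * this key at training time; at runtime only the features in the query's
 * bucket are given the full bitwise error score.                               */
static uint32_t index_key(const uint8_t pixel_bins[INDEX_PIXELS])
{
    uint32_t key = 0;
    for (int i = 0; i < INDEX_PIXELS; ++i)
        key = key * NUM_BINS + pixel_bins[i];
    return key;              /* in the range 0 .. NUM_BUCKETS - 1               */
}

Quantisation noise means a query may also need to probe a few neighbouring buckets, but even so only a small fraction of the database is touched per query patch.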

Downloads

Paper: PDF | PS.GZ | PS.BZ2
Slides from talk: OpenOffice.org (original) | Powerpoint | PDF
Reorganised slides used during demos: OpenOffice.org (original) | Powerpoint | PDF
Videos (mpeg4 AVIs): Details | Target with little texture
