Tom Drummond

I am a professor in the Department of Electrical and
Computer Systems Engineering at Monash University.


Research Topics:

Updated notes on Lie groups

The notes on Lie groups have been updated.  The theory chapter has been split into two chapters for Lie groups and projective geometry.  Sections on the Lie bracket and conics have been added.  Notes available here.
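
To give a flavour of the material, here is a minimal sketch (in Python with NumPy, written for this page rather than taken from the notes) of the exponential map from so(3) to SO(3) via the Rodrigues formula, together with the Lie bracket:

```python
import numpy as np

def skew(w):
    """Map a 3-vector to its skew-symmetric matrix (an element of so(3))."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    """Exponential map so(3) -> SO(3) via the Rodrigues formula."""
    theta = np.linalg.norm(w)
    W = skew(w)
    if theta < 1e-8:
        return np.eye(3) + W  # first-order approximation near the identity
    A = np.sin(theta) / theta
    B = (1.0 - np.cos(theta)) / theta**2
    return np.eye(3) + A * W + B * (W @ W)

def lie_bracket(a, b):
    """Lie bracket on so(3): [A, B] = AB - BA, which equals skew(a x b)."""
    A, B = skew(a), skew(b)
    return A @ B - B @ A
```

The closed-form Rodrigues expression avoids summing the matrix power series, and the small-angle branch keeps the map numerically stable near the identity.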

Australian Centre for Robotic Vision

The Australian Research Council have just awarded us $19M to establish a Centre of Excellence in Robotic Vision.  The centre will address some of the key challenges in enabling robots to use vision to operate in unstructured and dynamic environments alongside humans.

[ARC Announcement]

A Unified Rolling Shutter and Motion Blur Model for 3D Visual Registration (with Maxime Meilland and Andrew Comport)

This paper shows how rolling shutter and motion blur can be handled in a unified way within a SLAM framework by computing the predicted appearance (and derivatives) of each row in the image based on its individual exposure duration and 6 DoF motion.

[ICCV 2013 paper]
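
As an illustration of the idea (a simplified sketch, not the paper's full model), each image row can be assigned its own timestamp, and a per-frame motion can be scaled to each row under a constant-velocity assumption; here only the translational part of the 6 DoF motion is materialised:

```python
import numpy as np

def row_timestamps(n_rows, t_start, line_delay):
    """Under a rolling shutter, each image row is exposed at its own instant."""
    return t_start + np.arange(n_rows) * line_delay

def row_translations(xi, n_rows, t_start, line_delay, frame_time):
    """Constant-velocity model: the per-frame twist xi (6-vector) is scaled
    linearly to each row's timestamp, giving one pose per row.
    Only the translational part xi[:3] is returned for brevity."""
    t = row_timestamps(n_rows, t_start, line_delay)
    alpha = (t - t_start) / frame_time      # fraction of the frame elapsed per row
    return alpha[:, None] * xi[None, :3]    # per-row camera translation
```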

Multiview Image Compression and Transmission Techniques in Wireless Multimedia Sensor Networks: A Survey (with Max Wang and Ahmet Sekercioglu)

This paper presents a survey of recent research on multiview image compression and transmission techniques developed for Wireless Multimedia Sensor Networks (WMSNs). We classify the techniques into two categories according to the coding method adopted: (i) in-network processing with joint coding schemes, and (ii) distributed source coding schemes. The survey also includes a comprehensive evaluation of the limitations of each approach.

[ICDSC 2013 paper]

A Real-Time Distributed Relative Pose Estimation Algorithm for RGB-D Camera Equipped Visual Sensor Networks (with Max Wang and Ahmet Sekercioglu)

In this paper, we present a distributed, peer-to-peer algorithm for relative pose estimation in a network of mobile robots equipped with RGB-D cameras acting as a visual sensor network.  Our algorithm uses the depth information to estimate the relative pose of a robot when camera sensors mounted on different robots observe a common scene from different angles of view.

[ICDSC 2013 paper]

Reduced dimensionality EKF for SLAM (with Dinesh Gamage)

This paper presents a method for reducing the computational complexity of Kalman Filters where large numbers of dimensions have no process noise (e.g. in SLAM).  The method reduces the dimensionality of the filter by removing dimensions which have been accurately measured, retaining just the unknown dimensions in the filter.

[BMVC 2013 paper]
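
A toy sketch of the idea (illustrative only; the paper's reduction is more careful than a simple variance threshold): after a standard Kalman measurement update, dimensions whose variance has collapsed can be frozen and removed from the filter, shrinking subsequent update costs:

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update for state x with covariance P."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

def reduce_state(x, P, var_threshold):
    """Drop dimensions whose variance has fallen below a threshold:
    they are treated as accurately known and removed from the filter."""
    keep = np.diag(P) >= var_threshold
    return x[keep], P[np.ix_(keep, keep)], keep
```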

An Iterative 5-pt Algorithm for Fast and Robust Essential Matrix Estimation (with Vincent Lui)

This paper presents a novel algorithm for calculating epipolar geometry from 5 correspondences. The algorithm directly solves for the orientation of each camera relative to the baseline that separates them and is able to impose the half plane constraints that arise from the requirement that visible landmarks must be in front of both cameras. The algorithm is conceptually simple, and provides numerically stable solutions that are used as a hypothesis generator within RANSAC. It is significantly faster than existing methods and is comfortably able to provide frame-rate performance on real data.

[BMVC 2013 paper]
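
The verification half of such a pipeline can be sketched generically (this is the standard Sampson-error inlier test used with RANSAC, not the paper's solver itself):

```python
import numpy as np

def sampson_error(E, x1, x2):
    """First-order geometric error of the epipolar constraint x2^T E x1 = 0.
    x1, x2 are Nx3 arrays of normalised homogeneous image coordinates."""
    Ex1 = x1 @ E.T          # row i is E @ x1[i]
    Etx2 = x2 @ E           # row i is E^T @ x2[i]
    num = np.sum(x2 * Ex1, axis=1) ** 2
    den = Ex1[:, 0]**2 + Ex1[:, 1]**2 + Etx2[:, 0]**2 + Etx2[:, 1]**2
    return num / den

def count_inliers(E, x1, x2, threshold):
    """RANSAC verification: score a hypothesised E by its inlier count."""
    return int(np.sum(sampson_error(E, x1, x2) < threshold))
```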

Algorithmic methodologies for FPGA-based vision (with Yoong Kang Lim and Lindsay Kleeman)

This paper presents a methodology for developing computer vision algorithms for FPGAs and shows how the methodology can be implemented in two case studies.

[2012 Machine Vision and Applications paper]

Distributed visual processing for augmented reality (with Winston Yii and Wai Ho Li)

This paper presents a system which combines smartphones with networked infrastructure and fixed sensors and shows how these elements can be combined to deliver real-time augmented reality.  We use a Kinect to generate dynamic trackable models of the environment as it changes at video frame rate.

[ISMAR 2012 paper]

Corner matching refinement for monocular pose estimation (with Dinesh Gamage)

Many tasks in computer vision rely on accurate detection and matching of visual landmarks (e.g. image corners) between two images.  This paper presents a method for refining the coordinates of correspondences directly.  Thus, given some coordinates in the first image, our goal is to maximise the accuracy of the estimate of the corresponding coordinates in the second image for the same real world point, without being too concerned about which real world point is being matched.

[Draft BMVC 2012 paper]
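
A common building block for this kind of refinement (a generic illustration, not the method in the paper) is subpixel localisation of a matching-score peak by parabolic interpolation:

```python
import numpy as np

def subpixel_peak_1d(scores):
    """Refine the integer argmax of a 1-D matching-score profile to
    subpixel precision by fitting a parabola through the peak and its
    two neighbours and returning the vertex position."""
    i = int(np.argmax(scores))
    if i == 0 or i == len(scores) - 1:
        return float(i)  # peak at the border: no neighbours to fit
    l, c, r = scores[i - 1], scores[i], scores[i + 1]
    offset = 0.5 * (l - r) / (l - 2.0 * c + r)
    return i + offset
```

Applied along two axes (or to a 2-D correlation surface), the same trick recovers corner coordinates to a fraction of a pixel from integer-resolution matching scores.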

Transformative Reality: Improving bionic vision with robotic sensing (with Dennis Lui, Damien Browne, Lindsay Kleeman and Wai Ho Li)

Implanted visual prostheses provide bionic vision with very low spatial and intensity resolution when compared against healthy vision. Vision processing can make better use of the limited resolution by highlighting salient features such as edges. In this paper, we show how Transformative Reality extends and improves upon traditional vision processing in three ways.

[EMBC 2012 paper]

Robust egomotion estimation using ICP in inverse depth coordinates (with Dennis Lui, Titus Tang and Wai Ho Li)

This paper presents a 6 degrees of freedom egomotion estimation method using Iterative Closest Point (ICP) for low cost and low accuracy range cameras. Instead of Euclidean coordinates, the method uses inverse depth coordinates which better conforms to the error characteristics of raw sensor data. Extensive experiments were performed to evaluate different combinations of error metrics and parameters. The result is a real-time system that is accurate and robust across a variety of motion trajectories.
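
The coordinate change at the heart of the method can be sketched as follows (an illustrative conversion only; the paper's ICP formulation builds on top of it):

```python
import numpy as np

def to_inverse_depth(points):
    """Euclidean (x, y, z) -> inverse-depth (u, v, q) = (x/z, y/z, 1/z).
    Range-camera measurement noise is better behaved in q than in z."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    return np.stack([x / z, y / z, 1.0 / z], axis=1)

def from_inverse_depth(coords):
    """Inverse mapping: (u, v, q) -> (u/q, v/q, 1/q)."""
    u, v, q = coords[:, 0], coords[:, 1], coords[:, 2]
    return np.stack([u / q, v / q, 1.0 / q], axis=1)
```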

Visual localisation of a robot with an external RGBD sensor (with Winston Yii, Nalika Damayanthi and Wai Ho Li)

This paper presents a novel approach to visual localisation that uses a camera on the robot coupled wirelessly to an external RGB-D sensor. Unlike systems where an external sensor observes the robot, our approach merely assumes the robot's camera and the external sensor share a portion of their field of view. Experiments were performed using a Microsoft Kinect as the external sensor and a small mobile robot. The robot carries a smartphone, which acts as its camera, sensor processor, control platform and wireless link. Computational effort is distributed between the smartphone and a host PC connected to the Kinect. Experimental results show that the approach is accurate and robust in dynamic environments with substantial object movement and occlusions.  This work won the best student paper prize at ACRA 2011.

[ACRA 2011 paper]

eBug - an open robotics platform for teaching and research (with Nick D'Ademo, Dennis Lui, Wai Ho Li and Ahmet Sekercioglu)

The eBug is a low-cost and open robotics platform designed for undergraduate teaching and academic research in areas such as multimedia smart sensor networks, distributed control, mobile wireless communication algorithms and swarm robotics. The platform is easy to use, modular and extensible.

[ACRA 2011 paper]