Effortlessly and with great accuracy, humans recognize objects and scenes from visual impressions alone. Computer vision tries to imitate these abilities using a camera connected to a computer. For most tasks, however, it has proved difficult to match the performance of the human visual system. One such task is scene localisation, which is addressed in this project.
There are numerous applications: robot navigation, human-computer interfaces (for example, aids for the disabled) and multimedia applications (for example, image retrieval). Scene localisation is a key component of any autonomous system. Successful solutions have generally been achieved with laser, sonar or stereo vision range sensors. Traditional image-based methods have been limited to small-scale problems due to the lack of robust image descriptors that are discriminative enough to infer the camera position. The main difficulty is that small changes in illumination or viewpoint may drastically alter the appearance of an image. We propose several novel ways of dealing with this, which will enable scene localisation in large-scale environments.
Funded by the Swedish Research Council.
Principal Investigator: Fredrik Kahl
Last updated: 2012-01-13