3D Tracking in Extreme Environments
Reliable 3D tracking of poorly textured, specular objects is a very challenging task. This is a clear obstacle to the development of Robotics and Augmented Reality applications in industrial environments, where such objects can typically be found.
Our approach registers input frames captured by a standard, low-quality monocular camera in such extreme environments: we assume that a small set of reference images and a partial 3D model of the environment are available, and we register each input image by aligning it with one of the reference images using the 3D information.
Dense alignment is attractive because it globally exploits most of the image information, even when local image features such as interest points or edges are ambiguous. However, it typically relies directly on image intensities, which is prone to fail in the presence of non-Lambertian effects such as specularities, or when the objects lack distinctive texture. Moreover, a multi-scale approach is typically required for robust alignment, where low-pass filters are applied to the signals being aligned. When the signals are the image intensities, or a linear combination of them, low-pass filtering rapidly destroys information.
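To make the idea of dense alignment concrete, the sketch below estimates a 1D translation between two smoothed intensity signals by Gauss-Newton minimization of the sum of squared differences. This is a hypothetical, simplified illustration of the general principle (forward-additive Lucas-Kanade style), not the registration pipeline used in this project; the function name and parameters are our own.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d, shift as nd_shift

def dense_align_1d(ref, cur, t0=0.0, iters=20, sigma=3.0):
    """Estimate the translation t such that cur(x + t) ~ ref(x) by
    Gauss-Newton on the sum of squared intensity differences.
    A toy 1D illustration of dense photometric alignment."""
    ref_s = gaussian_filter1d(ref, sigma)   # low-pass both signals
    cur_s = gaussian_filter1d(cur, sigma)
    t = t0
    for _ in range(iters):
        warped = nd_shift(cur_s, -t, order=1, mode='nearest')  # cur(x + t)
        jac = np.gradient(warped)           # derivative of warp w.r.t. t
        r = ref_s - warped                  # photometric residual
        t += np.sum(jac * r) / (np.sum(jac * jac) + 1e-12)
    return t

# Synthetic example: the same smooth bump, shifted by 4 samples.
x = np.arange(200, dtype=float)
ref = np.exp(-0.5 * ((x - 100) / 8.0) ** 2)
cur = np.exp(-0.5 * ((x - 104) / 8.0) ** 2)
t_est = dense_align_1d(ref, cur)
```

Smoothing both signals before optimizing widens the basin of convergence, but on raw intensities it also washes out the very structure the residual depends on, which is the limitation addressed next.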
We introduce a more robust local descriptor in place of the image intensities, that we refer to as “Descriptor Fields” and that resolves these issues.
Our descriptor is computed from a small set of convolutional filters applied to the images, which makes it suitable for real-time applications. However, instead of relying on the simple linear transformation of the intensity signal produced by the convolutions, we apply a non-linear operation that separates the descriptors' positive values from the negative ones. This keeps Descriptor Fields discriminative even after low-pass filtering, so that large Gaussian kernels can be used to significantly broaden the region of convergence of the alignment optimization algorithms and enhance robustness.
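A minimal sketch of this construction is shown below, assuming first-order image gradients as the convolutional filters (the exact filter bank and channel layout here are assumptions, not necessarily those of the published method): each gradient channel is split into its positive and negative parts before Gaussian smoothing, so that opposite-signed lobes can no longer cancel each other under the blur.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def descriptor_fields(image, sigma=5.0):
    """Sketch of a 4-channel Descriptor Field: x/y gradients split into
    positive and negative parts, each blurred separately.
    (Hypothetical filter choice; an illustration, not the exact code.)"""
    kx = np.array([[-0.5, 0.0, 0.5]])        # horizontal derivative filter
    gx = convolve(image, kx, mode='nearest')
    gy = convolve(image, kx.T, mode='nearest')
    channels = []
    for g in (gx, gy):
        channels.append(np.maximum(g, 0.0))   # positive part
        channels.append(np.maximum(-g, 0.0))  # negative part
    # Blurring the non-negative channels preserves their energy, whereas
    # blurring the raw gradient lets its +/- lobes cancel out.
    return np.stack([gaussian_filter(c, sigma) for c in channels])
```

On a thin bright line, for instance, the raw gradient has adjacent positive and negative lobes that nearly annihilate under a large Gaussian kernel, while the separated channels retain a clear response, which is what keeps the smoothed descriptor informative at coarse scales.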
The code of our ISMAR 2014 demo is publicly available on GitHub.
The data we employed for our experiments can be downloaded from the dataset webpage.
This project is supported in part by the EDUSAFE European project.