Learning to Assign Orientations to Feature Points

Abstract

We show how to train a Convolutional Neural Network to assign a canonical orientation to feature points, given an image patch centered on the feature point. Our method improves feature point matching over the state of the art and can be used in conjunction with any existing rotation-sensitive descriptor. To avoid the tedious, and in practice almost impossible, task of defining a target orientation to learn, we propose to use a Siamese network, which implicitly finds the optimal orientations during training. We also propose a new type of activation function for Neural Networks that generalizes the popular ReLU, maxout, and PReLU activation functions and performs better for our task. We validate the effectiveness of our method extensively on four existing datasets, including two non-planar ones, as well as on our own dataset. We show that we outperform the state of the art without retraining for each dataset.
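To make the Siamese idea concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the architecture of OrientationNet, the rotate helper, and the squared descriptor distance are all illustrative assumptions. The point it demonstrates is that no target orientation is ever supplied: two patches of the same physical point go through the same orientation CNN, each patch is de-rotated by its own predicted angle, and the loss is simply the distance between the descriptors of the de-rotated patches.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class OrientationNet(nn.Module):
        """Predicts the patch orientation as a unit vector (cos t, sin t).
        NOTE: illustrative architecture, not the one from the paper."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1))
            self.head = nn.Linear(32, 2)

        def forward(self, patch):
            v = self.head(self.features(patch).flatten(1))
            return F.normalize(v, dim=1)  # stay on the unit circle

    def rotate(patch, cs):
        """Rotate each patch by the angle encoded as (cos, sin)."""
        c, s = cs[:, 0], cs[:, 1]
        zero = torch.zeros_like(c)
        theta = torch.stack([torch.stack([c, -s, zero], 1),
                             torch.stack([s, c, zero], 1)], 1)  # (B, 2, 3)
        grid = F.affine_grid(theta, patch.shape, align_corners=False)
        return F.grid_sample(patch, grid, align_corners=False)

    def siamese_loss(net, descriptor, patch_a, patch_b):
        """patch_a and patch_b show the same physical point from two views;
        descriptor is any (frozen) rotation-sensitive descriptor network."""
        da = descriptor(rotate(patch_a, net(patch_a)))
        db = descriptor(rotate(patch_b, net(patch_b)))
        return (da - db).pow(2).sum(1).mean()

Minimizing this loss pushes the two predicted orientations toward a consistent canonical frame, which is how the network can discover good orientations without ever being told what they are.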
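The abstract does not spell out the form of the new activation, but one family that subsumes ReLU, maxout, and PReLU at once is a signed sum of maxout groups (a generalized-hinging-hyperplanes construction). The sketch below is written under that assumption; the class name SignedSumOfMax and the parameters n_sums and n_max are hypothetical.

    import torch
    import torch.nn as nn

    class SignedSumOfMax(nn.Module):
        """Signed sum of maxout groups (assumed form):

            y_j = sum_{s=0}^{S-1} (-1)^s * max_m z[j, s, m]

        n_sums=1 recovers maxout; maxout with one linear term fixed to zero
        recovers ReLU; n_sums=2 with tied weights recovers
        PReLU(t) = max(t, 0) - a * max(-t, 0)."""
        def __init__(self, in_features, out_features, n_sums=2, n_max=4):
            super().__init__()
            self.linear = nn.Linear(in_features, out_features * n_sums * n_max)
            self.out_features = out_features
            self.n_sums, self.n_max = n_sums, n_max
            self.register_buffer(
                "signs", torch.tensor([(-1.0) ** s for s in range(n_sums)]))

        def forward(self, x):
            z = self.linear(x).view(
                -1, self.out_features, self.n_sums, self.n_max)
            return (z.max(dim=3).values * self.signs).sum(dim=2)  # (B, out)

For example, SignedSumOfMax(128, 64) maps a (batch, 128) tensor to (batch, 64). Because each output mixes positively and negatively signed maxima, the layer can represent non-convex piecewise-linear functions that a single maxout unit cannot.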

References

Learning to Assign Orientations to Feature Points

K. M. Yi, Y. Verdie, P. Fua, and V. Lepetit

In Computer Vision and Pattern Recognition (CVPR), Las Vegas, Nevada, USA, June 26-July 1, 2016, pp. 107-116. DOI: 10.1109/CVPR.2016.19.

Teaser Video

This teaser video shows image matching and Multi-View Stereo (MVS) results obtained with our orientation assignments. All results use the same Edge Foci (EF) feature points; only the orientation assignment varies. For image matching, we first show the image patches of the target image rotated back to the orientation of the reference image, using SIFT orientations and our orientations, together with their respective errors. We then show the resulting homographies with VGG descriptors. For MVS, we show the 3D models generated with and without our orientation assignments, using Daisy and VGG descriptors. Please see the paper for details.

Supplementary material

See the supplementary appendix for implementation details.