We show how to train a Convolutional Neural Network to assign a canonical orientation to a feature point, given an image patch centered on that point. Our method improves upon the state of the art in feature point matching and can be used in conjunction with any existing rotation-sensitive descriptor. To avoid the tedious and nearly impossible task of defining a target orientation to learn, we propose using a Siamese network that implicitly finds the optimal orientations during training. We also propose a new type of activation function for Neural Networks that generalizes the popular ReLU, maxout, and PReLU activation functions; this novel activation performs better for our task. We extensively validate the effectiveness of our method on four existing datasets, including two non-planar datasets, as well as on our own dataset. We show that we outperform the state of the art without the need to retrain for each dataset.
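To make the "generalizes ReLU, maxout, and PReLU" claim concrete, here is a simplified sketch of one way such a generalization can be viewed: an elementwise maximum over K affine pieces. The function name and the specific slope/intercept values below are illustrative, not taken from the paper; the paper's actual activation is more general (please refer to it for the exact form), but this reduced form already subsumes ReLU and PReLU as special cases.

```python
import numpy as np

def piecewise_linear_activation(x, slopes, intercepts):
    """Elementwise max over K affine pieces: h(x) = max_k (slopes[k] * x + intercepts[k]).

    Hypothetical simplified sketch of a max-of-affine activation; the
    paper's activation is more general than this reduced form.
    """
    pieces = np.stack([a * x + b for a, b in zip(slopes, intercepts)])
    return pieces.max(axis=0)

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])

# ReLU as a special case: max(0, x)
relu_out = piecewise_linear_activation(x, slopes=[0.0, 1.0], intercepts=[0.0, 0.0])

# PReLU (slope 0.25 on the negative side) as a special case: max(0.25*x, x)
prelu_out = piecewise_linear_activation(x, slopes=[0.25, 1.0], intercepts=[0.0, 0.0])
```

With learnable slopes and intercepts, the network can interpolate between these fixed activations rather than committing to one of them up front.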
This teaser video shows image matching and Multi-View Stereo (MVS) application results with our orientation assignments. All results use the same Edge Foci (EF) feature points, varying only the orientation assignments. For image matching, we first show the image patches of the target image rotated back to the orientation of the reference image, using SIFT orientations and ours, together with their respective errors. We then show the resulting homographies with VGG descriptors. For MVS, we show the 3D models generated with and without our orientation assignments, using Daisy and VGG descriptors. Please see the paper for details.
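The "rotated back to the orientation of the reference image" step above can be sketched as follows. This is a toy illustration, not the paper's pipeline: it is restricted to orientations that are multiples of 90 degrees so that pure NumPy suffices; a real implementation would use interpolation (e.g. `cv2.warpAffine`) for arbitrary angles, and the helper name is our own.

```python
import numpy as np

def undo_orientation(patch, angle_deg):
    """Rotate a square patch back by its estimated orientation.

    Toy sketch (hypothetical helper): only multiples of 90 degrees are
    supported, so np.rot90 can be used without interpolation.
    """
    k = int(round(angle_deg / 90.0)) % 4
    return np.rot90(patch, k)

# Toy example: the target patch is the reference patch rotated 90 deg clockwise.
reference = np.arange(16, dtype=float).reshape(4, 4)
target = np.rot90(reference, -1)

# If the estimated orientation is 90 deg, undoing it recovers the reference,
# so a rotation-sensitive descriptor computed on both patches would now match.
aligned = undo_orientation(target, 90.0)
error_before = np.linalg.norm(target - reference)
error_after = np.linalg.norm(aligned - reference)
```

A wrong orientation estimate leaves `error_after` large, which is exactly the per-patch error visualized in the video.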
Click the following link for the supplementary appendix, which contains implementation details.
Datasets and Code
Datasets used in the paper.
Code for testing the learned orientation estimator
Code for the evaluation framework