Flying Object Detection from a Single Moving Camera

We propose an approach to detect flying objects such as UAVs and aircraft when they occupy a small portion of the field of view, possibly move against complex backgrounds, and are filmed by a camera that itself moves.

Detecting a small drone against a complex moving background. (Left) It is almost invisible to the human eye and hard to detect from a single image. (Right) Yet, our algorithm can find it by using motion cues.

Solving such a difficult problem requires combining both appearance and motion cues. To this end we propose a regression-based approach to motion stabilization of local image patches that allows us to achieve effective classification on spatio-temporal image cubes and outperform state-of-the-art techniques.
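The spatio-temporal image cubes mentioned above are simply stacks of patches cropped from consecutive frames. The following is a minimal sketch of building such an st-cube; the function name, patch size, and toy data are illustrative, not the authors' API.

```python
import numpy as np

def extract_st_cube(frames, center, patch_size=40):
    """Stack patches cropped at the same image location across
    consecutive frames into a spatio-temporal cube of shape (T, H, W).

    `frames` is a list of 2-D grayscale arrays; `center` is (row, col).
    """
    half = patch_size // 2
    r, c = center
    return np.stack([f[r - half:r + half, c - half:c + half] for f in frames])

# Toy example: 4 frames of 100x100 noise, cropped around one location.
rng = np.random.default_rng(0)
frames = [rng.random((100, 100)) for _ in range(4)]
cube = extract_st_cube(frames, center=(50, 50))
features = cube.reshape(-1)  # flattened st-cube fed to a classifier
```

A classifier trained on such cubes sees both appearance (within each patch) and motion (across the temporal axis), which is why the compensation step described next matters.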

As the problem is relatively new, we collected two challenging datasets of UAVs and aircraft, which can be used as benchmarks for flying object detection and vision-guided collision avoidance.

Motion Compensation

To make use of temporal information while avoiding bias from the variety of motions an aircraft can undergo, we propose a learning-based method to compensate for its apparent motion. This reduces the in-class variation of the data used to train the detection framework.
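The idea can be sketched as follows: a regressor predicts the object's offset from the patch center in every frame, and each patch is shifted to cancel that offset. Here a simple intensity centroid stands in for the learned regressor (the paper trains a CNN for this); all names are illustrative.

```python
import numpy as np

def predict_shift(patch):
    """Stand-in for the learned regressor: estimate the object's
    (dy, dx) offset from the patch center via the intensity centroid.
    The actual method uses a trained CNN regressor instead."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    m = patch.sum()
    cy, cx = (ys * patch).sum() / m, (xs * patch).sum() / m
    return int(round(cy - h / 2)), int(round(cx - w / 2))

def compensate(cube):
    """Shift every patch in the st-cube so the object sits at the
    center, reducing the in-class motion variation seen downstream."""
    out = []
    for patch in cube:
        dy, dx = predict_shift(patch)
        out.append(np.roll(patch, (-dy, -dx), axis=(0, 1)))
    return np.stack(out)
```

After compensation, the object stays roughly stationary within the cube, so the classifier only needs to model appearance plus residual motion rather than every possible trajectory.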


Compensating for the apparent motion of different flying objects inside the st-cube (a sequence of patches) decreases the in-class variation of the data used by the machine learning algorithms. For each st-cube, we also provide three graphs: the blue dots in the first graph indicate the locations of the center of the drone throughout the st-cube, and the red cross indicates the patch center. The next two graphs plot the variations of the x and y coordinates of the center of the drone, respectively, relative to the position of the center of the patch. Our method keeps the drone close to the center even for complicated backgrounds and when the drone is barely recognizable, as in the right column.

Results

Applying CNN-based motion compensation to the patches

In each pair:

  • Left image: the original patch, given as input to the CNN regressor
  • Right image: the resulting patch, after motion compensation is applied
(Left) UAV examples. (Right) Aircraft examples.

Detection results on the video from the UAV dataset

We can see that the appearance of the object changes considerably due to varying illumination and noise coming from the background. Nevertheless, our approach is able to detect it in most cases.


Detection results on the video from the Aircraft dataset with scale adjustment

  • Top left: Scale- and motion-adjusted detection of the aircraft in a video sequence.
  • Top right: Projection of the points of the 3D trajectory over the previous 20 frames onto the image plane.
  • Bottom left: Changes in the distance to the aircraft (provided that the size of the object is known and the camera is calibrated).
  • Bottom right: The trajectory of the object in 3D space is quite smooth thanks to the motion compensation algorithm, even though neither tracking nor additional smoothing is applied.

Here we use the CNN regressor to adjust both the spatial position and the scale of the aircraft, which leads to precise localization in 3D space. We can see that the algorithm correctly estimates the size of the object in most cases, which allows the whole framework to detect aircraft at a broad variety of scales.
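The distance and 3D-trajectory computations above follow the standard pinhole-camera model: an object of known physical size S observed at s pixels with focal length f lies at depth Z = f·S/s, and the detection can then be back-projected to a 3D point. A minimal sketch, with illustrative numbers (the intrinsics and sizes below are not from the paper):

```python
import numpy as np

def distance_from_size(focal_px, real_size_m, size_px):
    """Pinhole-camera distance estimate Z = f * S / s, assuming the
    object's physical size is known and the camera is calibrated."""
    return focal_px * real_size_m / size_px

def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift a detection at pixel (u, v) and estimated depth to a 3-D
    point in the camera frame, using intrinsics (fx, fy, cx, cy)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Example: a 10 m wingspan spanning 25 px, focal length 800 px.
z = distance_from_size(800.0, 10.0, 25.0)  # 320.0 m
p = backproject(400.0, 300.0, z, 800.0, 800.0, 320.0, 240.0)
```

Because the CNN regressor refines both position and scale per frame, the per-frame depth estimates vary smoothly, which is why the resulting 3D trajectory is smooth without any explicit tracking.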

Datasets

Data and code are available at https://drive.switch.ch/index.php/s/3b3bdbd6f8fb61e05d8b0560667ea992.

Please contact Artem Rozantsev, should you have any questions on how to use the code.

References

Detecting Flying Objects using a Single Moving Camera

A. Rozantsev; V. Lepetit; P. Fua 

IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI). 2017. Vol. 39, p. 879-892. DOI: 10.1109/TPAMI.2016.2564408.

Flying Objects Detection from a Single Moving Camera

A. Rozantsev; V. Lepetit; P. Fua 

2015. Conference on Computer Vision and Pattern Recognition (CVPR), Boston, Massachusetts, USA, June 7-12, 2015. p. 4128-4136. DOI: 10.1109/CVPR.2015.7299040.