Reinforcement learning for visual object detection

Research output: Chapter in Book/Report/Conference proceeding › Paper in conference proceeding


One of the most widely used strategies for visual object detection is exhaustive spatial hypothesis search. While methods such as sliding windows have been successful and effective for many years, they remain brute-force, independent of both the image content and the visual category being searched for. In this paper we present principled sequential models that accumulate evidence collected at a small set of image locations in order to detect visual objects effectively. By formulating sequential search as reinforcement learning of the search policy (including the stopping condition), our fully trainable model can explicitly balance, for each class, the conflicting goals of exploration (sampling more image regions for better accuracy) and exploitation (stopping the search efficiently once sufficiently confident about the target's location). The methodology is general and applicable to any detector response function. We report encouraging results on the PASCAL VOC 2012 object detection test set, showing that the proposed methodology achieves almost two orders of magnitude speed-up over sliding-window methods.
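The abstract's core idea, sequentially fixating a few image regions and stopping once the accumulated evidence is sufficient, can be sketched as follows. This is a minimal illustrative sketch, not the paper's trained model: the region-selection policy and stopping condition are learned via reinforcement learning in the paper, whereas here both are replaced by hypothetical stand-ins (a fixed visitation order and a confidence threshold), and `score_region` stands in for an arbitrary detector response function.

```python
def sequential_detect(score_region, regions, stop_threshold=0.9, max_fixations=10):
    """Sequentially sample image regions, accumulate detector evidence,
    and stop early once sufficiently confident about the target's location.

    Illustrative sketch: in the paper both the region-selection policy and
    the stopping condition are learned; here they are simple placeholders.
    """
    best_score, best_region = 0.0, None
    evidence = []  # accumulated (region, score) observations
    for step in range(min(max_fixations, len(regions))):
        region = regions[step]          # stand-in for a learned search policy
        score = score_region(region)    # any detector response function
        evidence.append((region, score))
        if score > best_score:
            best_score, best_region = score, region
        # Stand-in for the learned stopping condition: stop when confident.
        if best_score >= stop_threshold:
            break
    return best_region, best_score, len(evidence)


# Hypothetical usage: four candidate regions with precomputed detector scores.
regions = ["r0", "r1", "r2", "r3"]
scores = {"r0": 0.2, "r1": 0.95, "r2": 0.5, "r3": 0.1}
best, score, n_fixations = sequential_detect(lambda r: scores[r], regions)
```

In this toy run the search stops after two fixations instead of scoring all four regions, which is the exploration/exploitation trade-off the abstract describes: each extra fixation can improve accuracy, while stopping early saves detector evaluations.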


External organisations
  • University of Toronto
  • Institute of Mathematics of the Romanian Academy
Research areas and keywords

Subject classification (UKÄ)

  • Computer Vision and Robotics (Autonomous Systems)
Original language: English
Title of host publication: 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016
Publisher: IEEE Computer Society
Number of pages: 9
ISBN (Electronic): 9781467388511
Publication status: Published - 2016
Publication category: Research
Event: 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, United States
Duration: 2016 Jun 26 – 2016 Jul 1


Conference: 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016
Country: United States
City: Las Vegas