Publications

Training Multi-Object Detector by Estimating Bounding Box Distribution for Input Image

Cited 0 times in Web of Science; cited 5 times in Scopus
Authors
Yoo, Jaeyoung; Lee, Hojun; Chung, Inseop; Seo, Geonseok; Kwak, Nojun

Issue Date
2021-10
Publisher
IEEE
Citation
2021 IEEE/CVF International Conference on Computer Vision (ICCV 2021), pp. 3417-3426
Abstract
In multi-object detection using neural networks, the fundamental problem is: how should the network learn a variable number of bounding boxes across different input images? Previous methods train a multi-object detection network through a procedure that directly assigns the ground-truth bounding boxes to specific locations of the network's output. However, this procedure makes the training of a multi-object detection network overly heuristic and complicated. In this paper, we reformulate the multi-object detection task as a problem of density estimation of bounding boxes. Instead of assigning each ground truth to specific locations of the network's output, we train the network by estimating the probability density of bounding boxes in an input image using a mixture model. For this purpose, we propose a novel network for object detection, the Mixture Density Object Detector (MDOD), together with the corresponding objective function for density-estimation-based training. We applied MDOD to the MS COCO dataset. Our proposed method not only addresses multi-object detection from a new perspective but also improves detection performance.
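
As a concrete illustration of the density-estimation view described in the abstract, the sketch below shows how a detection head could emit the parameters of a K-component mixture over 4-D box coordinates and be trained by minimizing the negative log-likelihood of the ground-truth boxes under that mixture. This is a minimal sketch assuming PyTorch; the function mixture_nll and all variable names here are hypothetical, not taken from the paper, and Gaussian components are used only for simplicity, whereas MDOD defines its own mixture model and objective.

import math
import torch
import torch.nn.functional as F

def mixture_nll(pi_logits, mu, log_sigma, gt_boxes):
    """NLL of ground-truth boxes under a diagonal Gaussian mixture.

    pi_logits: (K,)   unnormalized mixture weights
    mu:        (K, 4) component means over (cx, cy, w, h)
    log_sigma: (K, 4) per-coordinate log standard deviations
    gt_boxes:  (N, 4) ground-truth boxes of one image
    """
    log_pi = F.log_softmax(pi_logits, dim=0)                        # (K,)
    # log N(x | mu_k, sigma_k), summed over the 4 box coordinates
    z = (gt_boxes[:, None, :] - mu[None, :, :]) / log_sigma.exp()   # (N, K, 4)
    log_comp = (-0.5 * z.pow(2) - log_sigma
                - 0.5 * math.log(2 * math.pi)).sum(-1)              # (N, K)
    # Marginalize over mixture components, then average over boxes.
    # No box-to-output assignment is needed: each ground-truth box
    # contributes only through the density of the mixture.
    return -torch.logsumexp(log_pi[None, :] + log_comp, dim=1).mean()

# Example: K = 8 mixture components, 3 ground-truth boxes in one image
pi_logits = torch.randn(8, requires_grad=True)
mu = torch.rand(8, 4, requires_grad=True)
log_sigma = torch.zeros(8, 4, requires_grad=True)
loss = mixture_nll(pi_logits, mu, log_sigma, torch.rand(3, 4))
loss.backward()

The point of this formulation, as the abstract notes, is that the heuristic step of assigning each ground-truth box to a specific output location disappears: the loss depends only on the likelihood of the observed boxes under the predicted density.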
ISSN
1550-5499
URI
https://hdl.handle.net/10371/205608
DOI
https://doi.org/10.1109/ICCV48922.2021.00342
Files in This Item:
There are no files associated with this item.
Appears in Collections:

  • Graduate School of Convergence Science & Technology
  • Department of Intelligence and Information

Related Researcher

Research Area: Feature Selection and Extraction, Object Detection, Object Recognition

Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.
