Learning Background Subtraction by Video Synthesis and Multi-scale Recurrent Networks
Cited 8 times in Web of Science
Cited 10 times in Scopus
- Authors
- Issue Date
- 2019-12
- Publisher
- SPRINGER INTERNATIONAL PUBLISHING AG
- Citation
- COMPUTER VISION - ACCV 2018, PT VI, Vol.11366, pp.357-372
- Abstract
- This paper addresses moving object segmentation in videos, i.e., Background Subtraction (BGS), using a deep network. The proposed structure learns temporal associativity without losing spatial information by using convolutional Long Short-Term Memory (LSTM). It learns spatial relations by forming receptive fields of various sizes through multi-scale recurrent networks. The most serious problem in training the proposed network is that a sufficient number of pixel-level labeled video datasets is very difficult to find or create. To overcome this limitation, we generate many training frames by combining annotated foreground objects from available datasets with the background of the target video. The contribution of this paper is the first multi-scale recurrent network for BGS, which works well for many kinds of surveillance videos and achieves the best performance on CDnet 2014, a benchmark widely used for BGS evaluation.
- ISSN
- 0302-9743
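The training-data synthesis described in the abstract — compositing annotated foreground objects onto the target video's background to obtain pixel-level labeled frames — can be sketched with NumPy. This is an illustrative sketch only; the function name, arguments, and toy sizes are assumptions, not the authors' code:

```python
import numpy as np

def synthesize_frame(background, foreground, mask, top, left):
    """Paste an annotated foreground object onto a target-video background.

    Returns a synthetic training frame plus its pixel-level BGS label.
    `mask` is the object's binary annotation (H x W); `top`/`left` place
    the object inside the background. Names here are hypothetical.
    """
    frame = background.copy()
    label = np.zeros(background.shape[:2], dtype=np.uint8)
    h, w = mask.shape
    # View into the destination region; in-place writes modify `frame`.
    region = frame[top:top + h, left:left + w]
    region[mask > 0] = foreground[mask > 0]
    # Label exactly the pasted foreground pixels as "moving object".
    label[top:top + h, left:left + w] = (mask > 0).astype(np.uint8)
    return frame, label

# Toy example: 8x8 gray background, 3x3 white object pasted at (2, 3).
bg = np.full((8, 8, 3), 128, dtype=np.uint8)
fg = np.full((3, 3, 3), 255, dtype=np.uint8)
m = np.ones((3, 3), dtype=np.uint8)
frame, label = synthesize_frame(bg, fg, m, top=2, left=3)
```

Repeating this with many object/position/background combinations yields arbitrarily many labeled frames whose backgrounds match the target video, which is the key to training the network without large annotated video datasets.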