Future video synthesis with object motion prediction

Cited 38 times in Web of Science; cited 52 times in Scopus
Authors

Wu, Yue; Gao, Rongrong; Park, Jaesik; Chen, Qifeng

Issue Date
2020
Publisher
IEEE
Citation
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 5538-5547
Abstract
We present an approach to predicting future video frames given a sequence of consecutive past video frames. Instead of synthesizing images directly, our approach is designed to understand complex scene dynamics by decoupling the background scene from the moving objects. The future appearance of the scene components is predicted by non-rigid deformation of the background and affine transformation of the moving objects. The anticipated appearances are then combined to synthesize a plausible future video. With this procedure, our method exhibits far fewer tearing and distortion artifacts than other approaches. Experimental results on the Cityscapes and KITTI datasets show that our model outperforms the state of the art in terms of visual quality and accuracy.
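
The sketch below is not the authors' implementation; it is a minimal illustration of the compositing idea the abstract describes: warp the background with a dense, non-rigid flow field, move each object with a predicted affine transform, and blend the two with the object's mask. The function name `composite_next_frame` and all inputs (flow field, affine matrix, mask) are hypothetical placeholders standing in for the outputs of the paper's prediction networks.

```python
import cv2
import numpy as np

def composite_next_frame(background, flow, obj, obj_mask, affine):
    """Predict one future frame from decoupled scene components.

    background: HxWx3 uint8 image of the background scene.
    flow:       HxWx2 float32 backward flow (non-rigid deformation).
    obj:        HxWx3 uint8 image containing the moving object.
    obj_mask:   HxW float32 soft mask of the object (1 = object).
    affine:     2x3 float32 affine matrix predicted for the object.
    """
    h, w = background.shape[:2]

    # Non-rigid deformation of the background: sample each output pixel
    # from the location the flow field points to.
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    warped_bg = cv2.remap(background,
                          grid_x + flow[..., 0],
                          grid_y + flow[..., 1],
                          interpolation=cv2.INTER_LINEAR)

    # Affine transformation of the moving object and its mask.
    warped_obj = cv2.warpAffine(obj, affine, (w, h))
    warped_mask = cv2.warpAffine(obj_mask, affine, (w, h))[..., None]

    # Blend the predicted components into a single plausible future frame.
    frame = warped_mask * warped_obj + (1.0 - warped_mask) * warped_bg
    return frame.astype(np.uint8)
```

In the paper, multiple moving objects would each carry their own transform and mask, so this blend would be repeated per object; the single-object case above is kept for brevity.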
ISSN
1063-6919
URI
https://hdl.handle.net/10371/201307
DOI
https://doi.org/10.1109/CVPR42600.2020.00558
Files in This Item:
There are no files associated with this item.
Appears in Collections:

Related Researcher

  • College of Engineering
  • Dept. of Computer Science and Engineering
Research Area: Computer Graphics, Computer Vision, Machine Learning, Robotics

