Matching Video Net: Memory-based embedding for video action recognition

Cited 4 times in Web of Science; cited 6 times in Scopus
Authors

Kim, Daesik; Lee, Myunggi; Kwak, Nojun

Issue Date
2017
Publisher
IEEE
Citation
2017 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), pp.432-438
Abstract
Most recent successful research on action recognition is based on deep learning architectures. Nonetheless, training deep neural networks is notorious for requiring huge amounts of data, and insufficient data can lead to an overfitted model. In this work, we propose a novel model, the matching video net (MVN), which can be trained with a small amount of data. To avoid overfitting, we use a non-parametric setup on top of parametric networks with external memories. An input video clip is transformed into an embedding space and matched against the memorized samples in that space. The similarities between the input and the memorized data are then measured to determine the nearest neighbors. We perform experiments in a supervised manner on action recognition datasets, achieving state-of-the-art results. Moreover, we apply our model to one-shot learning problems with a novel training strategy. Our model achieves surprisingly good results in predicting unseen action classes from only a few examples.
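
The matching step described in the abstract can be read as a soft nearest-neighbor lookup over an external memory, in the spirit of matching networks. The sketch below is illustrative only: the cosine metric, the softmax weighting, and all function and variable names are assumptions chosen for exposition, not the paper's exact formulation, and the embedding network that would produce the clip features is omitted.

    import numpy as np

    def matching_predict(query_emb, memory_embs, memory_labels, num_classes):
        # Cosine similarity between the embedded query clip and each memorized clip.
        q = query_emb / np.linalg.norm(query_emb)
        m = memory_embs / np.linalg.norm(memory_embs, axis=1, keepdims=True)
        sims = m @ q
        # Softmax over similarities: a soft nearest-neighbor weighting.
        attn = np.exp(sims - sims.max())
        attn /= attn.sum()
        # Blend the one-hot labels of the memorized samples by attention weight.
        onehot = np.eye(num_classes)[memory_labels]
        return attn @ onehot

    # Toy usage: 5 memorized clip embeddings (8-dim), 3 hypothetical action classes.
    rng = np.random.default_rng(0)
    memory = rng.normal(size=(5, 8))
    labels = np.array([0, 1, 2, 0, 1])
    query = rng.normal(size=8)
    print(matching_predict(query, memory, labels, num_classes=3))

Under this reading, classification reduces to comparing embeddings rather than fitting class-specific parameters, which is what lets the approach extend to unseen classes from only a few memorized examples.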
ISSN
2161-4393
URI
https://hdl.handle.net/10371/206804
Appears in Collections:

  • Graduate School of Convergence Science & Technology
  • Department of Intelligence and Information

Related Researcher

Research Area: Feature Selection and Extraction, Object Detection, Object Recognition

Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.
