Robust Subspace Learning and Clustering: Sparse and Low-Rank Representations

Authors

Eunwoo Kim (김은우)

Advisor
Songhwai Oh (오성회)
Major
Department of Electrical and Computer Engineering, College of Engineering
Issue Date
2017-02
Publisher
Graduate School, Seoul National University
Keywords
subspace representation; low-rank representation; subspace learning; computer vision
Description
Thesis (Ph.D.) -- Graduate School, Seoul National University: Department of Electrical and Computer Engineering, February 2017. Advisor: Songhwai Oh (오성회).
Abstract
Learning a subspace structure based on sparse or low-rank representation has gained much attention over the past decade and has been widely used in the machine learning, signal processing, computer vision, and robotics literature to model a wide range of natural phenomena. Sparse representation is a powerful tool for high-dimensional data such as images, where the goal is to represent or compress cumbersome data using a few representative samples. Low-rank representation generalizes sparse representation to two-dimensional data (matrices). Behind these successful outcomes, much effort has been devoted to learning sparse or low-rank representations efficiently. However, existing methods are still inefficient for complex data structures and lack robustness in the presence of various corruptions, including outliers and missing data, because many existing algorithms relax the ideal optimization problem to a tractable one without considering computational and memory complexity. Thus, it is important to use a representation algorithm that is efficiently solvable and robust against unwanted corruption. In this dissertation, our main goal is to develop representation learning algorithms that are both robust and efficient in noisy environments.
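For reference, the two canonical problems underlying this work can be stated as follows. These are the standard textbook formulations in generic notation (y, D, Y, L are placeholders, not the dissertation's symbols); the chapters study robust and efficient variants of both.

```latex
% Sparse representation: approximate a signal y by a few columns (atoms)
% of a dictionary D.
\min_{x}\ \|x\|_0 \quad \text{s.t.} \quad \|y - Dx\|_2 \le \epsilon

% Low-rank representation/approximation: recover a low-rank matrix L
% from a (possibly corrupted) data matrix Y.
\min_{L}\ \operatorname{rank}(L) \quad \text{s.t.} \quad \|Y - L\|_F \le \epsilon
```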
As for sparse representation, most optimization problems are relaxed to convex ones based on surrogate measures, such as the l1-norm, to resolve the computational intractability and high noise sensitivity of the original sparse representation problem based on the l0-norm. However, if the system of interest, apart from the sparsity measure, is inherently nonconvex, then a convex sparsity measure may not be the best choice for such problems. From this perspective, we propose criteria that a good nonconvex sparsity measure should satisfy and suggest a corresponding family of measures. The proposed family admits a simple measure that enables efficient computation and embraces the benefits of both the l0- and l1-norms; most importantly, its gradient vanishes slowly, unlike that of the l0-norm, which is desirable from an optimization perspective.
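As one concrete illustration of these properties, consider the well-known log-sum penalty (an example of a nonconvex sparsity surrogate from the broader literature, not necessarily the family proposed in this dissertation). It interpolates between l0-like behavior (small epsilon) and l1-like behavior (large epsilon), and its gradient decays slowly rather than vanishing outright:

```latex
% Log-sum penalty with parameter eps > 0, normalized so phi(x) -> ||x||_0
% as eps -> 0 for entries with |x_i| <= 1.
\phi_{\epsilon}(x) \;=\; \sum_{i} \frac{\log\!\left(1 + |x_i|/\epsilon\right)}{\log\!\left(1 + 1/\epsilon\right)},
\qquad
\frac{\partial \phi_{\epsilon}}{\partial |x_i|} \;=\; \frac{1}{\left(\epsilon + |x_i|\right)\log\!\left(1 + 1/\epsilon\right)} .
```

Unlike the l0-norm, whose gradient is zero almost everywhere, such a penalty keeps a nonzero, slowly decaying gradient, which is what makes first-order optimization practical.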
For low-rank representation, we first present an efficient l1-norm-based low-rank matrix approximation algorithm using the proposed alternating rectified gradient methods, since conventional algorithms are very slow at solving the l1-norm-based alternating minimization problem. The proposed methods seek an optimal direction under a constraint that limits the search domain, avoiding the difficulty that arises from the ambiguity in representing the two optimization variables. We extend this to an algorithm with an explicit smoothness regularizer and an orthogonality constraint for better efficiency, and solve it within the augmented Lagrangian framework. To obtain a more stable solution with flexible rank estimation in the presence of heavy corruption, we present a new solution based on elastic-net regularization of singular values, which admits a faster algorithm than existing rank minimization methods, requires no heavy operations, and is more stable than state-of-the-art low-rank approximation algorithms thanks to its strong convexity. As a result, the proposed method leads to a holistic approach that enables both rank minimization and bilinear factorization. Moreover, as an extension of the previous methods, which operate on unstructured matrices, we apply recent advances in rank minimization to structured matrices for robust kernel subspace estimation in noisy scenarios.
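In generic notation, the two objectives discussed above take roughly the following forms (the l1 data-fit term, the lambda parameters, and the factor sizes are illustrative placeholders, not the dissertation's exact formulations):

```latex
% l1-norm low-rank matrix factorization, solved by alternating over U and V;
% the l1 loss gives robustness to outliers but is slow for naive solvers.
\min_{U \in \mathbb{R}^{m \times r},\; V \in \mathbb{R}^{n \times r}} \ \left\| X - U V^{\top} \right\|_{1}

% Elastic-net regularization of the singular values sigma_i(L); the quadratic
% term makes the regularizer strongly convex, stabilizing rank estimation.
% Note the identity: sum_i sigma_i(L) = ||L||_*, sum_i sigma_i(L)^2 = ||L||_F^2.
\min_{L} \ \left\| X - L \right\|_{1}
  + \lambda_{1} \sum_{i} \sigma_{i}(L)
  + \frac{\lambda_{2}}{2} \sum_{i} \sigma_{i}(L)^{2}
\;=\; \min_{L} \ \left\| X - L \right\|_{1} + \lambda_{1} \|L\|_{*} + \frac{\lambda_{2}}{2} \|L\|_{F}^{2}
```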
Last but not least, we extend the low-rank approximation problem, which assumes a single subspace, to one in which the data lie in a union of multiple subspaces, a setting closely related to subspace clustering. While many recent studies are based on sparse or low-rank representation, the grouping effect among similar samples has rarely been considered alongside such representations. Thus, we propose robust group subspace clustering algorithms based on sparse and low-rank representation with explicit subspace grouping. To resolve the fundamental computational complexity issue of existing subspace clustering algorithms, we suggest a fully scalable low-rank subspace clustering approach that achieves linear complexity in the number of samples. Extensive experimental results on various applications, including computer vision and robotics, using benchmark and real-world data sets verify that our solutions to the existing issues with sparse and low-rank representations are considerably robust, effective, and practically applicable.
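To make the self-expressiveness idea behind subspace clustering concrete, here is a minimal sketch, not the dissertation's algorithm: it uses a ridge-regularized self-expression with a closed-form solution, then spectral clustering on the induced affinity. A naive implementation like this costs O(n^3) in the number of samples n, which is precisely the scalability bottleneck that a linear-complexity approach must avoid.

```python
# Minimal self-expressive subspace clustering sketch (illustrative only).
# Assumes the columns of X are samples drawn from a union of subspaces.
import numpy as np
from sklearn.cluster import SpectralClustering

def subspace_cluster(X, n_clusters, lam=1e-2):
    """Cluster the columns of X (d x n) into n_clusters subspaces."""
    n = X.shape[1]
    G = X.T @ X
    # Ridge-regularized self-expression:
    #   C = argmin_C ||X - X C||_F^2 + lam * ||C||_F^2
    # which has the closed form (X^T X + lam I)^{-1} X^T X.
    C = np.linalg.solve(G + lam * np.eye(n), G)
    np.fill_diagonal(C, 0.0)        # heuristic: drop trivial self-representation
    W = np.abs(C) + np.abs(C).T     # symmetric affinity from the coefficients
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(W)

# Usage: labels = subspace_cluster(X, n_clusters=3)
```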
Language
English
URI
https://hdl.handle.net/10371/119249