Publications

Detailed Information

Sparse Learning Models and Their Applications to Financial Technologies : 축약 학습 방법론과 금융 기술 문제에의 적용

Authors

손영두

Advisor
이재욱
Major
College of Engineering, Department of Industrial Engineering and Naval Architecture
Issue Date
2015-08
Publisher
Graduate School, Seoul National University
Keywords
clustering; active learning; sparse Bayesian; financial technology
Description
Thesis (Ph.D.) -- Graduate School, Seoul National University: Department of Industrial Engineering and Naval Architecture, August 2015. Advisor: 이재욱.
Abstract
As the era of big data arrives, more efficient algorithms, in terms of both computation time and storage, are required for data analysis. Sparse learning models satisfy these requirements while retaining the ability of existing learning models to describe the data distribution well, and they have therefore been studied extensively since the mid-2000s. Moreover, as advanced data storage techniques have been adopted in several business areas, including finance, these sparse models offer the possibility of building models that are more accurate and efficient than existing parametric ones.

In this dissertation, we develop two novel sparse learning models using a kernel method and the automatic relevance determination prior. Several learning models, both sparse and non-sparse, are then applied to two applications related to financial technology.

The first model is a sparse support-based clustering model whose support function is derived from the variance function of Gaussian process (GP) regression with an automatic relevance determination prior and a variable GP noise term, designed to overcome the limitations of existing support-based clustering methods. A distinct feature of the proposed method is that the support function is represented by a smaller number of representative vectors (kernel centers) than in previous studies. Another feature is that these representative vectors belong to the training data set and are located automatically during the training process. Simulation results for various clustering problems show that the proposed method significantly reduces the labeling time. Exemplars of handwritten digit data sets selected by the proposed method are also reported.
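
A minimal sketch of the idea behind such a support function is given below, assuming a zero-mean GP with an anisotropic RBF kernel and hand-picked lengthscales and noise: the posterior predictive variance is low near the training data and high elsewhere, so a level set of this variance encloses the clusters. The sketch does not implement the dissertation's ARD-based selection of representative vectors or the variable noise term; function names and parameter values are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, lengthscales):
    """Anisotropic RBF kernel; per-dimension lengthscales play the role of ARD weights."""
    d = (A[:, None, :] - B[None, :, :]) / lengthscales
    return np.exp(-0.5 * np.sum(d ** 2, axis=-1))

def gp_variance_support(X_train, lengthscales, noise=0.1):
    """Return h(x) = posterior predictive variance of a zero-mean GP fitted to X_train.

    h is small inside dense regions and large far from the data, so the level set
    {x : h(x) <= r} encloses the data and its connected components define clusters.
    """
    K = rbf_kernel(X_train, X_train, lengthscales) + noise * np.eye(len(X_train))
    K_inv = np.linalg.inv(K)

    def h(X_query):
        k_star = rbf_kernel(X_query, X_train, lengthscales)        # shape (m, n)
        prior_var = np.ones(len(X_query))                          # k(x, x) = 1 for the RBF kernel
        return prior_var - np.einsum('ij,jk,ik->i', k_star, K_inv, k_star)

    return h

# Two well-separated blobs: low variance inside either blob, high variance in between.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
h = gp_variance_support(X, lengthscales=np.array([0.5, 0.5]))
print(h(np.array([[0.0, 0.0], [1.5, 1.5], [3.0, 3.0]])))           # low, high, low
```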

The second model is an active learning algorithm for sparse Bayesian regression. Active learning is a large and important branch of machine learning; it aims to build an accurate learning model from a relatively small number of labeled points that are chosen actively by the model being constructed. Active learning algorithms are typically required when obtaining labels for data points is expensive. The proposed algorithm is constructed in two sub-steps. First, we develop a transductive and generalized version of the relevance vector machine that obtains its basis vectors from the unlabeled data set as well as the labeled one. Next, we suggest three querying strategies that use only the relevance vectors automatically selected by the developed model to choose the data points to be labeled. The proposed method was applied to several artificial and real data sets and outperformed the random-selection benchmark, with statistically significant differences in most cases.
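
The loop below sketches this style of active learning under several simplifying assumptions: scikit-learn's ARDRegression over RBF bases drawn from the whole pool stands in for the transductive relevance vector machine, and a single variance-based querying rule replaces the three relevance-vector-based strategies proposed in the dissertation. The data set, kernel width, and query budget are illustrative placeholders.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

def rbf_features(X, centers, gamma=1.0):
    """RBF bases centered at `centers`; ARD over these bases prunes most of them,
    loosely mimicking a relevance-vector-style sparse Bayesian regressor."""
    return np.exp(-gamma * ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1))

# Hypothetical 1-D pool; in the dissertation setting, labels are assumed costly to obtain.
rng = np.random.default_rng(1)
X_pool = rng.uniform(-3, 3, (300, 1))
y_pool = np.sin(2 * X_pool[:, 0]) + 0.1 * rng.normal(size=300)   # oracle labels, queried lazily

centers = X_pool[rng.choice(300, size=60, replace=False)]        # bases drawn from the pool,
Phi = rbf_features(X_pool, centers)                              # labeled or not ("transductive" flavor)

labeled = list(rng.choice(300, size=5, replace=False))           # small initial labeled set
for _ in range(20):                                              # 20 active queries
    model = ARDRegression().fit(Phi[labeled], y_pool[labeled])
    _, std = model.predict(Phi, return_std=True)                 # predictive uncertainty per pool point
    std[labeled] = -np.inf                                       # never re-query a labeled point
    labeled.append(int(np.argmax(std)))                          # query the most uncertain point

print(f"labeled {len(labeled)} of {len(X_pool)} points")
```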

As applications of learning models to financial data, we focus on predicting two financial variables: market impact costs and credit default swap spreads. The first variable, the market impact cost, has not previously been analyzed with machine learning algorithms, and learning applications to the second variable have rarely been studied; moreover, none of the existing studies applied several state-of-the-art learning models and compared their results.

For the market impact cost prediction task, we applied two sparse learning models, support vector regression and the relevance vector machine, and three non-sparse models, neural networks, Bayesian neural networks, and Gaussian process regression, to single-transaction data from the US equity market and compared their performance with one another and with a parametric benchmark model. The active learning algorithm developed in Chapter 4 was also applied to predict the market impact cost. As a result, all learning models except support vector regression performed better than the parametric benchmark, and the active learning algorithm outperformed random selection while using far fewer labeled points than the full sparse Bayesian regression model.
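
A schematic of such a comparison, using only models available in scikit-learn (the relevance vector machine and Bayesian neural network are omitted) and synthetic placeholder data in place of the proprietary single-transaction data set, might look as follows; the linear model plays the role of the parametric benchmark.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import LinearRegression

# Placeholder features standing in for trade-level inputs (e.g., order size, volatility,
# spread); the actual single-transaction US equity data set is not reproduced here.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] ** 2 + 0.1 * rng.normal(size=500)   # synthetic "impact cost"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "parametric benchmark": LinearRegression(),
    "support vector regression": SVR(),
    "neural network": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "Gaussian process": GaussianProcessRegressor(alpha=1e-2),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name:>26s}  RMSE = {rmse:.3f}")
```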

For the credit default swap spread prediction task, we applied the same five learning models, together with a parametric benchmark, to daily credit default swap spreads from 2001 to 2014, a period that includes the global financial crisis when firms' credit risk was very high, and compared their performance with one another. In this application as well, support vector regression produced poor results, especially when the credit risk was high. The relevance vector machine performed much better than support vector regression but worse than the non-sparse learning models.
Language
English
URI
https://hdl.handle.net/10371/118269