Positive-definite correction of covariance matrix estimators via linear shrinkage : 선형 축소를 통한 공분산 행렬 추정량의 양정치 보정

dc.contributor.advisor: 임요한
dc.contributor.author: Choi, Young-Geun
dc.date.accessioned: 2017-07-14T00:31:40Z
dc.date.available: 2017-07-14T00:31:40Z
dc.date.issued: 2015-08
dc.identifier.other: 000000067255
dc.identifier.uri: https://hdl.handle.net/10371/121155
dc.description: Thesis (Ph.D.) -- Seoul National University Graduate School: Department of Statistics, August 2015. Advisor: 임요한.
dc.description.abstract: In this paper, we study the positive definiteness (PDness) problem in covariance matrix estimation. For high-dimensional data, the most common sample covariance matrix performs poorly in estimating the true matrix. Recently, as an alternative to the sample covariance matrix, many regularized estimators have been proposed under structural assumptions on the true covariance matrix, including sparsity. They are shown to be asymptotically consistent and rate-optimal in estimating the true covariance matrix and its structure. However, many of them do not take the PDness of the estimator into account and may produce a non-PD estimate; otherwise, additional regularizations (or constraints) on the eigenvalues are required, which make both the asymptotic analysis and the computation much harder. To achieve PDness, we propose a simple one-step procedure to update a regularized covariance matrix estimator which is not necessarily PD in finite samples. We revisit the idea of linear shrinkage (Stein, 1956; Ledoit and Wolf, 2004) and propose to take a convex combination of the first-stage covariance matrix estimator (the regularized covariance matrix without PDness) and a given form of diagonal matrix. The proposed one-step correction, which we denote the LSPD (linear shrinkage for positive definiteness) estimator, is shown to preserve the asymptotic properties of the first-stage estimator if the shrinkage parameters are carefully selected. In addition, it has a closed-form expression and its computation is optimization-free, unlike existing sparse PD estimators (Rothman, 2012; Xue et al., 2012). The LSPD estimator is numerically compared with other sparse PD estimators to examine its finite-sample properties as well as its computational gain. Finally, it is applied to two multivariate procedures relying on a covariance matrix estimator, namely the linear minimax classification problem posed by Lanckriet et al. (2002) and the well-known mean-variance portfolio optimization problem, and is shown to substantially improve the performance of both procedures.
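The one-step correction described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the thesis's exact procedure: the diagonal target level mu = tr(S)/p, the eigenvalue cutoff eps, and the closed-form rule for the shrinkage weight alpha are plausible assumptions filled in for concreteness.

```python
import numpy as np

def lspd(sigma_hat, eps=1e-2):
    """One-step linear-shrinkage correction toward a diagonal target.

    Returns alpha * sigma_hat + (1 - alpha) * mu * I, with alpha chosen so
    that the smallest eigenvalue of the result is at least eps.
    NOTE: mu = trace/p and this alpha rule are illustrative choices, not
    necessarily the ones used in the thesis.
    """
    p = sigma_hat.shape[0]
    mu = np.trace(sigma_hat) / p            # diagonal target level (assumed choice)
    lam_min = np.linalg.eigvalsh(sigma_hat)[0]
    if lam_min >= eps:
        return sigma_hat                    # already sufficiently positive definite
    # Eigenvalues of the combination are alpha * lam_i + (1 - alpha) * mu,
    # so solving alpha * lam_min + (1 - alpha) * mu = eps gives:
    alpha = (mu - eps) / (mu - lam_min)
    return alpha * sigma_hat + (1 - alpha) * mu * np.eye(p)
```

Because the correction is a closed-form convex combination, it needs only one eigenvalue computation and no iterative optimization, which is the computational advantage the abstract highlights over optimization-based sparse PD estimators.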
dc.description.tableofcontents1 Introduction 1
2 Literature review 6
2.1. Regularized covariance matrix estimators 6
2.1.1. Banding estimators 6
2.1.2. Thresholding estimators 8
2.2. Regularized precision matrix estimators 14
2.2.1. Penalized likelihood estimators 14
2.2.2. Penalized regression-based estimators 18
2.2.3. Other methods 21
2.3. Discussion: positive definiteness problem 22
2.3.1. Related works 24
3 The linear shrinkage for positive definiteness (LSPD) 27
3.1. Distance minimization 28
3.2. The choice of $\alpha$ 29
3.3. The choice of $\mu$ 30
3.4. Statistical properties of LSPD-estimator 33
3.5. If tuning parameter selection is involved for \widehat{\bf \Sigma} 35
3.6. Computation 37
3.7. Inaccurate calculation of smallest eigenvalue 38
3.7.1. PDness of ${\bf \Phi}(\widehat{\bf \Sigma})$ 38
3.7.2. Convergence rate 39
4 Simulation study 42
4.1. Data generation 42
4.2. Empirical risk 43
4.3. Computation time 49
5 Two applications 53
5.1. Liniar minimax classifier with application to speech recognition 54
5.1.1. Linear minimax probability machine 54
5.1.2. Example: Speech recognition 55
5.2. Markowitz portfolio optimization 57
5.2.1. Minimum-variance portfolio (MVP) allocation and short-sale 57
5.2.2. Example: Dow Jones stock return 59
6 Extension to other covariance matrix estimators: precision matrices 64
7 Concluding remarks 67
Bibliography 71
Appendix 78
Abstract (in Korean) 83
Acknowledgements (in Korean) 85
-
dc.format: application/pdf
dc.format.extent: 677443 bytes
dc.format.medium: application/pdf
dc.language.iso: en
dc.publisher: Seoul National University Graduate School (서울대학교 대학원)
dc.subject: Covariance matrix
dc.subject: positive definiteness
dc.subject: high-dimensional estimation
dc.subject: linear shrinkage
dc.subject.ddc: 519
dc.title: Positive-definite correction of covariance matrix estimators via linear shrinkage
dc.title.alternative: 선형 축소를 통한 공분산 행렬 추정량의 양정치 보정
dc.type: Thesis
dc.contributor.AlternativeAuthor: 최영근
dc.description.degree: Doctor
dc.citation.pages: vii, 86
dc.contributor.affiliation: College of Natural Sciences, Department of Statistics (자연과학대학 통계학과)
dc.date.awarded: 2015-08