Consensus and Risk Aversion Learning in Ensemble of Multiple Experts for Long-Tailed Classification
- Authors
- Issue Date
- 2024-07
- Citation
- IEEE Access, Vol.12, pp.97883-97892
- Abstract
- Recent expert-ensemble methods for long-tailed recognition encourage diversity by maximizing the KL divergence between experts' predictions. However, because KL divergence has no upper bound, enforcing diversity this way can drive experts toward inaccurate predictions. To address this issue, we propose a new learning method for expert ensembles that obtains a consensus by aggregating the experts' predictions (Consensus) and maximizes each expert's expected prediction accuracy without excessive deviation from the consensus (Risk Aversion). To implement this learning scheme, we propose a new loss derived from Rényi divergence. We provide both empirical and theoretical analyses of the proposed method, along with a stability guarantee that existing methods do not offer. Thanks to this stability, the proposed method continues to improve as the number of experts increases, whereas existing methods do not. It achieves state-of-the-art performance for any number of experts and remains robust when evaluated under varying imbalance factors. (An illustrative sketch of the consensus and Rényi-divergence computation follows the record below.)
- ISSN
- 2169-3536
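
The abstract does not state the loss itself, so the following is a minimal illustrative sketch, not the authors' implementation: the plain averaging used for the consensus, the Rényi order alpha = 0.5, and the per-expert penalty are all assumptions made for illustration. It shows the two ingredients the abstract names: aggregating expert predictions into a consensus, and penalizing each expert's Rényi divergence from that consensus rather than maximizing an unbounded KL term.

```python
import numpy as np

# Rényi divergence of order alpha between discrete distributions p and q:
#   D_alpha(p || q) = (1 / (alpha - 1)) * log( sum_i p_i**alpha * q_i**(1 - alpha) )
def renyi_divergence(p, q, alpha=0.5, eps=1e-12):
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return np.log(np.sum(p ** alpha * q ** (1.0 - alpha))) / (alpha - 1.0)

# Hypothetical predictions of three experts over three classes (rows sum to 1).
expert_probs = np.array([
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.5, 0.3, 0.2],
])

# "Consensus": aggregate the expert predictions; a plain average is assumed here.
consensus = expert_probs.mean(axis=0)

# "Risk aversion": penalize each expert's Rényi divergence from the consensus.
# For alpha in (0, 1), D_alpha never exceeds the KL divergence, so this penalty
# grows more slowly than a KL-based one and does not reward unbounded diversity.
penalties = [renyi_divergence(p, consensus, alpha=0.5) for p in expert_probs]
print("consensus:", consensus, "penalties:", penalties)
```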