
Detailed Information

Scale-invariant representation of machine learning

dc.contributor.author: Lee, Sungyeop
dc.contributor.author: Jo, Junghyo
dc.date.accessioned: 2022-08-22T09:16:46Z
dc.date.available: 2022-08-22T09:16:46Z
dc.date.created: 2022-05-16
dc.date.issued: 2022-04
dc.identifier.citation: Physical Review E, Vol.105 No.4, p. 044306
dc.identifier.issn: 2470-0045
dc.identifier.uri: https://hdl.handle.net/10371/184359
dc.description.abstract: The success of machine learning has resulted from its structured representation of data. Similar data have close internal representations as compressed codes for classification or emerged labels for clustering. We observe that the frequency of internal codes or labels follows power laws in both supervised and unsupervised learning models. This scale-invariant distribution implies that machine learning largely compresses frequent typical data, and simultaneously, differentiates many atypical data as outliers. In this study, we derive the process by which these power laws can naturally arise in machine learning. In terms of information theory, the scale-invariant representation corresponds to a maximally uncertain data grouping among possible representations that guarantee a given learning accuracy.
dc.language: English
dc.publisher: AMER PHYSICAL SOC
dc.title: Scale-invariant representation of machine learning
dc.type: Article
dc.identifier.doi: 10.1103/PhysRevE.105.044306
dc.citation.journaltitle: Physical Review E
dc.identifier.wosid: 000789436200008
dc.identifier.scopusid: 2-s2.0-85128853734
dc.citation.number: 4
dc.citation.startpage: 044306
dc.citation.volume: 105
dc.description.isOpenAccess: N
dc.contributor.affiliatedAuthor: Jo, Junghyo
dc.type.docType: Article
dc.description.journalClass: 1
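
The power-law claim in the abstract can be illustrated with a rank-frequency (Zipf) measurement: count how often each distinct internal code occurs and check whether the frequencies fall on a straight line in log-log coordinates. The Python sketch below is a hypothetical stand-in, not the authors' code: it replaces a trained encoder with a random binary projection and only demonstrates the counting and fitting procedure; the synthetic data, projection, and code width are illustrative assumptions.

    import numpy as np

    # Hypothetical stand-in for a trained encoder: binarize a random
    # linear projection to obtain discrete internal codes. The paper
    # extracts codes from trained supervised/unsupervised models; this
    # random feature map only illustrates the measurement procedure.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((50_000, 20))   # synthetic "data" (assumed)
    W = rng.standard_normal((20, 12))       # random projection (assumed)
    codes = (X @ W > 0).astype(np.uint8)    # 12-bit binary codes

    # Frequency of each distinct code.
    _, counts = np.unique(codes, axis=0, return_counts=True)

    # Rank-frequency plot: a power law p(r) ~ r^(-alpha) appears as a
    # straight line in log-log coordinates; the fitted slope is -alpha.
    freq = np.sort(counts)[::-1] / counts.sum()
    rank = np.arange(1, freq.size + 1)
    slope, _ = np.polyfit(np.log(rank), np.log(freq), 1)
    print(f"log-log rank-frequency slope: {slope:.2f}")

For codes taken from an actual trained model, a slope near -1 would indicate the Zipf-like, scale-invariant distribution the abstract describes; the random projection above is not expected to produce one.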
Files in This Item:
There are no files associated with this item.


