The Outcome-Representation Learning Model: A Novel Reinforcement Learning Model of the Iowa Gambling Task

Cited 40 times in Web of Science; cited 43 times in Scopus
Authors

Haines, Nathaniel; Vassileva, Jasmin; Ahn, Woo-Young

Issue Date
2018-11
Publisher
Lawrence Erlbaum Associates Inc.
Citation
Cognitive Science, Vol.42 No.8, pp.2534-2561
Abstract
The Iowa Gambling Task (IGT) is widely used to study decision-making within healthy and psychiatric populations. However, the complexity of the IGT makes it difficult to attribute variation in performance to specific cognitive processes. Several cognitive models have been proposed for the IGT in an effort to address this problem, but currently no single model shows optimal performance for both short- and long-term prediction accuracy and parameter recovery. Here, we propose the Outcome-Representation Learning (ORL) model, a novel model that provides the best compromise between competing models. We test the performance of the ORL model on 393 subjects' data collected across multiple research sites, and we show that the ORL reveals distinct patterns of decision-making in substance-using populations. Our work highlights the importance of using multiple model comparison metrics to make valid inference with cognitive models and sheds light on learning mechanisms that play a role in underweighting of rare events.
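To make the modeling setting concrete, the sketch below simulates a generic delta-rule reinforcement learner choosing among four IGT-style decks with softmax action selection. This is an illustrative toy model only, not the ORL model proposed in the paper; the function name, payoff scheme, and parameter values (`lr`, `beta`) are assumptions for demonstration.

```python
import math
import random

def simulate_igt_learner(payoffs, n_trials=100, lr=0.1, beta=3.0, seed=0):
    """Simulate a simple delta-rule learner on a four-deck IGT-style task.

    payoffs: dict mapping deck index (0-3) to a callable that takes an RNG
             and returns the net outcome of drawing from that deck.
    lr:      learning rate for the delta-rule value update.
    beta:    softmax inverse temperature (higher = more deterministic choice).
    """
    rng = random.Random(seed)
    values = [0.0] * 4  # expected value tracked for each deck
    choices = []
    for _ in range(n_trials):
        # Softmax choice over current deck values
        exps = [math.exp(beta * v) for v in values]
        total = sum(exps)
        r, cum, choice = rng.random(), 0.0, 3
        for i, e in enumerate(exps):
            cum += e / total
            if r < cum:
                choice = i
                break
        outcome = payoffs[choice](rng)
        # Delta-rule update: move the chosen deck's value toward the outcome
        values[choice] += lr * (outcome - values[choice])
        choices.append(choice)
    return values, choices

# Example: decks 0 and 1 are disadvantageous, decks 2 and 3 advantageous
decks = {
    0: lambda r: -1.0,
    1: lambda r: -1.0,
    2: lambda r: 1.0,
    3: lambda r: 1.0,
}
values, choices = simulate_igt_learner(decks)
```

A learner like this gradually shifts choices toward the advantageous decks; the ORL model extends this basic idea with additional mechanisms (e.g., separate treatment of outcome valence and frequency) described in the full paper.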
ISSN
0364-0213
Language
ENG
URI
https://hdl.handle.net/10371/163530
DOI
https://doi.org/10.1111/cogs.12688
Files in This Item:
There are no files associated with this item.
Appears in Collections:

  • College of Social Sciences
  • Department of Psychology
Research Area: Addiction, computational neuroscience, decision neuroscience

Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.
