
Bayesian cue integration as a developmental outcome of reward mediated learning

Thomas Weisswange, Constantin Rothkopf, Tobias Rodemann, Jochen Triesch, "Bayesian cue integration as a developmental outcome of reward mediated learning", PLoS ONE, vol. 6, no. 7, 2011.

Abstract

Average human behavior in cue combination tasks is well predicted by Bayesian inference models. Because this capability is acquired over developmental timescales, the question arises how it is learned. Here we investigated whether reward-dependent learning, which is well established at the computational, behavioral, and neuronal levels, could contribute to this development. We show that a model-free reinforcement learning algorithm can indeed learn to perform cue integration, i.e., weight uncertain cues according to their respective reliabilities, and can do so even when reliabilities change. We also consider the case of causal inference, in which multimodal signals can originate from one or from multiple separate objects and should not always be integrated. In this case, the learner develops a behavior that is closest to Bayesian model averaging. We conclude that reward-mediated learning could be a driving force for the development of cue integration and causal inference.
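The reliability-weighted combination mentioned in the abstract is the standard Bayesian (inverse-variance) cue-integration rule. The sketch below illustrates that rule only; it is not the paper's reinforcement learning model, and all names are illustrative.

```python
import numpy as np

def integrate_cues(estimates, variances):
    """Fuse noisy cue estimates by weighting each with its reliability
    (inverse variance), the normative rule Bayesian observers follow."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    reliabilities = 1.0 / variances
    weights = reliabilities / reliabilities.sum()
    fused = float(np.sum(weights * estimates))
    # The fused estimate is more reliable than any single cue.
    fused_variance = float(1.0 / reliabilities.sum())
    return fused, fused_variance

# Example: a reliable cue (variance 1.0) dominates a noisy one (variance 4.0).
fused, var = integrate_cues([0.0, 1.0], [1.0, 4.0])
# fused -> 0.2, var -> 0.8
```

Note that the fused variance (0.8) is lower than either cue's variance, which is the usual signature of optimal integration.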
