Neural Implementation of Probabilistic Models of Cognition
Abstract
Bayesian models of cognition hypothesize that human brains make sense of data by representing probability distributions and applying Bayes' rule to find the best explanation for available data. Understanding the neural mechanisms underlying probabilistic models remains important because Bayesian models provide a computational framework rather than specifying mechanistic processes. Here, we propose a deterministic neural-network model which estimates and represents probability distributions from observable events, a phenomenon related to the concept of probability matching. Our model learns to represent probabilities without receiving any representation of them from the external world, but rather by experiencing the occurrence patterns of individual events. Our neural implementation of probability matching is paired with a neural module applying Bayes' rule, forming a comprehensive neural scheme to simulate human Bayesian learning and inference. Our model also provides novel explanations of base-rate neglect, a notable deviation from Bayes' rule.
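The two components described in the abstract can be illustrated with a minimal sketch: a deterministic unit that estimates event probabilities purely from occurrence patterns (a simple delta-rule update, so the learned weights drift toward empirical frequencies), paired with a module that applies Bayes' rule. This is a hypothetical illustration of the general idea, not the paper's actual network architecture; the learning rate and update rule are assumptions.

```python
import numpy as np

def learn_probabilities(events, n_outcomes, lr=0.01):
    """Estimate a probability distribution from raw event occurrences.

    A deterministic delta-rule update: after each observed event, weights
    are nudged toward a one-hot target, so they converge toward the
    empirical frequencies (probability matching). Hypothetical sketch,
    not the paper's architecture.
    """
    w = np.full(n_outcomes, 1.0 / n_outcomes)  # uniform initial weights
    for e in events:
        target = np.zeros(n_outcomes)
        target[e] = 1.0
        w += lr * (target - w)  # move weights toward the observed event
    return w

def bayes_module(prior, likelihood):
    """Combine a learned prior with a likelihood via Bayes' rule."""
    posterior = prior * likelihood
    return posterior / posterior.sum()  # normalize to a distribution

# Simulate a stream of events drawn from a hidden distribution;
# the network never sees the probabilities, only the events.
rng = np.random.default_rng(0)
true_p = [0.2, 0.5, 0.3]
events = rng.choice(3, size=5000, p=true_p)

prior = learn_probabilities(events, n_outcomes=3)
likelihood = np.array([0.9, 0.1, 0.1])  # assumed evidence for outcome 0
posterior = bayes_module(prior, likelihood)
```

Because the delta rule weights recent events more heavily, the learned estimate fluctuates around the true frequencies rather than matching them exactly, which is one way such a scheme can leave room for systematic deviations like base-rate neglect.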
Publication: arXiv e-prints
Pub Date: January 2015
arXiv: arXiv:1501.03209
Bibcode: 2015arXiv150103209K
Keywords: Computer Science - Neural and Evolutionary Computing; Quantitative Biology - Neurons and Cognition