The retina is an ideal system in which to probe what information is signaled to the brain, and how that information is represented. Understanding what is represented involves relating neural signals to the outside world by predicting the retina's response to a reasonably broad class of visual stimuli. In the first chapter, we build predictive models of retinal ganglion cell responses to random stimuli with fine spatial variations. The methods used include wavelets to handle finite-sampling noise, reverse correlation to estimate linear stimulus filters, kernel density estimation to fit nonlinearities non-parametrically, generalized linear models, and information theory to evaluate the resulting models.
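As a rough illustration of the reverse-correlation step (a toy sketch, not the chapter's actual models), the spike-triggered average of a white-noise stimulus recovers the linear filter of a linear-nonlinear neuron up to scale. All names and parameter values here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a linear-nonlinear neuron driven by white noise.
T, D = 20000, 12                          # time bins, stimulus dimensions
stimulus = rng.standard_normal((T, D))
true_filter = np.exp(-np.arange(D) / 3.0)
true_filter /= np.linalg.norm(true_filter)
drive = stimulus @ true_filter
rate = 1.0 / (1.0 + np.exp(-3.0 * drive))   # sigmoidal nonlinearity
spikes = rng.random(T) < rate * 0.5          # Bernoulli spiking per bin

# Reverse correlation: for a spherically symmetric stimulus, the
# spike-triggered average is proportional to the true linear filter.
sta = stimulus[spikes].mean(axis=0)
sta /= np.linalg.norm(sta)

print(np.dot(sta, true_filter))  # close to 1 for a good recovery
```

The recovered filter would then feed a non-parametric nonlinearity fit (e.g., by kernel density estimation of the spike-triggered versus raw stimulus projections).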
One of our main conclusions is that even our best models do poorly for a large proportion of cells. This failure stems from the difficulty of identifying, from data, the parameters of nonlinearities that have been documented in various parts of the retinal circuitry. Since the signals involved in these nonlinearities are not directly observed, in the second chapter we seek to identify them as hidden variables in models containing sigmoidal nonlinearities, using Restricted Boltzmann Machines.
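The basic RBM building block can be sketched as follows (a minimal illustration, not the chapter's fitted models; the sizes and weights are made up): each hidden unit is a sigmoidal function of the observed binary pattern, which is what makes RBMs a natural way to posit unobserved sigmoidal signals.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical sketch: hidden units of an RBM are sigmoidal in the
# visible (observed spike) pattern, P(h_j = 1 | v) = sigmoid(v @ W + b).
n_visible, n_hidden = 8, 3
W = rng.standard_normal((n_visible, n_hidden)) * 0.1
b_hidden = np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

visible = rng.integers(0, 2, size=n_visible)            # binary spike pattern
p_hidden = sigmoid(visible @ W + b_hidden)              # P(h_j = 1 | v)
hidden = (rng.random(n_hidden) < p_hidden).astype(int)  # one Gibbs half-step
print(p_hidden, hidden)
```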
How, then, can we understand how the retina represents information without completely understanding what is represented? One way is to characterize the distribution of messages that the retina sends to the brain, ignoring how these relate to the visual world. In the third chapter (aided by Cyrille Rossant), we observe that spike counts across populations of retinal ganglion cells are close to being geometrically distributed. We leverage this observation to build models of the joint distribution of population spike trains over time. The main tool used here, well known to probabilists, is the probability generating functional.
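For a single geometric count variable, the fit and the generating-function machinery look like this (a toy sketch with synthetic counts, standing in for real population spike counts):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: counts drawn from a geometric on support {0, 1, 2, ...}.
counts = rng.geometric(p=0.4, size=50000) - 1

# MLE / method-of-moments fit: mean = (1 - p) / p, so p_hat = 1 / (1 + mean).
p_hat = 1.0 / (1.0 + counts.mean())

# Probability generating function G(z) = E[z^N] = p / (1 - (1 - p) z);
# compare the empirical PGF with the fitted one at a test point.
z = 0.7
empirical = np.mean(z ** counts)
fitted = p_hat / (1.0 - (1.0 - p_hat) * z)
print(p_hat, empirical, fitted)
```

The probability generating *functional* of the chapter generalizes this scalar PGF to whole point processes, encoding the joint distribution of spike trains over time.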
Evaluating models and computing information-theoretic quantities from neural data often boils down to estimating a Kullback-Leibler divergence between two probability distributions, as is the case in our first chapter. The last chapter proposes a novel estimator of divergences that partially enjoys the same invariance property as the Kullback-Leibler divergence itself. Estimation is based on recursive adaptive partitioning of the data samples, coupled with context-tree weighting methods.
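The underlying idea can be sketched with a fixed-bin plug-in estimator (a simple stand-in for the recursive adaptive partitioning described above; the distributions and bin counts are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sketch: plug-in estimate of D_KL(P || Q) from samples,
# using a shared partition of the real line.
p_samples = rng.normal(0.0, 1.0, 100000)
q_samples = rng.normal(0.5, 1.0, 100000)

edges = np.linspace(-5, 5, 41)                  # 40 shared bins
p_hist, _ = np.histogram(p_samples, bins=edges)
q_hist, _ = np.histogram(q_samples, bins=edges)

# Additive smoothing keeps the estimate finite on empty bins.
p_prob = (p_hist + 1.0) / (p_hist + 1.0).sum()
q_prob = (q_hist + 1.0) / (q_hist + 1.0).sum()

kl_hat = np.sum(p_prob * np.log(p_prob / q_prob))

# True divergence between N(0, 1) and N(0.5, 1) is 0.5**2 / 2 = 0.125.
print(kl_hat)
```

Adaptive partitioning refines bins where the samples call for it rather than fixing them in advance, which is what allows the estimator to retain part of the divergence's invariance under reparametrization.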
Advisor: Berry, Michael J., II
School Location: United States -- New Jersey
Source: DAI-B 70/12, Dissertation Abstracts International
Subjects: Neurosciences, Statistics, Computer science
Keywords: Kullback-Leibler divergence, Restricted Boltzmann machine, Retinal ganglion cells, Spike trains