Dissertation/Thesis Abstract

Progress on deciphering the retinal code
by Sadeghi, Kolia Siamack, Ph.D., Princeton University, 2009, 90; 3388084
Abstract (Summary)

The retina is an ideal system in which to probe what information is signaled to the brain, and how it is represented. Understanding what information is represented involves relating neural signals to the outside world by predicting the response of the retina to a reasonably broad class of visual stimuli. In the first chapter, we build predictive models of retinal ganglion cell responses to random stimuli with fine spatial variations. The methods used include wavelets to deal with finite-sampling noise, reverse correlation to find linear stimulus filters, kernel density estimation to fit nonlinearities non-parametrically, generalized linear models, and information theory to evaluate the models we obtain.
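
As an illustration of the reverse-correlation step, the following is a minimal sketch of spike-triggered averaging on a white-noise stimulus; the variable names, dimensions, and toy linear-nonlinear cell are illustrative assumptions, not taken from the dissertation.

import numpy as np

rng = np.random.default_rng(0)

T, D = 10_000, 20                        # time bins, stimulus dimensions (hypothetical)
stimulus = rng.standard_normal((T, D))   # Gaussian white-noise stimulus, one row per bin

# Simulate spikes from a toy linear-nonlinear cell, for demonstration only.
true_filter = np.sin(np.linspace(0, np.pi, D))
rate = np.exp(0.5 * stimulus @ true_filter - 1.0)
spikes = rng.poisson(rate)               # spike counts per bin

# Reverse correlation: the spike-weighted average stimulus. For Gaussian
# white noise this converges to the cell's linear filter, up to scale.
sta = (spikes @ stimulus) / spikes.sum()
print(np.corrcoef(sta, true_filter)[0, 1])   # near 1 given enough data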

One of our main conclusions is that even our best models do poorly for a large proportion of cells. This is due to the difficulty of identifying from data the parameters of nonlinearities that have been documented in various parts of the retinal circuitry. Since the signals involved in these nonlinearities are not directly observed, in the second chapter we seek to identify them as hidden variables in models containing sigmoidal nonlinearities, using Restricted Boltzmann Machines.
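
For concreteness, here is a minimal sketch of the sigmoidal hidden-unit machinery of a Restricted Boltzmann Machine, with a textbook contrastive-divergence (CD-1) weight update; the layer sizes, binary visible units, and training rule are generic assumptions, not the specific models fitted in this chapter.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_visible, n_hidden = 30, 10             # hypothetical layer sizes
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

def hidden_given_visible(v):
    # P(h_j = 1 | v) is a sigmoid of the weighted visible inputs: these are
    # the hidden variables with sigmoidal nonlinearities referred to above.
    return sigmoid(v @ W + b_h)

def cd1_update(v0, lr=0.05):
    # One contrastive-divergence (CD-1) step on the weight matrix.
    p_h0 = hidden_given_visible(v0)
    h0 = (rng.random(n_hidden) < p_h0).astype(float)
    p_v1 = sigmoid(h0 @ W.T + b_v)       # reconstructed visible probabilities
    p_h1 = hidden_given_visible(p_v1)
    return lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))

v = (rng.random(n_visible) < 0.3).astype(float)   # toy binary spike pattern
W += cd1_update(v)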

How, then, can we understand how the retina represents information without completely understanding what is represented? One way is to characterize the distribution of messages that the retina sends to the brain, ignoring how these relate to the visual world. In the third chapter (aided by Cyrille Rossant), we observe that spike counts across populations of retinal ganglion cells are close to geometrically distributed. We leverage this observation to build models of the joint distribution of population spike trains over time. The main tool used here, well known to probabilists, is the probability generating functional.
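
A minimal sketch of this geometric-fit observation follows, on synthetic stand-in data; the maximum-likelihood fit of a geometric distribution on {0, 1, 2, ...} uses q = mean / (1 + mean), and the probability generating function of that fit is G(z) = (1 - q) / (1 - q z).

import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: total spike count across a population in each time bin.
counts = rng.geometric(p=0.4, size=5_000) - 1    # shift to support {0, 1, 2, ...}

# Maximum-likelihood geometric fit on {0, 1, 2, ...}: P(K = k) = (1 - q) * q**k.
q = counts.mean() / (1.0 + counts.mean())

ks = np.arange(counts.max() + 1)
empirical = np.bincount(counts) / counts.size
model = (1.0 - q) * q**ks
print(np.abs(empirical - model).max())           # small when the fit is good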

Evaluating models and computing information-theoretic quantities from neural data often boils down to estimating a Kullback-Leibler divergence between two probability distributions, as is the case in our first chapter. The last chapter proposes a novel estimator of divergences that partially enjoys the same invariance property as the Kullback-Leibler divergence itself. Estimation is based on recursive adaptive partitioning of the data samples, coupled with context tree weighting methods.
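
Below is a minimal sketch of a partition-based plug-in estimator of the Kullback-Leibler divergence from samples; a fixed histogram partition stands in for the recursive adaptive partitioning and context tree weighting machinery that the chapter actually develops.

import numpy as np

def kl_plugin(x, y, n_bins=20):
    # Bin both samples on a shared partition of the line, then plug the
    # empirical bin probabilities into the discrete divergence formula.
    edges = np.histogram_bin_edges(np.concatenate([x, y]), bins=n_bins)
    p, _ = np.histogram(x, bins=edges)
    q, _ = np.histogram(y, bins=edges)
    p = p.astype(float) + 1e-12          # smooth to avoid log(0)
    q = q.astype(float) + 1e-12
    p, q = p / p.sum(), q / q.sum()
    return np.sum(p * np.log(p / q))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 10_000)         # samples from P
y = rng.normal(0.5, 1.0, 10_000)         # samples from Q
print(kl_plugin(x, y))                   # true D(P || Q) = 0.5**2 / 2 = 0.125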

Indexing (document details)
Advisor: Berry, Michael J., II
School: Princeton University
School Location: United States -- New Jersey
Source: DAI-B 70/12, Dissertation Abstracts International
Source Type: DISSERTATION
Subjects: Neurosciences, Statistics, Computer science
Keywords: Kullback-Leibler divergence, Restricted Boltzmann machine, Retinal ganglion cells, Spike trains
Publication Number: 3388084
ISBN: 9781109522884