How does the visual system determine the direction and speed of moving objects? In the primate brain, visual motion is processed in several stages. Neurons in primary visual cortex (V1) filter incoming signals to extract the motion of oriented edges at a fine spatial scale. V1 neurons send these measurements to the extrastriate visual area MT, where neurons are selective for direction and speed in a manner that is invariant to whether the stimulus is a simple or a complex pattern.
Previous theoretical work proposed that MT neurons achieve selectivity to pattern motion by combining V1 inputs consistent with a common velocity. Here, we performed two sets of experiments to test this hypothesis. In the first experiment, we recorded single-unit V1 and MT responses to drifting sinusoidal gratings and plaids (two gratings superimposed). These stimuli either had jointly varying direction and drift rate (consistent with a constant velocity) or independently varying direction and drift rate. In the second experiment, we presented arbitrary, randomly chosen combinations of gratings in rapid succession, to sample as widely as possible the space of stimuli that could excite or suppress neural responses.
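A plaid of the kind described above is simply the superposition of two drifting sinusoidal gratings. The sketch below illustrates this construction; the parameter names, component directions, and contrast values are illustrative assumptions, not the stimulus parameters actually used in the experiments.

```python
import numpy as np

def drifting_grating(x, y, t, sf, tf, direction, contrast=0.5):
    """One drifting sinusoidal grating.
    sf: spatial frequency (cycles/deg), tf: temporal drift rate (Hz),
    direction: motion direction in radians.
    (Illustrative parameterization, not the dissertation's.)"""
    phase = 2 * np.pi * (sf * (x * np.cos(direction) + y * np.sin(direction)) - tf * t)
    return contrast * np.cos(phase)

# A symmetric plaid: two gratings whose component directions straddle
# 0 deg; by the intersection-of-constraints rule, the pattern as a
# whole moves in the 0-deg direction.
x, y = np.meshgrid(np.linspace(-2, 2, 64), np.linspace(-2, 2, 64))
t = 0.1
plaid = (drifting_grating(x, y, t, sf=1.5, tf=4.0, direction=np.deg2rad(60))
         + drifting_grating(x, y, t, sf=1.5, tf=4.0, direction=np.deg2rad(-60)))
```

Varying the two components' directions and drift rates jointly keeps the pattern velocity constant; varying them independently breaks that relationship, which is the contrast exploited in the first experiment.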
Responses to single gratings alone were insufficient to uniquely identify the organization of MT selectivity. To account for MT responses to both simple and compound stimuli, we developed new variants of an existing cascaded linear-nonlinear model in which each MT neuron pools inputs from a population of V1 neurons, and fit these models to our data. By comparing the performance of the model variants and examining the best-fitting parameters, we showed that MT responses are best described when selectivity is organized around a common velocity. This confirms previous predictions that MT neurons are selective for the motion of arbitrary objects, independent of object shape or texture. These studies show that characterizing sensory computation requires stimuli complex enough to engage the nonlinear aspects of neural selectivity. By exploring different linear-nonlinear architectures, we identified the essential components of MT computation. Together, these results provide an effective framework for characterizing changes in selectivity between connected sensory areas.
Supplementary materials: figures 3.4(a-e), 3.10(a-e), and 3.14(a-e) are rendered as movies.
Some files may require a special program or browser plug-in.
Advisor: Movshon, J Anthony; Simoncelli, Eero P.
Committee: Daw, Nathaniel D.; DeAngelis, Gregory C.; Kiorpes, Lynne
School: New York University
Department: Center for Neural Science
School Location: United States -- New York
Source: DAI-B 78/05(E), Dissertation Abstracts International
Keywords: MT, Models, Motion, Neurons, Receptive fields, Visual motion
Copyright in each Dissertation and Thesis is retained by the author. All Rights Reserved