The motion of spatial patterns may be analyzed from the spatiotemporal variations caused when a spatially varying luminance waveform moves over the detector surface. Nonlinear transformations (such as squaring) of the input signal may give rise to a signal (a `distortion product') that varies on a different spatial scale from that of the original, and can thus generate a motion signal that is processed by a different set of spatiotemporal filters. Experiments with patterns made by adding together two sinusoidal gratings, differing in spatial frequency or orientation and in temporal frequency, show that the human visual system can analyze the motion of the `difference-frequency' distortion products that would be introduced by squaring, and thus must contain mechanisms that apply some nonlinear transformation of this sort. This raises a question: is the nonlinearity simply an inherent part of the transduction process, or do separate linear and nonlinear motion analyzers exist? We find that performance in motion-discrimination tasks that require nonlinear analyzers declines rapidly for stimulus durations shorter than about 200 msec and for temporal frequencies greater than about 1 Hz, whereas discriminations based on linear analyses remain reliable and correct at durations down to 20 msec and at temporal frequencies above 10 Hz. This suggests that the linear and nonlinear motion analyzers are distinct.
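The difference-frequency argument rests on a simple identity: squaring the sum of two sinusoids of frequencies f1 and f2 introduces a component at |f1 - f2| that is entirely absent from the linear signal, since (cos a + cos b)^2 contains a cos(a - b) term. A minimal numerical sketch of this point (the frequencies here are arbitrary illustrations, not the experimental stimuli):

```python
import numpy as np

# Two superimposed sinusoidal "gratings" along one spatial dimension.
f1, f2 = 9.0, 8.0            # spatial frequencies (cycles per unit distance); arbitrary
x = np.linspace(0, 1, 2048, endpoint=False)

linear = np.cos(2 * np.pi * f1 * x) + np.cos(2 * np.pi * f2 * x)
squared = linear ** 2        # squaring nonlinearity ("distortion")

def amplitude_at(signal, f):
    """Amplitude of the Fourier component at integer spatial frequency f."""
    spectrum = np.fft.rfft(signal) / len(signal)
    return 2 * np.abs(spectrum[int(round(f))])

diff = abs(f1 - f2)          # the difference frequency, 1 cycle per unit distance
print(amplitude_at(linear, diff))   # ~0: no energy at f1 - f2 before the nonlinearity
print(amplitude_at(squared, diff))  # ~1: distortion product appears at f1 - f2
```

A linear spatiotemporal filter tuned to the difference frequency therefore sees nothing in the original waveform but a strong component after squaring, which is why motion carried by the distortion product diagnoses a nonlinear stage.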