A model of how depth facilitates scene-relative object motion perception

Research output: Journal contribution › Article in a scientific journal


Many everyday interactions with moving objects benefit from an accurate perception of their movement. Self-motion, however, complicates object motion perception because it generates a global pattern of motion on the observer's retina and radically influences an object's retinal motion. There is strong evidence that the brain compensates by suppressing the retinal motion due to self-motion; however, this requires estimates of depth relative to the object, since otherwise the appropriate self-motion component to remove cannot be determined. The underlying neural mechanisms are unknown, but neurons in brain areas MT and MST may contribute given their sensitivity to motion parallax and depth through joint direction, speed, and disparity tuning. We developed a neural model to investigate whether cells in areas MT and MST with well-established neurophysiological properties can account for human object motion judgments during self-motion. We tested the model by comparing simulated object motion signals to human object motion judgments in environments with monocular, binocular, and ambiguous depth. Our simulations show how precise depth information, such as that from binocular disparity, may improve estimates of the retinal motion pattern due to self-motion through increased selectivity among units that respond to the global self-motion pattern. The enhanced self-motion estimates emerged from recurrent feedback connections in MST and allowed the model to better suppress the appropriate direction, speed, and disparity signals from the object's retinal motion, improving the accuracy of the object's movement direction represented by motion signals.
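The abstract's central point, that removing the self-motion component from an object's retinal motion requires a depth estimate, can be illustrated with the standard optic-flow equations. The sketch below is not the authors' MT/MST model; it is a minimal numerical example (assuming a pinhole camera with unit focal length and pure observer translation) showing that subtracting the self-motion flow recovers the object's scene-relative motion only when depth is estimated correctly:

```python
import numpy as np

def self_motion_flow(x, y, Z, T):
    """Retinal flow at image point (x, y) caused solely by observer
    translation T = (Tx, Ty, Tz), for a scene point at depth Z.
    Standard translational flow: v = (1/Z) * (-Tx + x*Tz, -Ty + y*Tz)."""
    Tx, Ty, Tz = T
    return np.array([(-Tx + x * Tz) / Z, (-Ty + y * Tz) / Z])

# Hypothetical numbers for illustration only.
T = np.array([0.1, 0.0, 1.0])       # observer translation (rightward + forward)
x, y = 0.2, 0.1                     # object's image coordinates
Z_true = 4.0                        # object's true depth
obj_motion = np.array([0.05, 0.02]) # object's scene-relative retinal motion

# Observed retinal motion = object motion + self-motion component at true depth.
retinal = obj_motion + self_motion_flow(x, y, Z_true, T)

# Accurate depth (e.g. from binocular disparity): subtraction recovers
# the object's scene-relative motion exactly.
recovered_good = retinal - self_motion_flow(x, y, Z_true, T)

# Inaccurate depth estimate: a residual self-motion component biases
# the recovered object motion direction.
recovered_bad = retinal - self_motion_flow(x, y, 2.0, T)

print(recovered_good)  # matches obj_motion
print(recovered_bad)   # biased estimate
```

This is the geometric reason the paper's model benefits from precise binocular depth: the magnitude of the flow component to suppress scales with 1/Z, so a depth error leaves a directional bias in the residual motion.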


Units & groups
External organisations
  • Colby College

Subject classification (UKÄ)

  • Psychology (excluding applied psychology)
  • Bioinformatics (computational biology)
Journal: PLoS Computational Biology
Issue number: 11
Status: Published - 14 Nov 2019
Peer review performed: Yes