Brain Motion Detection: Reichardt Detectors to Bayesian Inference

YouTube video ID: ReKjU4ZUAjg

Source: YouTube video by MIT OpenCourseWare

The visual system detects motion with direction‑selective detectors that compare spatially displaced inputs after precise time delays, a principle captured by the Reichardt model. V1 neurons act as orientation‑selective filters in space‑time, encoding one‑dimensional velocity components. Area MT receives these 1D signals and combines them to represent two‑dimensional pattern motion, a distinction revealed by plaid‑stimulus experiments that separate component‑selective (V1) from pattern‑selective (MT) responses.

The Aperture Problem

Local measurements of an edge are geometrically ambiguous: many velocity vectors share the same perpendicular component, creating the aperture problem. The visual system resolves this ambiguity by integrating motion information across space and by exploiting unambiguous two‑dimensional features such as corners, occlusion boundaries, and contour continuity. Integration is not automatic; it is gated by perceptual organization that signals the presence of occlusion or relatable contours.
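The pooling idea can be sketched numerically with the intersection-of-constraints computation described later in this summary: each local edge measurement constrains only the velocity component along the edge's normal, and combining two or more non-parallel constraints pins down the full 2D velocity. The function name and the least-squares formulation below are illustrative choices, not from the lecture.

```python
import numpy as np

def intersection_of_constraints(normals, speeds):
    """Recover a 2-D velocity from 1-D component measurements.

    Each measurement i constrains only v . n_i = s_i (the component of the
    velocity along the edge normal n_i).  Two or more non-parallel constraint
    lines intersect at a unique 2-D velocity; with noisy measurements,
    least squares finds the best common intersection point.
    """
    N = np.asarray(normals, dtype=float)   # shape (k, 2): unit normals
    s = np.asarray(speeds, dtype=float)    # shape (k,): normal speeds
    v, *_ = np.linalg.lstsq(N, s, rcond=None)
    return v

# Two grating components of a plaid moving with true pattern velocity (2, 1).
true_v = np.array([2.0, 1.0])
n1 = np.array([1.0, 0.0])                          # vertical edge: sees only vx
n2 = np.array([0.0, 1.0])                          # horizontal edge: sees only vy
v = intersection_of_constraints([n1, n2], [true_v @ n1, true_v @ n2])
print(v)  # [2. 1.]
```

Each grating alone is ambiguous (a whole line of velocities is consistent with it); only the combination singles out the pattern motion, which is exactly the ambiguity the aperture problem names.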

Bayesian Motion Perception

Perceptual inference follows a Bayesian rule: the posterior probability of a velocity equals the product of a sensory likelihood and an internal prior. A prior that favors slow speeds explains why low‑contrast stimuli appear to move slower than high‑contrast stimuli. When contrast is low, the likelihood becomes broad and fuzzy, allowing the slow‑speed prior to dominate the posterior and bias the perceived velocity toward zero.
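With Gaussian assumptions this prediction can be computed in closed form: the posterior mean of a Gaussian likelihood times a Gaussian prior is a precision-weighted average, so a broader (low-contrast) likelihood shifts the estimate toward the prior's preferred speed of zero. The function and its parameters (`prior_sigma`, `noise_sigma`, and the assumption that likelihood width scales as 1/contrast) are a minimal illustrative sketch, not quantities from the lecture.

```python
def perceived_speed(true_speed, contrast, prior_sigma=1.0, noise_sigma=0.2):
    """MAP speed estimate under a zero-mean Gaussian slow-speed prior.

    Likelihood: Gaussian centered on the true speed whose width grows as
    contrast falls (low contrast -> broad, uninformative likelihood).
    Posterior mean = precision-weighted average of likelihood and prior.
    """
    like_sigma = noise_sigma / contrast            # broader at low contrast
    w = prior_sigma**2 / (prior_sigma**2 + like_sigma**2)
    return w * true_speed                          # shrunk toward zero

print(perceived_speed(10.0, contrast=0.9))   # close to 10: sharp likelihood
print(perceived_speed(10.0, contrast=0.1))   # far slower: prior dominates
```

The same stimulus speed thus yields different percepts at different contrasts, matching the observation that low-contrast stimuli appear to move slower.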

Motion, Depth, and Motor Control

The brain interprets motion through a rigidity prior, often inferring three‑dimensional structure from two‑dimensional motion cues such as the kinetic depth effect. Optic flow patterns are used to calibrate posture; moving walls can shift adult posture and cause falls in children. To avoid mistaking retinal motion caused by eye movements for external motion, the brain discounts it using efference copies of motor commands rather than relying on proprioceptive feedback.
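The efference-copy computation amounts to a subtraction: predict the retinal shift that the eye-movement command will cause, and attribute only the residual to the external world. The one-dimensional function below is a toy sketch of that logic; its name and sign conventions are assumptions for illustration.

```python
def perceived_external_motion(retinal_motion, efference_copy):
    """Discount self-generated retinal motion using the motor command.

    A rightward eye movement shifts the whole image leftward on the retina;
    subtracting the predicted shift (from the efference copy of the command)
    leaves only externally caused motion.  Units: deg/s, rightward positive.
    """
    predicted_retinal_shift = -efference_copy      # eye right -> image left
    return retinal_motion - predicted_retinal_shift

# Eye sweeps right at 5 deg/s over a static scene: retinal motion is -5,
# but the predicted shift cancels it, so the world looks stable.
print(perceived_external_motion(-5.0, 5.0))   # 0.0
# Same eye movement while an object also drifts right at 2 deg/s:
print(perceived_external_motion(-3.0, 5.0))   # 2.0
```

Because the prediction comes from the outgoing command rather than from sensory feedback, the cancellation works even before any proprioceptive signal could arrive.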

Mechanisms & Explanations

A Reichardt detector receives spatially offset inputs that are delayed to match the temporal offset of a moving stimulus; simultaneous arrival triggers a direction‑selective response. The intersection‑of‑constraints method resolves two‑dimensional motion by finding the common point where multiple one‑dimensional constraint lines intersect. Bayesian motion inference multiplies a contrast‑dependent likelihood by a slow‑speed prior; high contrast yields a sharp likelihood peak, while low contrast produces a broad likelihood that lets the prior shift the perceived velocity toward slower values.
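The delay-and-correlate scheme of the Reichardt detector can be sketched in a few lines: delay each input channel and correlate it with the undelayed signal from the neighboring point, then subtract the two mirror-image subunits so that opposite directions give opposite signs. The discrete-time formulation and function name below are illustrative assumptions.

```python
import numpy as np

def reichardt_response(left, right, delay):
    """Opponent Reichardt correlator for two nearby sample points.

    left, right: luminance time series at two spatially offset points.
    delay: temporal delay (in samples) matched to the preferred speed;
    a stimulus moving left-to-right at that speed reaches `right` exactly
    `delay` samples after `left`, so the delayed left signal and the
    undelayed right signal arrive simultaneously at the multiplier.
    """
    d_left = np.concatenate([np.zeros(delay), left[:-delay]])    # delayed left
    d_right = np.concatenate([np.zeros(delay), right[:-delay]])  # delayed right
    # Opponent subtraction: positive for rightward, negative for leftward.
    return np.sum(d_left * right) - np.sum(d_right * left)

# A pulse hits the left sensor at t=10 and the right sensor at t=13,
# i.e. rightward motion with a 3-sample offset.
left = np.zeros(40);  left[10] = 1.0
right = np.zeros(40); right[13] = 1.0
print(reichardt_response(left, right, delay=3))   # positive: rightward
print(reichardt_response(right, left, delay=3))   # reversed stimulus: negative
```

The detector responds only when the delay matches the stimulus timing, which is why an array of such units with different delays and spacings acts as a bank of velocity-tuned filters.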

  Takeaways

  • Motion perception begins with direction‑selective detectors that compare spatially displaced inputs with precise time delays, as described by the Reichardt model.
  • V1 neurons encode one‑dimensional velocity components, while MT neurons integrate these signals to represent two‑dimensional pattern motion, a distinction demonstrated with plaid stimuli.
  • The aperture problem creates geometric ambiguity for local edge motion, and the visual system resolves it by pooling information across space or by relying on unambiguous features such as corners and occlusion cues.
  • Bayesian inference explains why low‑contrast objects appear to move slower: a slow‑speed prior combines with a broad, low‑contrast likelihood, shifting the posterior toward slower velocities.
  • Optic flow and a rigidity prior allow the brain to infer three‑dimensional structure from two‑dimensional motion, and efference copies of eye‑movement commands, not proprioception, cancel retinal motion caused by eye movements.

Frequently Asked Questions

How does the brain resolve the aperture problem for moving edges?

The brain integrates motion signals over larger spatial regions and uses unambiguous cues such as corners, occlusion boundaries, and contour continuity. By combining multiple one‑dimensional constraints, it computes a common motion vector that satisfies all local measurements, eliminating the geometric ambiguity.

Why do low‑contrast stimuli appear to move slower than high‑contrast stimuli?

Low contrast weakens the sensory likelihood, making it broader and less informative; the brain's prior favoring slow speeds then exerts a stronger influence on the posterior estimate, biasing perceived velocity toward slower values.
