Heading perception and the structure of the optic acceleration field

26 Apr 2022 · Charlie S. Burlingham, Mengjian Hua, Oliver Xu, Kathryn Bonnen, David J. Heeger

Visual estimation of heading in the human brain is widely believed to be based on instantaneous optic flow, the velocity of retinal image motion. However, we previously found that humans are unable to estimate heading accurately from instantaneous optic flow and instead require time-varying optic flow (Burlingham and Heeger, 2020). We hypothesized that heading perception is computed from optic acceleration, the temporal derivative of optic flow, based on the observation that heading aligns perfectly with a retinal point of zero optic acceleration. However, this result was derived for a specific scenario used in our experiments, in which retinal heading and rotational velocity are constant over time. We previously speculated that as the change over time in heading or rotation increases, the bias of the estimator would increase proportionally, on the idea that the derived case would approximate what happens in a small interval of time (when heading and rotation are nearly constant). In this technical report, we characterize the properties of the optic acceleration field and derive the bias of this estimator for the more common case of a fixating observer, i.e., one who moves while counter-rotating the eyes to stabilize an object on the fovea. For a fixating observer tracking a point on a fronto-parallel plane, there are in fact two singularities of optic acceleration: one that is always at fixation (due to image stabilization) and a second whose bias scales inversely with heading, inconsistent with human behavior. For movement parallel to a ground plane, there is only one singularity of optic acceleration (at the fixation point), which is uninformative about heading. We conclude that the singularity of optic acceleration is not an accurate estimator of heading under natural conditions.
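The relationship between heading and the zero of optic acceleration can be illustrated with a minimal numerical sketch. The snippet below is not from the paper; it assumes the standard Longuet-Higgins and Prazdny instantaneous-flow equations for a pinhole camera (focal length 1), simulates constant translation toward a fronto-parallel plane with no rotation, finite-differences the flow field in time to get optic acceleration, and locates the acceleration singularity. In this simple non-fixating case the singularity coincides with the heading direction (Tx/Tz, Ty/Tz); the paper's point is that this coincidence breaks down for a fixating observer.

```python
import numpy as np

def optic_flow(x, y, Z, T, omega):
    """Instantaneous optic flow (Longuet-Higgins & Prazdny) on an image
    grid (x, y) with depth map Z, translation T, rotation omega."""
    Tx, Ty, Tz = T
    wx, wy, wz = omega
    u = (-Tx + x * Tz) / Z + wx * x * y - wy * (1 + x**2) + wz * y
    v = (-Ty + y * Tz) / Z + wx * (1 + y**2) - wy * x * y - wz * x
    return u, v

# Image grid in normalized coordinates.
n = 201
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))

# Constant translation toward a fronto-parallel plane, no rotation.
T = np.array([0.3, 0.0, 1.0])     # heading direction: x = Tx/Tz = 0.3
omega = np.zeros(3)
Z0, dt = 10.0, 0.01               # initial plane depth, time step

# Flow at two nearby times; the plane gets closer as the observer moves.
u1, v1 = optic_flow(x, y, Z0, T, omega)
u2, v2 = optic_flow(x, y, Z0 - T[2] * dt, T, omega)

# Optic acceleration = temporal derivative of the optic flow field.
au, av = (u2 - u1) / dt, (v2 - v1) / dt

# Singularity: the point of minimum acceleration magnitude.
i = np.argmin(np.hypot(au, av))
print(x.flat[i], y.flat[i])       # near (0.3, 0.0), the heading direction
```

For pure translation the flow and its temporal derivative both vanish at the focus of expansion, so the acceleration singularity recovers heading here; adding the counter-rotation of a fixating observer introduces the extra, biased singularity analyzed in the report.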
