
The visual representation of space in the primate brain

  • Stefan Dowiasch, Andre Kaminiarz and Frank Bremmer

Published/Copyright: October 11, 2022

Abstract

One of the major functions of our brain is to process spatial information and to make this information available to our motor systems so that we can interact successfully with the environment. Numerous studies over the past decades and even centuries have investigated how our central nervous system deals with this challenge. Spatial information can be derived from vision: we see where the cup of coffee stands on the breakfast table or where the un-mute button of our video-conferencing tool is. However, this is always just a snapshot, because the projection of the cup or the un-mute button shifts across the retina with each eye movement, i.e., 2–3 times per second. So where exactly in space are objects located? And what signals guide self-motion and navigation through our environment? While other sensory signals (vestibular, tactile, auditory, even smell) can also help us localize objects in space and guide our navigation, here we focus on the dominant sense in primates: vision. We review (i) how visual information is processed to eventually result in space perception, (ii) how this perception is modulated by action, especially eye movements, at the behavioral and at the neural level, and (iii) how spatial representations relate to other encodings of magnitude, i.e., time and number.

Zusammenfassung

One of the most important tasks of the central nervous system is to process spatial sensory information and to make it available to the motor system for successful interaction with the environment. As early as the end of the 19th century, von Helmholtz addressed the question of how the brain solves this task. Spatial information can be derived from visual signals: we see where the coffee cup stands on the breakfast table in front of us or where the microphone button of the video-conferencing system is located. However, this is always just a snapshot, because the position of the image of the cup or the button on the retina shifts with every eye movement. So where exactly are objects located in the world? And which signals help us navigate purposefully through our environment? Even though other sensory systems (vestibular, tactile, auditory, and even olfaction) also contribute to the encoding of space, in this review we concentrate on the dominant sensory system of primates: vision. We summarize how visual information is processed and becomes spatial perception, how action, especially eye movements, modulates this perception at the behavioral and neural level, and how the processing of spatial, temporal, and numerical information is interrelated.

Space and the brain

Vision starts in the retina of the eye, where the energy of light is transduced into photoreceptor potentials; the region with the highest density of photoreceptors is called the fovea. Transduction is followed by preprocessing within the retinal network, eventually reaching the retinal ganglion cells (RGCs), whose axons leave the eye and send their information centripetally. The major projection zone of the RGCs is the lateral geniculate nucleus (LGN) of the thalamus, from where neurons project to primary visual cortex, also called striate cortex, area 17, or, in primates, area V1. This retino–geniculo–cortical route reveals an important functional characteristic called retinotopy: information about neighboring objects in the world is represented by the activity of neighboring neurons. Importantly, visual information processing at the cortical level does not end in area V1. Instead, processing continues through a network of areas, all dedicated to analyzing specific features of scenes in the outside world and to making this information available to our motor systems, e.g., to reach for the cup of coffee in the morning or to avoid an obstacle while navigating through an environment. Other retinofugal projections target, e.g., the superior colliculus (SC) or the nucleus of the optic tract (NOT). The functional role of these and other projections, however, will not be discussed in further detail in this review.

The human visual cortical system is parcellated into numerous functional modules. Likewise, the visual cortical system of the macaque monkey, the premium animal model of human vision and sensorimotor processing, consists of more than 30 different areas (Markov et al., 2013). Starting from the work of Hubel and Wiesel (1959), we know that neurons in area V1 typically respond to oriented lines or gratings presented within their visual receptive field (RF). The RF location is the only part of space “seen” by such a neuron; to the rest it is literally blind. Accordingly, the firing of this neuron represents spatial information only implicitly. Yet, it is generally assumed that what we call space perception is based on processing in downstream cortical areas. Notably, space is not represented uniformly in area V1. The largest representation within area V1 is devoted to the smallest part of visual space, the foveal and parafoveal region, providing us with the highest possible visual resolution. This highly nonuniform representation of visual space causes humans and other primates to move their eyes more often than their heart beats, typically 2–3 times per second (Hayhoe and Ballard, 2005).
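
This nonuniform mapping is commonly summarized by the cortical magnification factor, i.e., the millimeters of V1 devoted to one degree of visual angle. A frequently used inverse-linear approximation (a textbook formula, not taken from the present article; the parameter values are rough estimates for human V1) is

    M(E) = M_0 / (1 + E / E_2),

where E is eccentricity in degrees, M_0 is the foveal magnification (on the order of 15–25 mm/deg), and E_2 (on the order of 1°) is the eccentricity at which magnification has dropped to half its foveal value.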

From area V1, visual cortical processing takes two major routes, the so-called ventral and dorsal pathways (Ungerleider and Mishkin, 1982). Processing along the ventral pathway is dedicated to object vision in a wide sense. A key stage along this route is area V4, where neurons are tuned to certain features of an object, such as color, edges, or convex/concave surfaces (Conway, 2009). In downstream areas of inferior–temporal (IT) cortex, many neurons respond preferentially to objects themselves (Arcaro and Livingstone, 2021). Likewise, studies in patients with lesions of this part of the brain reveal impairments in object recognition (Barton, 2011). In addition to objects, the processing of faces is key to the visual function of the ventral pathway, in monkeys (Hesse and Tsao, 2020) and humans (Khuvis et al., 2021).

The dorsal visual cortical pathway, on the other hand, is implicated in the processing of (self-)motion and spatial information. The middle temporal area (area MT) is a key stage (and bottleneck) of the dorsal pathway (Albright, 1984), with many neurons responding in a direction-selective manner to visual motion. From here, projections target extrastriate and parietal cortex: the medial superior temporal area (area MST), the lateral and ventral intraparietal areas (areas LIP and VIP), and area 7a, the highest stage of the dorsal visual cortical pathway. Projections of all these areas are directed towards frontal cortex, including premotor cortex (PM) and the frontal and supplementary eye fields (FEF and SEF, respectively). In addition to these neocortical areas, another region of the brain has been shown to be involved in a key aspect of spatial perception, i.e., navigation: the hippocampal formation in the medial temporal lobe (Tukker et al., 2022). The specific roles of these areas and regions for spatial vision and perception are discussed below.

Eye movements and visual perceptual stability

To view objects with the highest possible resolution at the fovea, we typically perform foveating eye movements. Each such eye movement induces a shift of the image of the world on the retina. Yet, despite this image shift, we perceive the outside world as stable. This phenomenon is called visual perceptual stability, and it is highly remarkable. One class of foveating eye movements is saccades: they move the eye at more than 500°/s towards the next object and thus last only a few tens of milliseconds (Gibaldi and Sabatini, 2021). Smooth pursuit eye movements allow us to track moving targets. And even during fixation our eyes do not stand still (Rucci and Poletti, 2015). Instead, a mixture of microsaccades, drifts, and tremor keeps the fixating eyes in motion, thereby avoiding what is known as fading, i.e., the disappearance of the visual image due to a perfectly stabilized retinal image. The next section describes what is known about visual perceptual stability and its temporal dynamics.
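
The lawful relation between saccade amplitude and peak velocity, the so-called main sequence (Gibaldi and Sabatini, 2021), is commonly fitted with a saturating exponential. The following minimal sketch illustrates that fit; the function name and parameter values are illustrative assumptions, not data from this article.

    import numpy as np

    def peak_velocity(amplitude_deg, v_max=550.0, c=8.0):
        # Main-sequence fit: peak velocity (deg/s) saturates with amplitude.
        # v_max and c are assumed, typical-order-of-magnitude values.
        return v_max * (1.0 - np.exp(-amplitude_deg / c))

    print(peak_velocity(np.array([2.0, 5.0, 10.0, 20.0])))  # rises towards v_max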

Space perception and action: behavior

Ever since Helmholtz (1867), researchers have been fascinated by the phenomenon of visual perceptual stability, i.e., the ability to attribute the movement of the image of the environment on the retina to one’s own eye movements. While we address the neural signature of visual perceptual stability below, progress in measurement techniques and experimental equipment has revealed that visual perceptual stability is not complete. Instead, it is accompanied by brief, sometimes rather large perceptual distortions, which go unnoticed in everyday life (for reviews see, e.g., Ross et al., 2001; Binda and Morrone, 2018).

The temporal dynamics of saccades make it experimentally challenging to investigate perisaccadic visual spatial perception (Bremmer and Krekelberg, 2003). In such experiments, stimuli typically are flashed briefly before, during, or after a saccade, and participants indicate the perceived location of the flash. Depending on the exact experimental conditions, two error patterns are observed (Lappe et al., 2000). For stimuli flashed in otherwise complete darkness, results show a biphasic pattern of localization error (shift). Prior to the saccade, when the eyes are not yet moving, perceived stimulus locations are shifted in the direction of the upcoming saccade. Around saccade onset, the direction of mislocalization reverses, overshooting in the opposite direction and returning to veridical around 100 ms after saccade offset. Obviously, this dynamic pattern can only be tested with stimuli much shorter (<10 ms) than the saccade itself. Moreover, in everyday life within a rather stable visual environment, this perceptual instability goes unnoticed. Nevertheless, it is a robust experimental finding, which makes it possible to trace the neural mechanisms of spatial vision (Morris et al., 2012; see below). Notably, a second pattern emerges under ambient light conditions. This mislocalization is best described as a compression of perceptual space: regardless of where in the visual field a stimulus is flashed perisaccadically, it is perceived near the saccade target (or the landing point of the eyes; Ross et al., 1997). A neural correlate of this visual illusion has been identified in the macaque visual cortical system (Krekelberg et al., 2003; see below).
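
A toy model captures the logic of perisaccadic compression (a didactic sketch under simple assumptions, not the quantitative model of Ross et al. (1997) or Morris et al. (2012)): flashed positions are pulled towards the saccade target by a factor that peaks around saccade onset.

    import numpy as np

    def perceived_position(flash_pos, target_pos, t_ms, sigma=35.0, c_max=0.7):
        # Toy model: compression strength follows a Gaussian time course
        # centered on saccade onset; all parameter values are assumptions.
        c = c_max * np.exp(-0.5 * (t_ms / sigma) ** 2)
        return target_pos + (1.0 - c) * (flash_pos - target_pos)

    for t in (-150, -50, 0, 50, 150):  # flash time relative to saccade onset (ms)
        print(t, perceived_position(flash_pos=-10.0, target_pos=10.0, t_ms=t))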

Saccadic suppression

These are only two of many examples of saccadic misperception. In addition to other modulatory effects (e.g., on temporal or numerosity perception, see below), vision as a whole is markedly influenced by saccades. Contrast sensitivity is reduced, with the decrease starting already prior to saccade onset and lasting until about 100 ms thereafter. This effect is called saccadic suppression (Diamond et al., 2000). Figure 1 illustrates an experimental approach to investigate saccadic suppression and its time course. Remarkably, reduced visibility is predominantly found for information processed in the dorsal (i.e., motion-sensitive) visual cortical pathway (Burr et al., 1994). Furthermore, the finding that saccadic suppression starts to evolve before saccade onset suggests that it is foremost an active process, likely playing a key role in visual perceptual stability.

Figure 1:
(A) Schematic of the time course of an experimental paradigm to test for saccadic suppression. Across trials, a brief luminant visual stimulus is shown at various times relative to a saccade. Colored lines indicate the (exemplary) time of stimulus presentation (red), example horizontal and vertical eye traces (blue), and the timing of the fixation and saccade target (purple and green, respectively). (B) Comparison of behavioral and physiological measures of saccadic suppression: for the behavioral data (black line; data from Diamond et al., 2000), the horizontal axis shows the time of stimulus presentation relative to saccade onset, and the right vertical axis indicates normalized contrast sensitivity. Neuronal data, as detailed in the next section, were shifted along the time axis to correct for response and processing latencies and represent the neuronal excitability (left vertical axis) of populations of neurons from macaque areas MT and MST (cyan and blue curves, respectively) and area VIP (red curve). The time course of neuronal excitability in all three motion areas of the macaque revealed a good qualitative match with the time course of the perceptual loss of sensitivity around saccades in human observers. Adapted and modified from Bremmer et al., J. Neurosci., 2009. Copyright 2009 Society for Neuroscience.

Not only saccades but also smooth pursuit modulates visual spatial perception. Perceived stimulus locations are shifted in the direction of pursuit, with the error depending on the stimulus location with respect to the fovea (van Beers et al., 2001; Dowiasch et al., 2020). As for saccades (see below), a combined neurophysiological and modeling approach has provided insight into the neural mechanisms underlying this perceptual illusion (Dowiasch et al., 2016).

Self-motion and navigation

Not only foveating but also reflexive eye movements such as optokinetic nystagmus (OKN) induce systematic spatial mislocalization (Kaminiarz et al., 2007). OKN is induced either by frontoparallel motion or by rotational movements. In everyday life, however, we often navigate in full 3D space, dominated by translatory forward self-motion. For successful navigation, we need to monitor our self-motion direction (heading) and our travel distance (path integration). Without eye movements, heading perception is close to veridical, with typical errors in the range of 2°. This performance can be well explained by the readout of populations of neurons responsive to the direction of (visually simulated) self-motion (Lappe et al., 1996; Schmitt et al., 2020). Eye movements introduce distortions of the optic flow field (Matthis et al., 2022). However, humans can compensate for such spatio-temporal warping induced by smooth eye movements during heading judgments (Lappe et al., 1999), and a neural signature of this ability has been identified in monkey extrastriate and parietal cortex (Bremmer et al., 2010; Kaminiarz et al., 2014). Only recently was the effect of saccades on heading perception also investigated (Figure 2). In these experiments, brief (40 ms) self-motion stimuli, simulating self-motion across a ground plane in various directions, were presented perisaccadically (Bremmer et al., 2017). Indeed, saccades led to a perceptual compression of heading towards the line of sight, with a time course very similar to those of the above-described perisaccadic modulations of vision. Based on neurophysiological recordings in monkeys and modeling, we now have a good understanding of the neural basis of this visual perceptual illusion.
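
The population readout invoked above (Lappe et al., 1996; Schmitt et al., 2020) can be illustrated with a minimal sketch. Gaussian tuning curves, Poisson noise, and a vector-average decoder are all simplifying assumptions chosen for illustration, not the models used in those studies.

    import numpy as np

    rng = np.random.default_rng(1)
    preferred = np.linspace(-60.0, 60.0, 41)  # preferred headings (deg azimuth)

    def population_response(heading, sigma=25.0):
        # Idealized Gaussian heading tuning with Poisson spiking noise.
        rates = 50.0 * np.exp(-0.5 * ((heading - preferred) / sigma) ** 2)
        return rng.poisson(rates)

    def decode(r):
        # Vector-average readout: preferred headings weighted by spike counts.
        return np.sum(r * preferred) / np.sum(r)

    estimates = [decode(population_response(15.0)) for _ in range(200)]
    print(np.mean(estimates), np.std(estimates))  # near 15 deg, error of a few deg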

Figure 2:
Panels A and B show the temporal sequence (A) and spatial layout (B) of the experiment. Stimuli were presented on a tangent screen covering the central 81° × 65° of the visual field. Across trials, simulated self-motion was pseudo-randomized in one of five directions: forward to the left (−30° and −15°), straight ahead (0°), or forward to the right (15° and 30°). Self-motion stimuli consisted of five consecutive frames of 100% coherent dot motion, i.e., lasting 40 ms. Each trial started with presentation of the fixation target and the stationary ground-plane stimulus. After a randomized time, the fixation target was switched off and the saccade target was switched on (until the end of the trial), inducing a visually guided upward saccade of 10°. Across trials, the onset of the self-motion stimulus ranged from about 200 ms before to 200 ms after saccade onset. At the end of each trial, the saccade target and the ground-plane stimulus were switched off and a ruler with a random sequence of numbers was presented on the screen. Subjects had to indicate via keyboard input the number on the ruler that appeared closest to their perceived heading direction. Panels C and D indicate the time course of compression of perceived heading. Responses from one example subject are shown in C. Symbols represent data from single trials: upward pointing triangles for heading to the right (magenta: +15°, red: +30°), downward pointing triangles for heading to the left (dark cyan: −15°, green: −30°), and circles for heading straight ahead (blue, 0°). Solid lines represent running means of five consecutive samples each, assigned to the central sample value. Dashed lines show the performance for the same experiment during continuous fixation. Panel D indicates compression of perceived heading, defined as the normalized standard deviation of the five time courses of perceived heading for all subjects (colored lines). The robust effect was observed in all participants. Maximum compression, as indicated by the minimum value of the normalized standard deviation, was observed just prior to saccade onset. Adapted and modified from Bremmer et al. (2017).

Path integration is another key feature of self-motion processing and navigation. It is often tested in the context of homing, i.e., the ability to return to the starting point of a journey after an outbound route including translations and rotations. Homing appears to be a universal feature across the animal kingdom, documented not only in humans (Warren, 2019), but also in insects, rodents, birds, camels and elephants (for reviews, see e.g., Heinze et al., 2018; Poulter et al., 2018).
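
Computationally, path integration amounts to accumulating self-motion estimates into a homing vector. The following noise-free dead-reckoning sketch (the outbound route is invented for illustration) integrates a sequence of turns and translations and returns the vector pointing back to the start.

    import numpy as np

    def home_vector(steps):
        # steps: sequence of (turn_deg, distance) pairs of an outbound route.
        pos = np.zeros(2)
        heading = 0.0  # radians; 0 points along the y-axis
        for turn_deg, dist in steps:
            heading += np.radians(turn_deg)
            pos += dist * np.array([np.sin(heading), np.cos(heading)])
        return -pos  # vector from the final position back to the start

    route = [(0.0, 10.0), (90.0, 5.0), (45.0, 4.0)]  # hypothetical journey
    print(home_vector(route))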

Space perception and action: physiology

Saccadic suppression

As discussed above, vision of luminant stimuli is impaired by saccades. Such stimulus features are processed predominantly along the dorsal visual pathway. In search of a neural correlate of the behavioral finding, various areas along this pathway in the macaque have been tested for their perisaccadic responsiveness (Bremmer et al., 2009; Figure 1B). As hypothesized, the responsiveness of neurons predominantly in the motion-sensitive areas MT, MST, and VIP was affected perisaccadically. The time course of this modulation was very similar to human behavioral data. Notably, functional equivalents of all three areas have been identified in humans (areas MT and MST: Huk et al., 2002; area VIP: Bremmer et al., 2001), suggesting a functional link between the response properties of these areas and saccadic suppression.

Predictive remapping

While saccadic suppression is thought to facilitate visual perceptual stability, we are not blind across saccades (Nicolas et al., 2021). So the question arises how we link the images of the world as seen before and after an eye movement, and if and how this link could contribute to visual perceptual stability. In a seminal study, Duhamel, Colby, and Goldberg provided first evidence for a potential neural basis of the behavioral effects (Duhamel et al., 1992). The authors trained macaque monkeys on a visually guided saccade task and probed the responsiveness at a neuron’s current and future receptive field at various times relative to saccade onset. Remarkably, about 40% of neurons in area LIP started to respond to stimuli presented at the future RF even before the saccade, i.e., when the eyes still fixated the initial target, a phenomenon termed predictive remapping. Neurons of this type anticipate the sensory consequences of an upcoming saccade. Similar functional characteristics have also been observed in other brain structures involved in the control of saccadic eye movements, especially the frontal eye fields (FEF) in frontal cortex and the midbrain superior colliculus (SC). It thus seems that saccadic control areas are also involved in facilitating visual perceptual stability.
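
The geometry of predictive remapping is straightforward, even if its circuit implementation is not: the future field is the current receptive field shifted by the upcoming saccade vector. A minimal, purely illustrative sketch:

    import numpy as np

    def future_rf(current_rf, saccade_vector):
        # Screen location (in current retinal coordinates) that the RF will
        # cover after the saccade; remapping neurons start responding there
        # already before the eyes move.
        return np.asarray(current_rf) + np.asarray(saccade_vector)

    print(future_rf(current_rf=(5.0, 2.0), saccade_vector=(10.0, 0.0)))  # [15. 2.]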

Gain fields

Predictive remapping provides a plausible neural basis for visual perceptual stability. Yet, it cannot account for perisaccadic distortions of space perception, i.e., shift and compression. An alternative hypothesis has been put forward by Morris, Krekelberg, and Bremmer (Morris et al., 2012). The authors studied the dynamics of so-called gain fields, i.e., the influence of the position of the eyes in the orbits on neuronal discharges (Bremmer, 2000; Morris et al., 2013, 2016). Modeling studies showed that such response properties are suited to represent visual information in a head-centered frame of reference (Bremmer et al., 1998; Dowiasch et al., 2016). Unlike previous studies, the approach of Morris and colleagues made it possible to probe the temporal dynamics of eye-position effects across saccades. The results were surprising: the average activity of a population of neurons from the dorsal visual pathway (areas MT, MST, LIP, and VIP) started to change prior to saccade onset. Yet, the dynamics of this change in activity were slower than the eye movement itself. This functional characteristic can explain the above-described perisaccadic shift, suggesting that gain fields play an important role in spatial perception.
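
Gain fields are often sketched as the multiplication of a retinotopic tuning curve with a (near-)planar function of eye position (cf. Bremmer et al., 1998); a downstream linear readout of such a population can then recover head-centered location. The idealized form and all parameter values below are assumptions for illustration, not fits to data.

    import numpy as np

    def gain_field_response(stim_retinal, eye_pos, rf_center, slope=0.02):
        # Gaussian retinal tuning multiplied by a planar eye-position gain.
        visual = np.exp(-0.5 * ((stim_retinal - rf_center) / 10.0) ** 2)
        return visual * (1.0 + slope * eye_pos)

    # Same head-centered location (retinal + eye position = 20 deg),
    # probed at two different eye positions:
    print(gain_field_response(stim_retinal=20.0, eye_pos=0.0, rf_center=20.0))
    print(gain_field_response(stim_retinal=10.0, eye_pos=10.0, rf_center=20.0))
    # The responses differ; across a population, this difference carries the
    # eye-position signal a decoder needs to reconstruct head-centered space.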

Perisaccadic modulation of visual receptive fields

A shift of perceived spatial locations is one of the two types of perisaccadic mislocalization. The other is the so-called perisaccadic compression. Krekelberg and colleagues (2003) showed that such a compression of visual perceptual space can be traced to the representation of retinal position in cortical areas MT and MST of the monkey. An alternative explanation was put forward by Goldberg and colleagues, who demonstrated a perisaccadic expansion of the visual receptive fields of neurons in macaque area LIP (Wang et al., 2016). As for areas MT and MST, a functional equivalent of area LIP has been identified in human parietal cortex (Konen et al., 2004), suggesting a leading role of these areas in the perisaccadic modulation of space perception.

Head-centered encoding of visual space

All findings described above rest on an implicit assumption or even explicit statement: except for brief, perisaccadic modulations, the location of a visual receptive field with respect to the fovea does not change. This, however, is not always the case. In another seminal work, Duhamel and colleagues showed that about one third of the neurons in macaque area VIP have head-centered visual receptive fields (Duhamel et al., 1997). As of today, it is unclear how neurons in area VIP achieve this remarkable response behavior. Speculatively, these neurons might receive input from the whole retina (via projections through the LGN and visual cortex), of which only the part matching the current eye position is gated through. Follow-up studies revealed that the response latencies of VIP neurons encoding visual space in head-centered coordinates are on average longer than those of neurons encoding visual space in an eye-centered reference frame (Avillac et al., 2005; Schlack et al., 2005). This might suggest that it takes the brain a few tens of milliseconds to transform visual spatial information from the initial, eye-centered encoding to a head-centered encoding. More experimental and computational work, however, is needed to better understand this remarkable finding.

Self-motion

Today, we have a rather good understanding of the basic principles of the neural processing of self-motion information, especially heading (Noel and Angelaki, 2022). Nevertheless, the control of self-motion can be a challenging task. Under certain circumstances, immediate adjustments might be required to stay on track. Hence, it would appear advantageous if the processing of self-motion information were predictive and quasi-reflexive, i.e., independent of attentional load. Predictive coding is suggested to facilitate sensory processing by attenuating responses to predictable sensory information and enhancing responses to unpredicted events, such as unexpected changes in heading (Friston, 2018). In the same vein, preattentive processing of self-motion direction could accelerate and thereby facilitate successful navigation irrespective of cognitive load. In this respect, heading would differ from path integration, whose accuracy has been shown to be modulated by a secondary task (Glasauer et al., 2009).
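
In its simplest form, predictive coding boils down to propagating prediction errors: expected input is attenuated, while an unexpected event such as a sudden heading change produces a large error signal. A minimal toy loop under these assumptions (the learning rate and stimulus sequence are invented):

    import numpy as np

    def prediction_errors(inputs, learning_rate=0.3):
        # Running prediction updated by its own error; the error trace is
        # the toy analog of a "surprise" response.
        prediction, errors = 0.0, []
        for x in inputs:
            e = x - prediction
            errors.append(e)
            prediction += learning_rate * e
        return np.array(errors)

    headings = np.array([0.0] * 8 + [15.0] * 4)  # unexpected change at trial 9
    print(np.round(prediction_errors(headings), 2))  # error spikes, then decays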

A specific electroencephalography (EEG) component, the so-called visual mismatch negativity (vMMN), has been suggested to be indicative of predictive and preattentive processing of sensory stimuli (Stefanics et al., 2018). The MMN is typically recorded in an oddball experiment employing standard and deviant stimuli, presented typically in 80% and 20% of the trials, respectively, and it is computed as the difference between two event-related potentials (ERPs). Deviant stimuli elicit a more negative N2 ERP component than frequently presented standard stimuli, and this difference (deviant minus standard) constitutes the MMN. In the predictive coding framework, the MMN carries the prediction-error signal elicited by the mismatch between a sensory event and the predictions formed by prior experience.
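
Operationally, the vMMN is a difference wave. A minimal sketch of the computation on synthetic data follows; the 127–143 ms window mirrors the description in the text, but the data, array layout, and variable names are placeholders, not the authors' pipeline.

    import numpy as np

    def vmmn(deviant_epochs, standard_epochs, times, window=(0.127, 0.143)):
        # Difference wave (deviant - standard), plus its mean in the window.
        diff = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
        mask = (times >= window[0]) & (times <= window[1])
        return diff, diff[mask].mean()

    times = np.arange(-0.1, 0.4, 0.002)  # 500 Hz sampling, stimulus at t = 0
    rng = np.random.default_rng(0)
    standard = rng.normal(0.0, 1.0, (80, times.size))  # 80% standard trials
    deviant = rng.normal(-0.5, 1.0, (20, times.size))  # 20% deviants, more negative
    print(vmmn(deviant, standard, times)[1])  # negative = mismatch negativity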

A recent study tested the hypothesis of a predictive encoding of visual self-motion information in humans and macaque monkeys using similar experimental protocols (Schmitt et al., 2021). Participants were presented with visually simulated self-motion (forward to the left or to the right) across a ground plane. Visual evoked potentials (VEPs) for identical self-motion directions showed a visual mismatch negativity (vMMN) between standard and deviant trials in both humans and monkeys (Figure 3), suggesting a predictive encoding of heading.

Figure 3:
(A) The experimental paradigm. Each trial started with a stationary ground-plane stimulus, followed by a stimulus mimicking self-motion (blue arrow) in one of two directions. (B) Topographic maps show the mean difference of the ERPs for identical headings as recorded in deviant and standard trials. The analysis window ranged from 127 to 143 ms after self-motion onset. The MMN is indicated by the blue color over parietal and occipital electrodes. It was slightly lateralized, with a stronger MMN for contraversive headings. Adapted and modified from Schmitt et al. (2021).

Navigation

Numerous studies in rodents as well as primates, including humans, have provided clear evidence for a causal involvement of the hippocampus and the parahippocampal, entorhinal, and retrosplenial cortices in spatial navigation, scene detection, and spatial memory (e.g., Moser et al., 2017). Different functionally defined classes of neurons are considered to contribute to establishing what has been termed a cognitive map, i.e., a representation of the spatial environment that supports self-localization and guides future navigational action (McNaughton et al., 2006). Especially the role of the hippocampus has been unveiled by these studies. Its critical involvement in (real or simulated) spatial navigation in humans has recently been further substantiated by intracranial recordings in presurgical epilepsy patients (Kunz et al., 2019). While invasive, only this approach allows an understanding of visual spatial processing at the highest possible spatial and temporal resolution.

Research on the (para-)hippocampal formation and related brain regions and their role in spatial navigation, on the one hand, and work investigating self-motion responses at the visual cortical level, on the other, have largely proceeded in isolation. This is surprising, given that it appears obvious, if not imperative, that both sub-systems are functionally linked. As of today, only very few studies have aimed to unveil links between the two networks. Research in monkeys showed that a subset of MST neurons exhibits functional properties similar to hippocampal place cells (Froehler and Duffy, 2002). Likewise, fMRI studies in humans have aimed to connect work on self-motion processing and its related cortical activation with navigational tasks activating the hippocampal formation and related structures (Sulpizio et al., 2020).

Space, time, and number in the brain

It might come as a surprise to consider space, time, and number together in a review on spatial perception. Yet, there is good reason to consider all three as highly interwoven. Indeed, there is considerable evidence that space, time, and number are part of a toolkit that humans share with non-human animals (Dehaene and Brannon, 2011). In all three domains, the nervous system must encode and compute quantities. The question arises whether a common set of coding and computational mechanisms underlies quantity manipulation in all three domains, and whether the different systems share similar or even the same brain circuitry. Indeed, there is evidence that parietal cortex might be key to answering these questions (Bueti and Walsh, 2009).

The mental number line

The most frequently used example of the link between numbers and space is the SNARC effect (spatial-numerical association of response codes). In a SNARC experiment, human participants judge number parity with button presses of the left and right hand; they respond faster with the left hand to small numbers and with the right hand to large numbers. In general, the SNARC effect is seen as an indication of a mental number line (MNL), i.e., a fixed link between space and numbers. More recent behavioral research even suggests an orientation of this mental number line in full 3D space, with smaller numbers being represented left, down, and near, and larger numbers right, up, and far (Aleotti et al., 2020; Hesse and Bremmer, 2017).
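
The SNARC effect is typically quantified by regressing, for each number, the right-hand minus left-hand reaction-time difference on number magnitude; a reliably negative slope is the classic signature. A minimal sketch with invented reaction times:

    import numpy as np

    def snarc_slope(digits, rt_left, rt_right):
        # Slope of dRT = RT(right) - RT(left) regressed on digit magnitude.
        drt = np.asarray(rt_right) - np.asarray(rt_left)
        slope, _ = np.polyfit(digits, drt, deg=1)
        return slope

    digits = np.array([1, 2, 3, 4, 6, 7, 8, 9])
    rt_left = np.array([520, 515, 510, 505, 495, 490, 485, 480])   # ms, invented
    rt_right = np.array([540, 530, 520, 510, 470, 465, 455, 450])  # ms, invented
    print(snarc_slope(digits, rt_left, rt_right))  # negative slope = SNARC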

Eye movements affect spatial, temporal, and numerical perception

We have detailed above that eye movements induce characteristic misperceptions of space, one of which is a compression of perceived space. If space, time, and number were treated similarly at the neural level, saccades should also modulate temporal and numerical perception. This is indeed what behavioral experiments have shown. In a first set of studies, participants were asked to compare the time intervals between two pairs of extended horizontal bars while they made large horizontal saccades. The first pair was a test stimulus, with the interval between the flashed bars fixed at 100 ms, presented at unpredictably varying times relative to the saccade. The second pair was a probe stimulus of variable interval, presented 2 s after the test. Temporal perception during steady fixation was close to veridical. Briefly before a saccade, however, with the eye not yet moving, the 100 ms test interval appeared to be only 50 ms long, suggesting that subjective time had been compressed by a factor of two (Morrone et al., 2005). A similar result was found for an abstract concept of quantity: subjects consistently underestimated the results of rapidly computed mental additions and subtractions when the operands were briefly displayed before a saccade (Binda et al., 2012), although the recognition of the number symbols was unimpaired. These results are consistent with the hypothesis of a common, abstract metric encoding magnitude along multiple dimensions: space, time, and number.

Outlook

All of the above-described studies were performed with the head stabilized (in human perceptual experiments) or even fixed (in animal experiments). As of today, this approach is the gold standard, especially for neurophysiological recordings in awake, behaving monkeys. While the obtained data demonstrated an influence of eye position and eye movements on spatial perception and encoding, the underlying studies were far from studying natural vision, which also includes head and body movements. Based on studies, e.g., in mice (Stringer et al., 2019), there is good reason to assume that head and body position (and movements) affect neural activity also in primate visual cortex. Currently, in monkey neurophysiology, only a few groups worldwide go beyond the gold standard and record from head-unrestrained preparations (Sajad et al., 2020). This is mainly because only recently have experimental tools become available to record from freely moving larger animals, especially macaques (Yin et al., 2014). Obviously, such recordings require measuring not only eye but also head and body position in space at considerable spatial and temporal resolution. Here, too, only the latest technical developments have made this possible (Mathis et al., 2018). Overall, these technical (hardware and software) developments have opened up a whole new line of research, i.e., computational neuroethology (Datta et al., 2019; Robson and Li, 2022). First experiments on macaques have been performed (Berger et al., 2020; Mao et al., 2021). It will be these and related studies in freely moving animals that eventually will allow us to answer the ultimate question of systems neuroscience: how visual space is encoded in the primate brain.


Corresponding author: Frank Bremmer, Applied Physics and Neurophysics, Faculty of Physics, Philipps-Universität Marburg, Marburg, Germany; and Center for Mind, Brain and Behavior – CMBB, Philipps-Universität Marburg and Justus-Liebig-Universität Gießen, Gießen, Germany, E-mail:

Funding source: Deutsche Forschungsgemeinschaft (DFG): CRC/TRR-135 Cardinal mechanisms of perception; IRTG-1901 The Brain in Action; RU-1847 Primate Systems Neuroscience

Funding source: The Hessian Ministry of Higher Education, Research, Science, and the Arts (HMWK): Cluster project The Adaptive Mind

About the authors

Stefan Dowiasch

Stefan Dowiasch studied physics at the University of Marburg and received his doctoral degree in 2015 for the investigation of visual perceptual stability and the processing of self-motion information using neurophysiology, psychophysics, and neuropsychology. After a Post-Doc at the University of Marburg, where he investigated temporal encoding in visual cortex during eye movements, he became the chief scientific officer and head of the software department at Thomas RECORDING GmbH. There, he developed new hardware and software products for basic research, especially in the field of neurophysiology, and, as lead PI of a BMBF-funded collaborative project, established two systems for the early detection of Parkinson’s disease by means of non-invasive eye movement measurements. In 2022 he moved back to the University of Marburg as a scientific assistant, expanding his research focus from neurophysiological and psychophysical studies on eye movements and spatial perception towards biomarker research, its use in clinical practice, and artificial neural networks.

Andre Kaminiarz

Andre Kaminiarz studied Biology at Ruhr-University Bochum and obtained his doctoral degree from the University of Marburg in 2011 for the investigation of the localization of stimuli during simulated self- and object motion, conducting electrophysiological and psychophysical experiments. From 2011 he worked as a Post-Doc and, since 2013, as a scientific assistant in the group of Frank Bremmer. Since 2013 he has also served as animal welfare officer at the University of Marburg.

Frank Bremmer

Frank Bremmer received a Diploma degree in Physics from the University of Marburg in 1989. In 1994, he obtained his PhD from Ruhr-University Bochum, Germany, at the Faculty of Biology in the group of Klaus-Peter Hoffmann. After spending two years at the Collège de France in Paris, France, as a Post-Doc, he returned to Ruhr-University Bochum, where he obtained his Habilitation in Neurobiology in 2000. Frank Bremmer has been Professor of Applied Physics and Neurophysics at the Faculty of Physics, University of Marburg, since 2001. His research interests are in the field of systems neuroscience, including the multisensory representation of space and motion and spatial perception during eye, head, and body movements. From 2004 to 2009, he was speaker of the DFG Research Training Group (RTG) 885 NeuroAct. Since 2013, he has been speaker of the DFG-funded German-Canadian IRTG/CREATE-1901 The Brain in Action. Since 2014, he has been a member of the steering committee of the CRC/TRR-135 Cardinal mechanisms of perception. Since 2021, he has been Co-speaker of the Cluster project The Adaptive Mind, funded by the Hessian Ministry of Higher Education, Research, Science and the Arts (HMWK). In 2008, Frank Bremmer was elected the founding director of the newly established Graduate Center for Life- and Natural Sciences at the University of Marburg. In 2010, he was elected Vice President of the University for a 3-year term. During that time, he was responsible for research, technology transfer, support of young scientists, and international relations of the University of Marburg. In addition, he was appointed as a member of the Executive Board of the German University Association of Advanced Graduate Training (UniWind, 4-year term) and as a member of the steering committee of the Council for Doctoral Education of the European University Association (3-year term). In 2016 and 2020, he was elected Member of the German Research Foundation (DFG) Review Board 206 ‘Neurosciences.’ In 2018 he was elected the founding director of the newly established Center for Mind, Brain and Behavior, CMBB, at the Universities of Marburg and Gießen.

  1. Author contributions: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.

  2. Research funding: Part of this work was supported by grants from the Deutsche Forschungsgemeinschaft (DFG: CRC/TRR-135 Cardinal mechanisms of perception; IRTG-1901 The Brain in Action; RU-1847 Primate Systems Neuroscience) and the Hessian Ministry of Higher Education, Research, Science, and the Arts (HMWK: Cluster project The Adaptive Mind).

  3. Conflict of interest statement: The authors declare no conflicts of interest regarding this article.

References

Albright, T.D. (1984). Direction and orientation selectivity of neurons in visual area MT of the macaque. J. Neurophysiol. 52, 1106–1130, https://doi.org/10.1152/jn.1984.52.6.1106.Search in Google Scholar PubMed

Aleotti, S., Di Girolamo, F., Massaccesi, S., and Priftis, K. (2020). Numbers around descartes: A preregistered study on the three-dimensional SNARC effect. Cognition 195, 104111, https://doi.org/10.1016/j.cognition.2019.104111.Search in Google Scholar PubMed

Arcaro, M.J. and Livingstone, M.S. (2021). On the relationship between maps and domains in inferotemporal cortex. Nat. Rev. Neurosci. 22, 573–583, https://doi.org/10.1038/s41583-021-00490-4.Search in Google Scholar PubMed PubMed Central

Avillac, M., Deneve, S., Olivier, E., Pouget, A., and Duhamel, J. (2005). Reference frames for representing visual and tactile locations in parietal cortex. Nat. Neurosci. 8, 941–949, https://doi.org/10.1038/nn1480.Search in Google Scholar PubMed

Barton, J.J.S. (2011). Disorder of higher visual function. Curr. Opin. Neurol. 24, 1–5, https://doi.org/10.1097/wco.0b013e328341a5c2.Search in Google Scholar PubMed

van Beers, R.J., Wolpert, D.M., and Haggard, P. (2001). Sensorimotor integration compensates for visual localization errors during smooth pursuit eye movements. J. Neurophysiol. 85, 1914–1922, https://doi.org/10.1152/jn.2001.85.5.1914.Search in Google Scholar PubMed

Berger, M., Agha, N.S., and Gail, A. (2020). Wireless recording from unrestrained monkeys reveals motor goal encoding beyond immediate reach in frontoparietal cortex. Elife 9, e51322, https://doi.org/10.7554/elife.51322.Search in Google Scholar

Binda, P. and Morrone, M.C. (2018). Vision during saccadic eye movements. Annu. Rev. Vis. Sci. 4, 193–213, https://doi.org/10.1146/annurev-vision-091517-034317.Search in Google Scholar PubMed

Binda, P., Morrone, M.C., and Bremmer, F. (2012). Saccadic compression of symbolic numerical magnitude. PLoS One 7, e49587, https://doi.org/10.1371/journal.pone.0049587.Search in Google Scholar PubMed PubMed Central

Bremmer, F. (2000). Eye position effects in macaque area V4. Neuroreport 11, 1277–1283, https://doi.org/10.1097/00001756-200004270-00027.Search in Google Scholar PubMed

Bremmer, F. and Krekelberg, B. (2003). Seeing and acting at the same time: Challenges for brain (and) research. Neuron 38, 367–370, https://doi.org/10.1016/s0896-6273(03)00236-8.Search in Google Scholar PubMed

Bremmer, F., Pouget, A., and Hoffmann, K.-P. (1998). Eye position encoding in the macaque posterior parietal cortex. Eur. J. Neurosci. 10, 153–160, https://doi.org/10.1046/j.1460-9568.1998.00010.x.Search in Google Scholar PubMed

Bremmer, F., Schlack, A., Shah, N.J., Zafiris, O., Kubischik, M., Hoffmann, K.-P., Zilles, K., and Fink, G.R. (2001). Polymodal motion processing in posterior parietal and premotor cortex: A human fMRI study strongly implies equivalencies between humans and monkeys. Neuron 29, 287–296, https://doi.org/10.1016/s0896-6273(01)00198-2.Search in Google Scholar PubMed

Bremmer, F., Kubischik, M., Hoffmann, K.-P., and Krekelberg, B. (2009). Neural dynamics of saccadic suppression. J. Neurosci. 29, 12374–12383, https://doi.org/10.1523/jneurosci.2908-09.2009.Search in Google Scholar PubMed PubMed Central

Bremmer, F., Kubischik, M., Pekel, M., Hoffmann, K.-P., and Lappe, M. (2010). Visual selectivity for heading in monkey area MST. Exp. Brain Res. 200, 51–60, https://doi.org/10.1007/s00221-009-1990-3.Search in Google Scholar PubMed

Bremmer, F., Churan, J., and Lappe, M. (2017). Heading representations in primates are compressed by saccades. Nat. Commun. 8, 920, https://doi.org/10.1038/s41467-017-01021-5.Search in Google Scholar PubMed PubMed Central

Bueti, D. and Walsh, V. (2009). The parietal cortex and the representation of time, space, number and other magnitudes. Philos. Trans. R. Soc. Lond. B Biol. Sci. 364, 1831–1840, https://doi.org/10.1098/rstb.2009.0028.Search in Google Scholar PubMed PubMed Central

Burr, D.C., Morrone, M.C., and Ross, J. (1994). Selective suppression of the magnocellular visual pathway during saccadic eye movements. Nature 371, 511–513, https://doi.org/10.1038/371511a0.Search in Google Scholar PubMed

Conway, B.R. (2009). Color vision, cones, and color-coding in the cortex. Neuroscientist 15, 274–290.10.1177/1073858408331369Search in Google Scholar PubMed

Datta, S.R., Anderson, D.J., Branson, K., Perona, P., and Leifer, A. (2019). Computational neuroethology: A call to action. Neuron 104, 11–24, https://doi.org/10.1016/j.neuron.2019.09.038.Search in Google Scholar PubMed PubMed Central

Dehaene, S. and Brannon, E. (2011). Space, time and number in the brain: Searching for the foundations of mathematical thought. S. Dehaene and E. Brannon, eds. (Elsevier Academic Press: London).Search in Google Scholar

Diamond, M.R., Ross, J., and Morrone, M.C. (2000). Extraretinal control of saccadic suppression. J. Neurosci. 20, 3449–3455, https://doi.org/10.1523/jneurosci.20-09-03449.2000.Search in Google Scholar

Dowiasch, S., Blohm, G., and Bremmer, F. (2016). Neural correlate of spatial (mis-)localization during smooth eye movements. Eur. J. Neurosci. 44, 1846–1855, https://doi.org/10.1111/ejn.13276.Search in Google Scholar PubMed PubMed Central

Dowiasch, S., Meyer-Stender, S., Klingenhoefer, S., and Bremmer, F. (2020). Nonretinocentric localization of successively presented flashes during smooth pursuit eye movements. J. Vis. 20, 8, https://doi.org/10.1167/jov.20.4.8.Search in Google Scholar PubMed PubMed Central

Duhamel, J., Colby, C.L., and Goldberg, M.E. (1992). The updating of the representation of visual space in parietal cortex by intended eye movements. Science 255, 90–92, https://doi.org/10.1126/science.1553535.Search in Google Scholar PubMed

Duhamel, J., Bremmer, F., Ben Hamed, S., and Graf, W. (1997). Spatial invariance of visual receptive fields in parietal cortex neurons. Nature 389, 845–848, https://doi.org/10.1038/39865.Search in Google Scholar PubMed

Friston, K. (2018). Does predictive coding have a future? Nat. Neurosci. 21, 1019–1021, https://doi.org/10.1038/s41593-018-0200-7.Search in Google Scholar PubMed

Froehler, M.T. and Duffy, C.J. (2002). Cortical neurons encoding path and place: Where you go is where you are. Science 295, 2462–2465, https://doi.org/10.1126/science.1067426.Search in Google Scholar PubMed

Gibaldi, A. and Sabatini, S.P. (2021). The saccade main sequence revised: A fast and repeatable tool for oculomotor analysis. Behav. Res. Methods 53, 167–187, https://doi.org/10.3758/s13428-020-01388-2.Search in Google Scholar PubMed PubMed Central

Glasauer, S., Stein, A., Günther, A.L., Flanagin, V.L., Jahn, K., and Brandt, T. (2009). The effect of dual tasks in locomotor path integration. Ann. N. Y. Acad. Sci. 1164, 201–205, https://doi.org/10.1111/j.1749-6632.2009.03862.x.Search in Google Scholar PubMed

Hayhoe, M.M. and Ballard, D.H. (2005). Eye movements in natural behavior. Trends Cognit. Sci. 9, 188–194, https://doi.org/10.1016/j.tics.2005.02.009.Search in Google Scholar PubMed

Heinze, S., Narendra, A., and Cheung, A. (2018). Principles of insect path integration. Curr. Biol. 28, R1043–R1058, https://doi.org/10.1016/j.cub.2018.04.058.Search in Google Scholar PubMed PubMed Central

Helmholtz, H. (1867). Handbuch der physiologischen Optik (Leipzig: Voss).Search in Google Scholar

Hesse, P.N. and Bremmer, F. (2017). The SNARC effect in two dimensions: Evidence for a frontoparallel mental number plane. Vis. Res. 130, 85–96, https://doi.org/10.1016/j.visres.2016.10.007.Search in Google Scholar PubMed

Hesse, J.K. and Tsao, D.Y. (2020). The macaque face patch system: A turtle’s underbelly for the brain. Nat. Rev. Neurosci. 21, 695–716, https://doi.org/10.1038/s41583-020-00393-w.Search in Google Scholar PubMed

Hubel, D.H. and Wiesel, T.N. (1959). Receptive fields of single neurones in the cat’s striate cortex. J. Physiol. 148, 574–591, https://doi.org/10.1113/jphysiol.1959.sp006308.Search in Google Scholar PubMed PubMed Central

Huk, A.C., Dougherty, R.F., and Heeger, D.J. (2002). Retinotopy and functional subdivision of human areas MT and MST. J. Neurosci. 22, 7195–7205, https://doi.org/10.1523/jneurosci.22-16-07195.2002.Search in Google Scholar

Kaminiarz, A., Krekelberg, B., and Bremmer, F. (2007). Localization of visual targets during optokinetic eye movements. Vis. Res. 47, 869–878, https://doi.org/10.1016/j.visres.2006.10.015.Search in Google Scholar PubMed

Kaminiarz, A., Schlack, A., Hoffmann, K.-P., Lappe, M., and Bremmer, F. (2014). Visual selectivity for heading in the macaque ventral intraparietal area. J. Neurophysiol. 112, 2470–2480, https://doi.org/10.1152/jn.00410.2014.Search in Google Scholar PubMed

Khuvis, S., Yeagle, E.M., Norman, Y., Grossman, S., Malach, R., and Mehta, A.D. (2021). Face-selective units in human ventral temporal cortex reactivate during free recall. J. Neurosci. 41, 3386–3399, https://doi.org/10.1523/jneurosci.2918-19.2020.Search in Google Scholar PubMed PubMed Central

Konen, C.S., Kleiser, R., Wittsack, H.-J., Bremmer, F., and Seitz, R.J. (2004). The encoding of saccadic eye movements within human posterior parietal cortex. Neuroimage 22, 304–314, https://doi.org/10.1016/j.neuroimage.2003.12.039.Search in Google Scholar PubMed

Krekelberg, B., Kubischik, M., Hoffmann, K.-P., and Bremmer, F. (2003). Neural correlates of visual localization and perisaccadic mislocalization. Neuron 37, 537–545, https://doi.org/10.1016/s0896-6273(03)00003-5.Search in Google Scholar PubMed

Kunz, L., Wang, L., Lachner-Piza, D., Zhang, H., Brandt, A., Dümpelmann, M., Reinacher, P.C., Coenen, V.A., Chen, D., Wang, W.X., et al.. (2019). Hippocampal theta phases organize the reactivation of large-scale electrophysiological representations during goal-directed navigation. Sci. Adv. 5, 1–18, https://doi.org/10.1126/sciadv.aav8192.Search in Google Scholar PubMed PubMed Central

Lappe, M., Bremmer, F., Pekel, M., Thiele, A., and Hoffmann, K.-P. (1996). Optic flow processing in monkey STS: A theoretical and experimental approach. J. Neurosci. 16, 6265–6285, https://doi.org/10.1523/jneurosci.16-19-06265.1996.Search in Google Scholar

Lappe, M., Bremmer, F., and Van Den Berg, A.V.V. (1999). Perception of self motion from visual flow. Trends Cognit. Sci. 3, 329–336, https://doi.org/10.1016/s1364-6613(99)01364-9.Search in Google Scholar PubMed

Lappe, M., Awater, H., and Krekelberg, B. (2000). Postsaccadic visual references generate presaccadic compression of space. Nature 403, 892–895, https://doi.org/10.1038/35002588.Search in Google Scholar PubMed

Mao, D., Avila, E., Caziot, B., Laurens, J., Dickman, J.D., and Angelaki, D.E. (2021). Spatial modulation of hippocampal activity in freely moving macaques. Neuron 109, 3521–3534, https://doi.org/10.1016/j.neuron.2021.09.032.

Markov, N.T., Ercsey-Ravasz, M., Van Essen, D.C., Knoblauch, K., Toroczkai, Z., and Kennedy, H. (2013). Cortical high-density counterstream architectures. Science 342, 578–592, https://doi.org/10.1126/science.1238406.

Mathis, A., Mamidanna, P., Cury, K.M., Abe, T., Murthy, V.N., Mathis, M.W., and Bethge, M. (2018). DeepLabCut: Markerless pose estimation of user-defined body parts with deep learning. Nat. Neurosci. 21, 1281–1289, https://doi.org/10.1038/s41593-018-0209-y.

Matthis, J.S., Muller, K.S., Bonnen, K.L., and Hayhoe, M.M. (2022). Retinal optic flow during natural locomotion. PLoS Comput. Biol. 18, e1009575, https://doi.org/10.1371/journal.pcbi.1009575.

McNaughton, B.L., Battaglia, F.P., Jensen, O., Moser, E.I., and Moser, M.-B. (2006). Path integration and the neural basis of the “cognitive map.” Nat. Rev. Neurosci. 7, 663–678, https://doi.org/10.1038/nrn1932.

Morris, A.P., Kubischik, M., Hoffmann, K.-P., Krekelberg, B., and Bremmer, F. (2012). Dynamics of eye-position signals in the dorsal visual system. Curr. Biol. 22, 173–179, https://doi.org/10.1016/j.cub.2011.12.032.

Morris, A.P., Bremmer, F., and Krekelberg, B. (2013). Eye-position signals in the dorsal visual system are accurate and precise on short timescales. J. Neurosci. 33, 12395–12406, https://doi.org/10.1523/jneurosci.0576-13.2013.

Morris, A.P., Bremmer, F., and Krekelberg, B. (2016). The dorsal visual system predicts future and remembers past eye position. Front. Syst. Neurosci. 10, 9, https://doi.org/10.3389/fnsys.2016.00009.

Morrone, M.C., Ross, J., and Burr, D.C. (2005). Saccadic eye movements cause compression of time as well as space. Nat. Neurosci. 8, 950–954, https://doi.org/10.1038/nn1488.

Moser, E.I., Moser, M.-B., and McNaughton, B.L. (2017). Spatial representation in the hippocampal formation: A history. Nat. Neurosci. 20, 1448–1464, https://doi.org/10.1038/nn.4653.

Nicolas, G., Castet, E., Rabier, A., Kristensen, E., Dojat, M., and Guerin-Dugue, A. (2021). Neural correlates of intra-saccadic motion perception. J. Vis. 21, 19–24, https://doi.org/10.1167/jov.21.11.19.

Noel, J.-P. and Angelaki, D.E. (2022). Cognitive, systems, and computational neurosciences of the self in motion. Annu. Rev. Psychol. 73, 103–129, https://doi.org/10.1146/annurev-psych-021021-103038.

Poulter, S., Hartley, T., and Lever, C. (2018). The neurobiology of mammalian navigation. Curr. Biol. 28, R1023–R1042, https://doi.org/10.1016/j.cub.2018.05.050.

Robson, D.N. and Li, J.M. (2022). A dynamical systems view of neuroethology: Uncovering stateful computation in natural behaviors. Curr. Opin. Neurobiol. 73, 102517, https://doi.org/10.1016/j.conb.2022.01.002.

Ross, J., Morrone, M.C., and Burr, D.C. (1997). Compression of visual space before saccades. Nature 386, 598–601, https://doi.org/10.1038/386598a0.

Ross, J., Morrone, M.C., Goldberg, M.E., and Burr, D.C. (2001). Changes in visual perception at the time of saccades. Trends Neurosci. 24, 113–121, https://doi.org/10.1016/s0166-2236(00)01685-4.

Rucci, M. and Poletti, M. (2015). Control and functions of fixational eye movements. Annu. Rev. Vis. Sci. 1, 499–518, https://doi.org/10.1146/annurev-vision-082114-035742.

Sajad, A., Sadeh, M., and Crawford, J.D. (2020). Spatiotemporal transformations for gaze control. Physiol. Rep. 8, e14533, https://doi.org/10.14814/phy2.14533.

Schlack, A., Sterbing-D’Angelo, S.J., Hartung, K., Hoffmann, K.-P., and Bremmer, F. (2005). Multisensory space representations in the macaque ventral intraparietal area. J. Neurosci. 25, 4616–4625, https://doi.org/10.1523/jneurosci.0455-05.2005.

Schmitt, C., Baltaretu, B.R., Crawford, J.D., and Bremmer, F. (2020). A causal role of area hMST for self-motion perception in humans. Cereb. Cortex Commun. 1, 1–14, https://doi.org/10.1093/texcom/tgaa042.

Schmitt, C., Schwenk, J.C.B., Schütz, A., Churan, J., Kaminiarz, A., and Bremmer, F. (2021). Preattentive processing of visually guided self-motion in humans and monkeys. Prog. Neurobiol. 205, 102117, https://doi.org/10.1016/j.pneurobio.2021.102117.

Stefanics, G., Heinzle, J., Horváth, A.A., and Stephan, K.E. (2018). Visual mismatch and predictive coding: A computational single-trial ERP study. J. Neurosci. 38, 4020–4030, https://doi.org/10.1523/jneurosci.3365-17.2018.

Stringer, C., Pachitariu, M., Steinmetz, N., Reddy, C.B., Carandini, M., and Harris, K.D. (2019). Spontaneous behaviors drive multidimensional, brainwide activity. Science 364, 255, https://doi.org/10.1126/science.aav7893.

Sulpizio, V., Galati, G., Fattori, P., Galletti, C., and Pitzalis, S. (2020). A common neural substrate for processing scenes and egomotion-compatible visual motion. Brain Struct. Funct. 225, 2091–2110, https://doi.org/10.1007/s00429-020-02112-8.

Tukker, J.J., Beed, P., Brecht, M., Kempter, R., Moser, E.I., and Schmitz, D. (2022). Microcircuits for spatial coding in the medial entorhinal cortex. Physiol. Rev. 102, 653–688, https://doi.org/10.1152/physrev.00042.2020.

Ungerleider, L.G. and Mishkin, M. (1982). Two cortical visual systems. In: Analysis of Visual Behavior. D.J. Ingle, M.A. Goodale, and R.J.W. Mansfield, eds. (Cambridge, MA: MIT Press), pp. 549–586.

Wang, X., Fung, C.C.A., Guan, S., Wu, S., Goldberg, M.E., and Zhang, M. (2016). Perisaccadic receptive field expansion in the lateral intraparietal area. Neuron 90, 400–409, https://doi.org/10.1016/j.neuron.2016.02.035.

Warren, W.H. (2019). Non-Euclidean navigation. J. Exp. Biol. 222, jeb187971, https://doi.org/10.1242/jeb.187971.

Yin, M., Borton, D.A., Komar, J., Agha, N., Lu, Y., Li, H., Laurens, J., Lang, Y., Li, Q., Bull, C., et al. (2014). Wireless neurosensor for full-spectrum electrophysiology recordings during free behavior. Neuron 84, 1170–1182, https://doi.org/10.1016/j.neuron.2014.11.010.

Published Online: 2022-10-11
Published in Print: 2022-11-25

© 2022 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
