I am a cognitive neuroscientist trying to understand how the visual system works. I am originally from Denmark but have lived and worked in the US since 2007. I received my PhD in Cognitive Neuroscience from Dartmouth College in 2013, and since then I have been a postdoctoral scholar at Stanford University, working with Professor Tony Norcia.

My work focuses on mid-level visual processing, which begins in primary visual cortex ~100 ms after stimulus onset and then unfolds over the next several hundred ms across several mostly topographically organized visual areas. In this deceptively short time span, the visual system not only infers the shape, location and movement of the elements in the visual world, but also resolves the perceptual organization of the scene: figure-ground relationships, perceptual grouping, constancy operations and much more. These distinct classes of information are encoded by separate neural populations, yet they are deeply interdependent and in many cases represented at multiple stages of visual processing. This means that the basic representation of the visual scene, which provides the foundation for all higher-level vision and for acting in the world, is in fact instantiated in a complex, interrelated network of brain areas. I use psychophysics, EEG and functional MRI to probe this network and to enhance our understanding of the visual brain as both an information-processing machine and the generator of our vivid experience of the world. My work builds on ideas going back as far as the Gestalt psychologists of the early 20th century, but has direct implications for cutting-edge applications in computer vision and the treatment of visual and neurological disorders.

All Publications

  • Revisiting the functional significance of binocular cues for perceiving motion-in-depth. Nature Communications. Kohler, P. J., Meredith, W. J., Norcia, A. M. 2018; 9 (1): 3511


    Binocular differencing of the spatial cues required for perceiving depth relationships is associated with decreased sensitivity to the corresponding retinal image displacements, whereas binocular summation of contrast signals increases sensitivity. Here, we investigated this divergence in sensitivity by making direct neural measurements of responses to suprathreshold motion in human adults and 5-month-old infants, using steady-state visually evoked potentials. Interocular differences in retinal image motion generated suppressed response functions and correspondingly elevated perceptual thresholds compared to motion matched between the two eyes. This suppression was of equal strength for horizontal and vertical motion and therefore not specific to the perception of motion-in-depth. Suppression was strongly dependent on the presence of spatial references in the image and was highly immature in infants. It appears to be the manifestation of a succession of spatial and interocular opponency operations that occur at an intermediate processing stage, either before or in parallel with the extraction of motion-in-depth.

    View details for DOI 10.1038/s41467-018-05918-7

    View details for PubMedID 30158523

  • Measuring Integration Processes in Visual Symmetry with Frequency-Tagged EEG. Scientific Reports. Alp, N., Kohler, P., Kogo, N., Wagemans, J., Norcia, A. 2018; 8: 6969


    Symmetry is a highly salient feature of the natural world that requires integration of visual features over space. The aim of the current work is to isolate dynamic neural correlates of symmetry-specific integration processes. We measured steady-state visual evoked potentials (SSVEPs) as participants viewed symmetric patterns comprised of distinct spatial regions presented at two different frequencies (f1 and f2). We measured intermodulation components, which have been shown to reflect non-linear processing at the neural level and thus indicate integration of spatially separated parts of the pattern. We generated a wallpaper pattern containing two reflection symmetry axes by tiling the plane with a two-fold reflection-symmetric unit pattern, and split each unit pattern diagonally into separate parts that could be presented at different frequencies. We compared SSVEPs for wallpaper and control patterns that were matched in terms of translation and rotation symmetry, but for which reflection symmetry could emerge only in the wallpaper pattern, through integration of the image pairs. We found that low-frequency intermodulation components differed between the wallpaper and control stimuli, indicating the presence of integration mechanisms specific to reflection symmetry. These results show that spatial integration specific to symmetry perception can be isolated through a combination of stimulus design and the frequency-tagging approach.
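The intermodulation logic behind frequency tagging can be illustrated with a toy simulation (not the analysis code from the paper): inputs tagged at hypothetical frequencies f1 and f2 produce energy at an intermodulation frequency such as f1 + f2 only if they are combined non-linearly, here by an illustrative squaring non-linearity.

```python
import math

def amplitude_at(signal, freq, fs):
    """Amplitude of a signal at one frequency, via discrete Fourier projection."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    return 2 * math.hypot(re, im) / n

fs, dur = 100, 10            # sampling rate (Hz) and duration (s) -- illustrative values
f1, f2 = 2.0, 3.0            # hypothetical tag frequencies for the two image parts
t = [i / fs for i in range(fs * dur)]

s1 = [math.sin(2 * math.pi * f1 * ti) for ti in t]   # part tagged at f1
s2 = [math.sin(2 * math.pi * f2 * ti) for ti in t]   # part tagged at f2

linear = [a + b for a, b in zip(s1, s2)]             # no integration: pure summation
nonlinear = [(a + b) ** 2 for a, b in zip(s1, s2)]   # integration: squaring non-linearity

# A purely linear response has no energy at f1 + f2; the non-linearity
# creates intermodulation components (e.g. at f1 + f2 and f2 - f1).
print(round(amplitude_at(linear, f1 + f2, fs), 3))     # 0.0
print(round(amplitude_at(nonlinear, f1 + f2, fs), 3))  # 1.0
```

In the experiment itself the non-linearity is supplied by neural integration, and the presence of intermodulation components in the EEG spectrum is taken as the signature of that integration.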

    View details for DOI 10.1038/s41598-018-24513-w

    View details for Web of Science ID 000431291500021

    View details for PubMedID 29725022

    View details for PubMedCentralID PMC5934372

  • Dynamics of perceptual decisions about symmetry in visual cortex. NeuroImage. Kohler, P. J., Cottereau, B. R., Norcia, A. M. 2018; 167: 316–330


    Neuroimaging studies have identified multiple extra-striate visual areas that are sensitive to symmetry in planar images (Kohler et al., 2016; Sasaki et al., 2005). Here, we investigated which of these areas are directly involved in perceptual decisions about symmetry by recording high-density EEG in participants (n = 25) who made rapid judgments about whether an exemplar image contained rotation symmetry or not. Stimulus-locked sensor-level analysis revealed symmetry-specific activity that increased with increasing order of rotation symmetry. Response-locked analysis identified activity occurring between 600 and 200 ms before the button press that was directly related to perceptual decision making. We then used fMRI-informed EEG source imaging to characterize the dynamics of symmetry-specific activity within an extended network of areas in visual cortex. The most consistent cortical source of the stimulus-locked activity was VO1, a topographically organized area in ventral visual cortex that was highly sensitive to symmetry in a previous study (Kohler et al., 2016). Importantly, VO1 activity also contained a strong decision-related component, suggesting that this area plays a crucial role in perceptual decisions about symmetry. Other candidate areas, such as lateral occipital cortex, had weak stimulus-locked symmetry responses and showed no evidence of correlation with response timing.

    View details for DOI 10.1016/j.neuroimage.2017.11.051

    View details for Web of Science ID 000427529200029

    View details for PubMedID 29175495

  • Distinct Representations of Magnitude and Spatial Position within Parietal Cortex during Number-Space Mapping. Journal of Cognitive Neuroscience. Kanayet, F. J., Mattarella-Micke, A., Kohler, P. J., Norcia, A. M., McCandliss, B. D., McClelland, J. L. 2018; 30 (2): 200–218


    Mapping numbers onto space is foundational to mathematical cognition. These cognitive operations are often conceptualized in the context of a "mental number line" and involve multiple brain regions in or near the intraparietal sulcus (IPS) that have been implicated in both numerical and spatial cognition. Here we examine possible differentiation of function within these brain areas in relating numbers to spatial positions. By isolating the planning phase of a number line task and introducing spatiotopic mapping tools from fMRI into mental number line research, we are able to focus our analysis on the neural activity of areas in anterior IPS (aIPS) previously associated with number processing, and on spatiotopically organized areas in and around posterior IPS (pIPS), while participants prepare to place a number on a number line. Our results support the view that the nonpositional magnitude of a numerical symbol is coded in aIPS, whereas the position of a number in space is coded in posterior areas of IPS. By focusing on the planning phase, we are able to isolate activation related to the cognitive, rather than the sensory-motor, aspects of the task. To allow the separation of spatial position from magnitude, we tested both a standard positive number line (0 to 100) and a zero-centered mixed number line (-100 to 100). We found evidence of a functional dissociation between aIPS and pIPS: activity in aIPS was associated with a landmark distance effect not modulated by spatial position, whereas activity in pIPS revealed a contralateral preference effect.

    View details for DOI 10.1162/jocn_a_01199

    View details for Web of Science ID 000419005300006

    View details for PubMedID 29040015

  • Evidence for long-range spatiotemporal interactions in infant and adult visual cortex. Journal of Vision. Norcia, A. M., Pei, F., Kohler, P. J. 2017; 17 (6): 12


    The development of spatiotemporal interactions giving rise to classical receptive field properties has been well studied in animal models, but little is known about the development of putative nonclassical mechanisms in any species. Here we used visual evoked potentials to study the developmental status of spatiotemporal interactions for stimuli that were biased to engage long-range spatiotemporal integration mechanisms. We compared responses to widely spaced stimuli presented either in temporal succession or at the same time. The former configuration elicits a percept of apparent motion in adults but the latter does not. Component flash responses were summed to make a linear prediction (no spatiotemporal interaction) for comparison with the measured evoked responses to sequential or simultaneous flash conditions. In adults, linear summation of the separate flash responses measured with 40% contrast stimuli predicted sequential flash responses twice as large as those measured, indicating that the response measured under apparent motion conditions is subadditive. Simultaneous-flash responses at the same spatial separation were also subadditive, but substantially less so. The subadditivity in both cases could be modeled as a simple multiplicative gain term across all electrodes and time points. In infants aged 3-8 months, responses to the stimuli used in adults were similar to their linear predictions at 40% contrast, but the responses measured at 80% contrast resembled the subadditive responses of the adults for both sequential and simultaneous flash conditions. We interpret the developmental data as indicating that adult-like long-range spatiotemporal interactions can be demonstrated by 3-8 months, once stimulus contrast is high enough.
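The linear-prediction logic described above can be sketched with made-up waveforms: sum the two component flash responses to form the prediction, then estimate a single multiplicative gain relating the prediction to the measured response, in the spirit of the paper's subadditivity model (the numbers below are illustrative, not measured data).

```python
def fit_gain(measured, predicted):
    """Least-squares estimate of a single multiplicative gain g
    such that measured ~= g * predicted."""
    num = sum(m * p for m, p in zip(measured, predicted))
    den = sum(p * p for p in predicted)
    return num / den

# Hypothetical evoked-response waveforms for the two component flashes
r1 = [0.0, 1.0, 2.0, 1.0, 0.0]
r2 = [0.0, 0.5, 1.5, 2.0, 0.5]

linear_prediction = [a + b for a, b in zip(r1, r2)]  # no spatiotemporal interaction

# A subadditive "measured" response: here exactly half the linear prediction,
# mimicking the ~2x overprediction reported for the sequential (apparent-motion) condition
measured = [0.5 * x for x in linear_prediction]

print(fit_gain(measured, linear_prediction))  # 0.5 -> subadditive (g < 1)
```

A gain of 1 would indicate linear summation (no interaction); gains below 1 quantify the suppression attributed to long-range spatiotemporal interactions.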

    View details for DOI 10.1167/17.6.12

    View details for Web of Science ID 000405348800012

    View details for PubMedID 28622700

    View details for PubMedCentralID PMC5477630

  • Motion-Induced Position Shifts Activate Early Visual Cortex. Frontiers in Neuroscience. Kohler, P. J., Cavanagh, P., Tse, P. U. 2017; 11: 168


    The ability to correctly determine the position of objects in space is a fundamental task of the visual system. The perceived position of briefly presented static objects can be influenced by nearby moving contours, as demonstrated by various illusions collectively known as motion-induced position shifts. Here we use a stimulus that produces a particularly strong effect of motion on perceived position. We test whether several regions of interest (ROIs), at different stages of visual processing, encode the perceived rather than the retinotopically veridical position. Specifically, we collect functional MRI data while participants experience motion-induced position shifts, and use a multivariate pattern analysis approach to compare the activation patterns evoked by illusory position shifts with those evoked by matched physical shifts. We find that the illusory perceived position is represented at the earliest stages of the visual processing stream, including primary visual cortex. Surprisingly, we find no evidence of percept-based encoding of position in visual areas beyond area V3. This result suggests that while higher-level visual areas are likely involved in position encoding, early visual cortex also plays an important role.

    View details for DOI 10.3389/fnins.2017.00168

    View details for Web of Science ID 000398422600001

    View details for PubMedID 28420952

    View details for PubMedCentralID PMC5376622

  • Representation of Maximally Regular Textures in Human Visual Cortex. Journal of Neuroscience. Kohler, P. J., Clarke, A., Yakovleva, A., Liu, Y., Norcia, A. M. 2016; 36 (3): 714–729


    Naturalistic textures with an intermediate degree of statistical regularity can capture key structural features of natural images (Freeman and Simoncelli, 2011). V2 and later visual areas are sensitive to these features, while primary visual cortex is not (Freeman et al., 2013). Here we expand on this work by investigating a class of textures that have maximal formal regularity, the 17 crystallographic wallpaper groups (Fedorov, 1891). We used texture stimuli from four of the groups, which differ in the maximum order of rotation symmetry they contain, and measured neural responses in human participants using functional MRI and high-density EEG. We found that cortical area V3 has a parametric representation of the rotation symmetries in the textures that is not present in either V1 or V2, the first discovery of a stimulus property that differentiates processing in V3 from that of lower-level areas. Parametric responses were also seen in higher-order ventral stream areas V4, VO1 and lateral occipital complex (LOC), but not in dorsal stream areas. The parametric response pattern was replicated in the EEG data, and source localization indicated that responses in V3 and V4 lead responses in LOC, consistent with a feedforward mechanism. Finally, we presented our stimuli to four well-developed feedforward models and found that none of them could account for our results. Our results highlight structural regularity as an important stimulus dimension for distinguishing the early stages of visual processing, and suggest a previously unrecognized role for V3 in the visual form-processing hierarchy. Significance statement: Hierarchical processing is a fundamental organizing principle in visual neuroscience, with each successive processing stage being sensitive to increasingly complex stimulus properties. Here, we probe the encoding hierarchy in human visual cortex using a class of visual textures, wallpaper patterns, that are maximally regular. Through a combination of fMRI and EEG source imaging, we find specific responses to texture regularity that depend parametrically on the maximum order of rotation symmetry in the textures. These parametric responses are seen in several areas of the ventral visual processing stream, as well as in area V3, but not in V1 or V2. This is the first demonstration of a stimulus property that differentiates processing in V3 from that of lower-level visual areas.
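A minimal sketch of what "maximum order of rotation symmetry" means, using hypothetical point coordinates rather than the actual wallpaper stimuli: a pattern has n-fold rotation symmetry if rotating it by 360/n degrees maps it onto itself, and its maximum order is the largest such n.

```python
import math

def has_rotation_symmetry(points, n, tol=1e-6):
    """True if rotating the point set by 360/n degrees about the origin
    maps it onto itself (n-fold rotation symmetry)."""
    theta = 2 * math.pi / n
    c, s = math.cos(theta), math.sin(theta)
    rotated = [(c * x - s * y, s * x + c * y) for x, y in points]
    def close(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1]) < tol
    return all(any(close(r, p) for p in points) for r in rotated)

def max_rotation_order(points, orders=(6, 4, 3, 2)):
    """Maximum order of rotation symmetry among the candidate orders probed."""
    for n in orders:
        if has_rotation_symmetry(points, n):
            return n
    return 1

# A square of points has 4-fold (but not 6-fold) rotation symmetry
square = [(1, 0), (0, 1), (-1, 0), (0, -1)]
print(max_rotation_order(square))  # 4
```

The wallpaper groups used in the study differ along exactly this dimension, which is what the parametric V3 response tracks.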

    View details for DOI 10.1523/JNEUROSCI.2962-15.2016

    View details for Web of Science ID 000368355100008

    View details for PubMedID 26791203

  • Motion-induced position shifts are influenced by global motion, but dominated by component motion. Vision Research. Kohler, P. J., Cavanagh, P., Tse, P. U. 2015; 110: 93–99


    Object motion and position have long been thought to involve largely independent visual computations. However, the motion-induced position shift (Eagleman & Sejnowski, 2007) shows that the perceived position of a briefly presented static object can be influenced by nearby moving contours. Here we combine a particularly strong example of this illusion with a bistable global motion stimulus to compare the relative effects of global and component motion on the shift in perceived position. We used a horizontally oscillating diamond (Lorenceau & Shiffrar, 1992) that produces two possible global directions (left and right when fully visible versus up and down when vertices are occluded by vertical bars) as well as the oblique component motion orthogonal to each contour. To measure the motion-induced shift we flashed a test dot on the contour as the diamond reversed direction (Cavanagh & Anstis, 2013). Although the global motion had a highly significant influence on the direction and size of the motion-induced position shift, the perceived displacement of the probe was closer to the direction of the component motion. These findings show that while global motion can clearly influence position shifts, it is the component motion that dominates in setting the position shift. This is true even though the perceived motion is in the global direction and the component motion is not consciously experienced. This suggests that perceived position is influenced by motion signals that arise earlier in time or earlier in processing compared to the stage at which the conscious experience of motion is determined.

    View details for DOI 10.1016/j.visres.2015.03.003

    View details for Web of Science ID 000354149100011

    View details for PubMedID 25782364

  • The artist emerges: Visual art learning alters neural structure and function. NeuroImage. Schlegel, A., Alexander, P., Fogelson, S. V., Li, X., Lu, Z., Kohler, P. J., Riley, E., Tse, P. U., Meng, M. 2015; 105: 440–451


    How does the brain mediate visual artistic creativity? Here we studied behavioral and neural changes in drawing and painting students compared to students who did not study art. We investigated three aspects of cognition vital to many visual artists: creative cognition, perception, and perception-to-action. We found that the art students became more creative via the reorganization of prefrontal white matter but did not find any significant changes in perceptual ability or related neural activity in the art students relative to the control group. Moreover, the art students improved in their ability to sketch human figures from observation, and multivariate patterns of cortical and cerebellar activity evoked by this drawing task became increasingly separable between art and non-art students. Our findings suggest that the emergence of visual artistic skills is supported by plasticity in neural pathways that enable creative cognition and mediate perceptuomotor integration.

    View details for DOI 10.1016/j.neuroimage.2014.11.014

    View details for Web of Science ID 000346050300040

    View details for PubMedID 25463452

  • Extrastriate Visual Areas Integrate Form Features over Space and Time to Construct Representations of Stationary and Rigidly Rotating Objects. Journal of Cognitive Neuroscience. McCarthy, J. D., Kohler, P. J., Tse, P. U., Caplovitz, G. P. 2015: 1–17


    When an object moves behind a bush, for example, its visible fragments are revealed at different times and locations across the visual field. Nonetheless, a whole moving object is perceived. Unlike traditional modal and amodal completion mechanisms known to support spatial form integration when all parts of a stimulus are simultaneously visible, relatively little is known about the neural substrates of the spatiotemporal form integration (STFI) processes involved in generating coherent object representations from a succession of visible fragments. We use fMRI to identify brain regions involved in two mechanisms supporting the representation of stationary and rigidly rotating objects whose form features are shown in succession: STFI and position updating. STFI allows past and present form cues to be integrated over space and time into a coherent object even when the object is not visible in any given frame. STFI can occur whether or not the object is moving. Position updating allows us to perceive a moving object, whether rigidly rotating or translating, even when its form features are revealed at different times and locations in space. Our results suggest that STFI is mediated by visual regions beyond V1 and V2. Moreover, although widespread cortical activation has been observed for other motion percepts derived solely from form-based analyses [Tse, P. U. Neural correlates of transformational apparent motion. Neuroimage, 31, 766-773, 2006; Krekelberg, B., Vatakis, A., & Kourtzi, Z. Implied motion from form in the human visual cortex. Journal of Neurophysiology, 94, 4373-4386, 2005], increased responses for the position updating that leads to rigidly rotating object representations were only observed in visual areas KO and possibly hMT+, indicating that this is a distinct and highly specialized type of processing.

    View details for DOI 10.1162/jocn_a_00850

    View details for PubMedID 26226075

  • Unconscious neural processing differs with method used to render stimuli invisible. Frontiers in Psychology. Fogelson, S. V., Kohler, P. J., Miller, K. J., Granger, R., Tse, P. U. 2014; 5


    Visual stimuli can be kept from awareness using various methods. The extent of processing that a given stimulus receives in the absence of awareness is typically used to make claims about the role of consciousness more generally. The neural processing elicited by a stimulus, however, may also depend on the method used to keep it from awareness, and not only on whether the stimulus reaches awareness. Here we report that the method used to render an image invisible has a dramatic effect on how category information about the unseen stimulus is encoded across the human brain. We collected fMRI data while subjects viewed images of faces and tools that were rendered invisible using either continuous flash suppression (CFS) or chromatic flicker fusion (CFF). In a third condition, we presented the same images under normal, fully visible viewing conditions. We found that category information about visible images could be extracted from patterns of fMRI responses throughout areas of neocortex known to be involved in face or tool processing. However, category information about stimuli kept from awareness using CFS could be recovered exclusively within occipital cortex, whereas information about stimuli kept from awareness using CFF was also decodable within temporal and frontal regions. We conclude that unconsciously presented objects are processed differently depending on how they are rendered subjectively invisible. Caution should therefore be used when making generalizations, on the basis of any one method, about the neural basis of consciousness or the extent of information processing without consciousness.

    View details for DOI 10.3389/fpsyg.2014.00601

    View details for Web of Science ID 000338669100001

    View details for PubMedCentralID PMC4058905

  • The global slowdown effect: Why does perceptual grouping reduce perceived speed? Attention, Perception, & Psychophysics. Kohler, P. J., Caplovitz, G. P., Tse, P. U. 2014; 76 (3): 780–792


    The percept of four rotating dot pairs is bistable. The "local percept" is of four pairs of dots rotating independently. The "global percept" is of two large squares translating over one another (Anstis & Kim, 2011). We have previously demonstrated (Kohler, Caplovitz, & Tse, 2009) that the global percept appears to move more slowly than the local percept. Here, we investigate and rule out several hypotheses for why this may be the case. First, we demonstrate that the global slowdown effect does not occur because the global percept is of larger objects than the local percept. Second, we show that the global slowdown effect is not related to rotation-specific detectors that may be more active in the local than in the global percept. Third, we find that the effect is also not due to a reduction of image elements during grouping and can occur with a stimulus very different from the one used previously. This suggests that the effect may reflect a general property of perceptual grouping. Having ruled out these possibilities, we suggest that the global slowdown effect may arise from emergent motion signals that are generated by the moving dots, which are interpreted as the ends of "barbell bars" in the local percept or the corners of the illusory squares in the global percept. Alternatively, the effect could be the result of noisy sources of motion information that arise from perceptual grouping that, in turn, increase the influence of Bayesian priors toward slow motion (Weiss, Simoncelli, & Adelson, 2002).
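The Bayesian alternative mentioned last can be made concrete with the standard Gaussian form of the slow-motion prior (Weiss, Simoncelli, & Adelson, 2002): with a zero-mean Gaussian prior over speed, noisier motion evidence pulls the posterior estimate further toward zero, i.e., toward slower perceived motion. The numbers below are purely illustrative.

```python
def posterior_speed(v_measured, sigma_likelihood, sigma_prior):
    """Posterior mean for a Gaussian likelihood N(v_measured, sigma_likelihood^2)
    combined with a zero-mean Gaussian prior over speed (favoring slow motion)."""
    w = sigma_prior ** 2 / (sigma_prior ** 2 + sigma_likelihood ** 2)
    return w * v_measured

v = 10.0  # hypothetical stimulus speed (arbitrary units)

# Reliable motion signal: the percept stays close to the true speed
print(posterior_speed(v, sigma_likelihood=1.0, sigma_prior=5.0))  # ~9.6

# Noisier motion signal (as grouping might produce): the percept slows down
print(posterior_speed(v, sigma_likelihood=5.0, sigma_prior=5.0))  # 5.0
```

Under this account, perceptual grouping need not change the motion signals' mean, only their reliability, to produce a global slowdown.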

    View details for DOI 10.3758/s13414-013-0607-x

    View details for Web of Science ID 000334521300013

    View details for PubMedID 24448695

  • Pattern classification precedes region-average hemodynamic response in early visual cortex. NeuroImage. Kohler, P. J., Fogelson, S. V., Reavis, E. A., Meng, M., Guntupalli, J. S., Hanke, M., Halchenko, Y. O., Connolly, A. C., Haxby, J. V., Tse, P. U. 2013; 78: 249–260


    How quickly can information about the neural response to a visual stimulus be detected in the hemodynamic response measured using fMRI? Multi-voxel pattern analysis (MVPA) uses pattern classification to detect subtle stimulus-specific information from patterns of responses among voxels, including information that cannot be detected in the average response across a given brain region. Here we use MVPA in combination with rapid temporal sampling of the fMRI signal to investigate the temporal evolution of classification accuracy and its relationship to the average regional hemodynamic response. In primary visual cortex (V1) stimulus information can be detected in the pattern of voxel responses more than a second before the average hemodynamic response of V1 deviates from baseline, and classification accuracy peaks before the peak of the average hemodynamic response. Both of these effects are restricted to early visual cortex, with higher level areas showing no difference or, in some cases, the opposite temporal relationship. These results have methodological implications for fMRI studies using MVPA because they demonstrate that information can be decoded from hemodynamic activity more quickly than previously assumed.
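The contrast between pattern-based and region-average information can be sketched with synthetic data (a nearest-centroid classifier standing in for MVPA; the conditions, voxel counts and noise level are all hypothetical): two conditions with identical mean activation but opposite voxel patterns are invisible to the region average, yet easy to decode from the multi-voxel pattern.

```python
import random

random.seed(0)

def region_average(pattern):
    """Mean response across voxels, as in a conventional ROI analysis."""
    return sum(pattern) / len(pattern)

def nearest_centroid_accuracy(train, test):
    """Classify each test pattern by its closer class centroid (MVPA-style)."""
    centroids = {}
    for label, patterns in train.items():
        n = len(patterns)
        centroids[label] = [sum(p[i] for p in patterns) / n
                            for i in range(len(patterns[0]))]
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    correct, total = 0, 0
    for label, patterns in test.items():
        for p in patterns:
            pred = min(centroids, key=lambda c: dist(p, centroids[c]))
            correct += pred == label
            total += 1
    return correct / total

# Two hypothetical conditions with opposite voxel preferences but identical means:
# condition "A" activates even-indexed voxels, condition "B" odd-indexed voxels.
def make_pattern(cond, n_vox=20, noise=0.3):
    base = [1.0 if (v % 2 == 0) == (cond == "A") else 0.0 for v in range(n_vox)]
    return [b + random.gauss(0, noise) for b in base]

train = {c: [make_pattern(c) for _ in range(20)] for c in "AB"}
test = {c: [make_pattern(c) for _ in range(20)] for c in "AB"}

# The region-average response cannot separate the conditions...
avg_a = sum(region_average(p) for p in test["A"]) / 20
avg_b = sum(region_average(p) for p in test["B"]) / 20
print(round(avg_a, 2), round(avg_b, 2))        # both near 0.5

# ...but the voxel-pattern classifier decodes them easily.
print(nearest_centroid_accuracy(train, test))  # near 1.0
```

This is why pattern classification can detect stimulus information even at time points where the average hemodynamic response has not yet deviated from baseline.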

    View details for DOI 10.1016/j.neuroimage.2013.04.019

    View details for Web of Science ID 000320488900025

    View details for PubMedID 23587693

  • Effects of attention on visual experience during monocular rivalry. Vision Research. Reavis, E. A., Kohler, P. J., Caplovitz, G. P., Wheatley, T. P., Tse, P. U. 2013; 83: 76–81


    There is a long-running debate over the extent to which volitional attention can modulate the appearance of visual stimuli. Here we use monocular rivalry between afterimages to explore the effects of attention on the contents of visual experience. In three experiments, we demonstrate that attended afterimages are seen for longer periods, on average, than unattended afterimages. This occurs both when a feature of the afterimage is attended directly and when a frame surrounding the afterimage is attended. The results of these experiments show that volitional attention can dramatically influence the contents of visual experience.

    View details for DOI 10.1016/j.visres.2013.03.002

    View details for Web of Science ID 000318202300009

    View details for PubMedID 23499978

  • Network structure and dynamics of the mental workspace. Proceedings of the National Academy of Sciences. Schlegel, A., Kohler, P. J., Fogelson, S. V., Alexander, P., Konuthula, D., Tse, P. U. 2013; 110 (40): 16277–16282
  • Associations between auditory pitch and visual elevation do not depend on language: Evidence from a remote population. Perception. Parkinson, C., Kohler, P. J., Sievers, B., Wheatley, T. 2012; 41 (7): 854–861


    Associations between auditory pitch and visual elevation are widespread in many languages, and behavioral associations have been extensively documented between height and pitch among speakers of those languages. However, it remains unclear whether perceptual correspondences between auditory pitch and visual elevation inform these linguistic associations, or merely reflect them. We probed this cross-modal mapping in members of a remote Kreung hill tribe in northeastern Cambodia who do not use spatial language to describe pitch. Participants viewed shapes rising or falling in space while hearing sounds either rising or falling in pitch, and reported on the auditory change. Associations between pitch and vertical position in the Kreung were similar to those demonstrated in populations where pitch is described in terms of spatial height. These results suggest that associations between visual elevation and auditory pitch can arise independently of language. Thus, widespread linguistic associations between pitch and elevation may reflect universally predisposed perceptual correspondences.

    View details for DOI 10.1068/p7225

    View details for Web of Science ID 000310184600008

    View details for PubMedID 23155736

  • Rotational and translational motion interact independently with form. Vision Research. Porter, K. B., Caplovitz, G. P., Kohler, P. J., Ackerman, C. M., Tse, P. U. 2011; 51 (23-24): 2478–2487


    Do the mechanisms that underlie the perception of translational and rotational object motion show evidence of independent processing? By probing the perceived speed of translating and/or rotating objects, we find that an object's form contributes in independent ways to the processing of translational and rotational motion: In the context of translational motion, it has been shown that the more elongated an object is along its direction of motion, the faster it is perceived to translate; in the context of rotational motion, it has been shown that the sharper the maxima of curvature along an object's contour, the faster it appears to rotate. Here we demonstrate that such rotational form-motion interactions are due solely to the rotational component of combined rotational and translational motion. We conclude that the perception of rotational motion relies on form-motion interactions that are independent of the processing underlying translational motion.

    View details for DOI 10.1016/j.visres.2011.10.005

    View details for Web of Science ID 000297907600017

    View details for PubMedID 22024049

  • Motion fading is driven by perceived, not actual angular velocity. Vision Research. Kohler, P. J., Caplovitz, G. P., Hsieh, P., Sun, J., Tse, P. U. 2010; 50 (11): 1086–1094


    After prolonged viewing of a slowly drifting or rotating pattern under strict fixation, the pattern appears to slow down and then momentarily stop. Here we examine the relationship between such 'motion fading' and perceived angular velocity. Using several different dot patterns that generate emergent virtual contours, we demonstrate that whenever there is a difference in the perceived angular velocity of two patterns of dots that are in fact rotating at the same angular velocity, there is also a difference in the time to undergo motion fading for those two patterns. Conversely, whenever two patterns show no difference in perceived angular velocity, even if in fact rotating at different angular velocities, we find no difference in the time to undergo motion fading. Thus, motion fading is driven by the perceived rather than actual angular velocity of a rotating stimulus.

    View details for DOI 10.1016/j.visres.2010.03.023

    View details for Web of Science ID 000278071800010

    View details for PubMedID 20371254

  • The whole moves less than the spin of its parts. Attention, Perception, & Psychophysics. Kohler, P. J., Caplovitz, G. P., Tse, P. U. 2009; 71 (4): 675–679


    When individually moving elements in the visual scene are perceptually grouped together into a coherently moving object, they can appear to slow down. In the present article, we show that the perceived speed of a particular global-motion percept is not dictated completely by the speed of the local moving elements. We investigated a stimulus that leads to bistable percepts, in which local and global motion may be perceived in an alternating fashion. Four rotating dot pairs, when arranged into a square-like configuration, may be perceived either locally, as independently rotating dot pairs, or globally, as two large squares translating along overlapping circular trajectories. Using a modified version of this stimulus, we found that the perceptually grouped squares appeared to move more slowly than the locally perceived rotating dot pairs, suggesting that perceived motion magnitude is computed following a global analysis of form. Supplemental demos related to this article are available online.

    View details for DOI 10.3758/APP.71.4.675

    View details for Web of Science ID 000266258100002

    View details for PubMedID 19429950

  • Therapeutic effects of a restraint procedure on posttraumatic place learning in fimbria-fornix transected rats. Brain Research. Mala, H., Castro, M. R., Knippel, J., Kohler, P. J., Lassen, P., Mogensen, J. 2008; 1217: 221–231


    Restraint procedures have been shown to influence neural processes in the brain (dendritic changes, changes in the expression of neurotrophins, etc.) as well as to alter behavioural performance. While many studies report deleterious effects of this procedure in normal animals, there are also indications of positive effects in the context of brain injury. To address the issue from the perspective of posttraumatic functional recovery, we studied six experimental groups of rats: three groups undergoing a fimbria-fornix transection, and three groups remaining neurally intact. Within the lesioned and intact groups, respectively, one group of animals was subjected to an 8-day restraint procedure (2 h daily) that ended immediately before the infliction of trauma; another group was subjected to the same procedure starting immediately after the infliction of trauma; and one group was not subjected to the restraint procedure at all. After a brief postoperative pause, the animals were tested on their acquisition of an 8-arm radial-maze place learning task, and the effects of the restraint procedure on task acquisition were evaluated. The results show that within the neurally intact groups, the administration of this procedure had no effect at all. However, the lesioned groups that were subjected to the restraint procedure showed significantly improved acquisition of the task compared to the lesioned animals that did not undergo it. The improved task performance suggests a therapeutic effect of this manipulation on functional recovery after a mechanical trauma.

    View details for DOI 10.1016/j.brainres.2008.04.005

    View details for Web of Science ID 000257636300023

    View details for PubMedID 18501337