At any instant, various visual cues help us perceive the 3D world, for example binocular disparity and shading. How do these two cues interact, and what are the underlying neural mechanisms?
In this paper we investigate the interaction of pictorial and metric depth cues in 3D shape perception, specifically the integration of shading and disparity information to estimate depth. After psychophysically assessing how observers combine these two cues, we examine the neural implementation of this ability. We use machine learning algorithms to decode neural activation, isolating visual cortical regions associated with single-cue processing as well as with the integration of shading and disparity information. We find striking between-subject variability: some participants benefit from having both (congruent) shading and disparity cues, while others do not. Finally, we define a metric that shows a high correlation between the behavioural and neural patterns of these individual levels of benefit.
Dövencioğlu D.N., Ban H., Schofield A.J., Welchman A.E. (2013). Perceptual integration for qualitatively different 3D cues in the human brain. Journal of Cognitive Neuroscience, 25, 1-25. doi: 10.1162/jocn_a_00417
How do we see 3D surfaces in 2D images, even in very abstract cases such as the noisy pictures here? The one on the left appears to be a corrugated surface, whereas the one on the right seems like a flat striped surface.
This paper focuses on the role of learning to use low-level luminance and contrast features in layer decomposition when inferring 3D shape from shading. When inferring 3D shape, expert observers benefit from second-order cues to shading, for example to distinguish a flat striped surface under diffuse lighting from a corrugated uniform surface under directional lighting. These second-order luminance and contrast cues are indistinguishable to naive observers, especially at brief presentation durations. We show that these cues to shape can be learnt at a perceptual level after training. The training effects transfer partially to rigid rotations of the plaid stimuli, but they are specific to the trained spatial frequency and to the combination angles of the plaids. We conclude that luminance and contrast features help the visual system decompose images into layers at relatively early stages of visual processing.
Dövencioğlu D.N., Welchman A.E., Schofield A.J. (2013). Perceptual learning of second order cues for layer decomposition. Vision Research, 77, 1-9. doi: 10.1016/j.visres.2012.11.005