We can effectively navigate while driving through heavy fog, hard rain, or heat haze by separating these atmospheric effects from the optic flow caused by our own motion through the environment. How do we see objects through transparent layers?
Layer decomposition has traditionally been studied in luminance-defined uniform scenes with a focus on T-junctions, but real-life situations are more complex, e.g., driving through rain with drops on the windshield. Our lexicon for describing complex transparent layers is also very rich, as you can see from the word cloud showing the results of a short brainstorming session with our participants. Here we use complex 3D renderings to understand perceived transparent layers such as water and glass. To do this, we created diffuse and glossy Glavens and used the Eidolon Factory to deform images of these objects. We use local image deformations and two simple parameters to describe and classify perceptual adjustments for a rigid (glass) and a nonrigid (water) transparent layer. We also show clusters of image deformations that convey a transparent layer of water in natural images of textures.
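As a rough illustration of what a local image deformation of this kind looks like computationally, here is a minimal sketch (not the Eidolon Factory itself): it warps an image with a Gaussian-smoothed random displacement field, where `amplitude` and `scale` are hypothetical analogues of the two parameters controlling displacement magnitude and spatial coherence.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def deform(image, amplitude=4.0, scale=8.0, seed=0):
    """Warp a grayscale image with a smooth random displacement field.

    amplitude -- maximum pixel displacement (hypothetical parameter)
    scale     -- Gaussian sigma of the field, i.e. its spatial coherence
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # Smooth white noise to obtain a spatially coherent displacement field.
    dx = gaussian_filter(rng.standard_normal((h, w)), scale)
    dy = gaussian_filter(rng.standard_normal((h, w)), scale)
    # Rescale each field so the largest displacement equals `amplitude`.
    dx *= amplitude / (np.abs(dx).max() + 1e-12)
    dy *= amplitude / (np.abs(dy).max() + 1e-12)
    # Sample the image at the displaced coordinates (bilinear interpolation).
    yy, xx = np.mgrid[0:h, 0:w]
    coords = np.stack([yy + dy, xx + dx])
    return map_coordinates(image, coords, order=1, mode='reflect')

img = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)
warped = deform(img, amplitude=3.0, scale=6.0)
```

Small, smooth displacements of this sort tend to read as a nonrigid refracting layer (water-like), whereas other deformation families suggest rigid glass; the sketch is only meant to convey the general idea of parameterized local deformations.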
Dövencioğlu, D. N., van Doorn, A., Koenderink, J., & Doerschner, K. (2018). Seeing through transparent layers. Journal of Vision, 18(9):25. doi:10.1167/18.9.25