Visual Perception and Image Transformations

During my postdoctoral research fellowship at McGill Vision Research (MVR), I investigated the question "How sensitive are we to distortions in natural scenes?". The research used images subjected to affine and noise transformations. Affine transformations preserve straight lines and parallelism within an image regardless of the level of transformation. Some of these transformations, such as translation and rotation, are a natural part of our visual experience, whereas others, such as shear, are not. Human observers were asked to discriminate pairs of natural scenes in which one image was a transformed version of the other. From these results we formulated a perceptual model that predicts when humans detect the different kinds of image transformation.
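The defining property of an affine transformation can be checked directly: straight lines map to straight lines, and parallel lines stay parallel. A minimal sketch in Python (the specific shear value and line segments are illustrative, not taken from the study):

```python
import numpy as np

# A 2-D shear matrix: one of the affine transformations used as a
# distortion (shear, unlike translation or rotation, is not part of
# everyday visual experience).
shear = np.array([[1.0, 0.5],
                  [0.0, 1.0]])

# Two parallel line segments, each given by a pair of endpoints.
line_a = np.array([[0.0, 0.0], [1.0, 2.0]])
line_b = np.array([[3.0, 1.0], [4.0, 3.0]])

# Apply the transformation to every endpoint.
sheared_a = line_a @ shear.T
sheared_b = line_b @ shear.T

# Direction vectors of the transformed segments.
da = sheared_a[1] - sheared_a[0]
db = sheared_b[1] - sheared_b[0]

# Their 2-D cross product is zero, so the segments are still parallel
# even though both have been visibly distorted.
parallel = abs(da[0] * db[1] - da[1] * db[0]) < 1e-9
```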

 
Part of this research involved putting together an image database offering a large bank of natural-scene images for human and computer vision research. The images were color corrected for visual perception experiments involving color. The color calibration of the cameras was done with the kind guidance of Alejandro Parraga, in Tom Troscianko's laboratory at Bristol University. The result is the McGill Calibrated Colour Image Database.

The color calibrated images allowed us to model the responses of the luminance and chromatic channels of the human visual system to an image, and then use these channels to remove soft shadows and to develop a cartoon rendering algorithm.

Biologically Inspired Cartoon Rendering

The overall idea is that shading appears in the luminance channel (LUM) but not in the chromatic ones, known as red-green (RG) and blue-yellow (BY). 
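As a rough illustration of this idea (the formulas below are common textbook simplifications of opponent channels, not the exact model from the paper), a purely multiplicative shading gradient shows up in LUM while contrast-normalized chromatic channels stay flat:

```python
import numpy as np

def opponent_channels(rgb, eps=1e-9):
    """Split an RGB image (H x W x 3, floats) into a luminance channel
    and two contrast-normalized chromatic channels.  These are
    simplified opponent-channel formulas for illustration only."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    lum = (r + g) / 2.0                # luminance (LUM)
    rg = (r - g) / (r + g + eps)       # red-green (RG)
    by = (b - lum) / (b + lum + eps)   # blue-yellow (BY)
    return lum, rg, by

# A patch of a single surface color under a soft shading gradient.
shading = np.linspace(0.2, 1.0, 16)[:, None, None]
surface = np.array([0.8, 0.6, 0.3])
patch = shading * surface

lum, rg, by = opponent_channels(patch)
# lum varies with the shading; rg and by are essentially constant.
```

Because the chromatic channels are ratios, a multiplicative change in illumination cancels out of RG and BY, which is one way to capture the observation that soft shading lives in the luminance plane.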

For example, in this image, the soft shading of the ripples of the yellow cloth mat is only present in the luminance (LUM) image plane. These properties lead to a biologically inspired algorithm able to render cartoon-like images. Overall, this body of work was inspired by the theories of color and human vision developed by my supervisor Fred Kingdom, and the work on shadow removal by M. F. Tappen, W. Freeman and E. H. Adelson.
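One way to sketch the rendering idea (using simplified opponent-channel formulas for illustration; the published algorithm differs in detail): flatten the luminance channel into a few discrete levels, which discards the soft shading, then recombine it with the untouched chromatic channels:

```python
import numpy as np

def cartoonize(rgb, levels=4, eps=1e-9):
    """Quantize the luminance channel to a few flat levels (discarding
    soft shading) while keeping the chromatic channels untouched.
    The opponent-channel formulas are simplified stand-ins for the
    published algorithm."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    lum = (r + g) / 2.0
    rg = (r - g) / (r + g + eps)       # red-green, shading-invariant
    by = (b - lum) / (b + lum + eps)   # blue-yellow, shading-invariant
    # Flatten the soft shading by snapping luminance to discrete levels.
    q = np.round(lum * (levels - 1)) / (levels - 1)
    # Invert the decomposition using the quantized luminance.
    r2 = q * (1.0 + rg)
    g2 = q * (1.0 - rg)
    b2 = q * (1.0 + by) / (1.0 - by + eps)
    return np.clip(np.stack([r2, g2, b2], axis=-1), 0.0, 1.0)

# A single-color surface under a smooth shading ramp becomes a few
# flat bands of the same hue.
ramp = np.linspace(0.2, 1.0, 32)[:, None, None] * np.array([0.8, 0.6, 0.3])
toon = cartoonize(ramp, levels=4)
```

With the quantization step removed (`q = lum`), the function reconstructs the input exactly, so all of the cartoon effect comes from flattening the luminance plane alone.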


Biologically inspired cartoon rendering, a colour vision approach. Adriana Olmos and Frederick A. A. Kingdom, McGill Vision Research. Original climbing video courtesy of DrTopo.com; music "Such Great Heights" by The Postal Service.

 

Publications

Olmos, A., Kingdom, F. A. A. and Field, D. J. (2004), "How sensitive are we to distortions in natural scenes?", Journal of Vision, 4(8), 878a.

Olmos, A. and Kingdom, F. A. A. (2004), "Biologically inspired recovery of shading and reflectance maps in a single image", Perception, 33(12), 1463-1473.

Olmos, A. and Kingdom, F. A. A. (2005), "Automatic non-photorealistic rendering through soft-shading removal: a colour-vision approach", 2nd International Conference on Vision, Video and Graphics, Edinburgh, Scotland.

Kingdom, F. A. A., Field, D. J. and Olmos, A. (2007), "Does spatial invariance result from insensitivity to change?", Journal of Vision, 7(14):11, 1-13.