I’m currently trying to write an article about imagining depth in a photograph, which is, after all, a flat piece of paper. The task is proving quite difficult.
In the article I want to move beyond compiling a list of compositional techniques that help create the illusion of depth – for example, converging parallel lines, overlapping objects, or diminution in scale, where same-sized objects look smaller the further away they are from us.
There are plenty of sources for that sort of information, and the techniques can easily be incorporated into our compositions to create the illusion of depth or, as it is sometimes called, the ‘z-axis’. In the article I want to go beyond tips and techniques to say more about how depth perception works – how our brain creates a three-dimensional image of the world from the two-dimensional data received by each eye. That goal has led me into the realm of ‘visual intelligence’ (a term coined by Professor Donald D. Hoffman at the University of California, Irvine).
Although we might think we see with our eyes, it’s much more complicated than that. We actually ‘see’ with our ‘mind’s eye’: we see what our ‘visual intelligence’ constructs, and we learn to do that in the first year of life, before we can even walk or talk. Fortunately, our ‘visual intelligence’ relies on a set of universal rules that transcend language and culture. That is what enables a photographer and a viewer to reach a similar conclusion about the three-dimensional reality a photograph is attempting to convey, even if they’ve never met or discussed the photograph, speak different languages, or live on opposite sides of the globe.
The problem I face with the article is condensing everything that is fascinating about ‘visual intelligence’ into a short post. I could easily write twenty pages on the subject, but condensing all that into one or two is proving difficult. Nevertheless, I’m getting there slowly. In the meantime I decided to do some more work on a photograph I took a year ago in Padley Woods, in the Peak District of England. In a way it’s related to ‘visual intelligence’, because the photograph deliberately creates some visual ambiguity about the subject matter: when someone looks at it, their ‘visual intelligence’ has to work a little harder to make sense of the image. As a consequence, I hope the viewer is encouraged to engage with and explore the photograph in more detail (even if that only happens unconsciously).
The title of this post (‘Stones, lichen and shadows’) takes most of the ambiguity away by giving a clue to the nature of the subject matter, but without a caption or introductory description it can be difficult to instantly make sense of parts of the image. For example, some people have interpreted it as a picture of a pile of fossils, where leaves have left their impression in the stones, rather than a photograph of a dry-stone wall with the sun casting the shadows of fern leaves onto it. Our ‘visual intelligence’ is pretty smart, but it’s not foolproof, so it doesn’t always get things right. There are plenty of famous optical illusions that demonstrate the point – for example, Oscar Reutersvärd’s 1934 illusion ‘the devil’s triangle’, where our ‘visual intelligence’ is fooled into ‘seeing’ a three-dimensional triangle that can’t physically exist.
Another connection with ‘visual intelligence’ is the perception of colour, and in particular our sensitivity to green. Our eyes detect only three colours – red, green and blue – and our brain does the rest, constructing the wide palette of colours we can distinguish. Green can be a particularly difficult colour for landscape photographers for several reasons: our camera sensors are biased to collect more green light than red or blue (a typical Bayer filter has two green photosites for every red or blue one); we are particularly sensitive to green, possibly because of its importance in our evolution as a species, when we were more reliant on the natural world for our survival; and the green in a lot of our subject matter – grass, trees and so on – is a ‘memory colour’: viewers ‘know’ what colour that sort of subject should be in a photograph.
The photograph in this post (‘Stones, lichen and shadows’) relies quite heavily on green to give the image a subtle variation of interest and texture. However, the green tones vary considerably, from yellow-green to blue-green. The Impressionist painters understood the subtle variations of colour in a predominantly green landscape, often adding blues, pinks and yellows, for example, to bring visual realism to a scene that the casual observer or untrained eye would simply describe as green. Adjusting the greens so they appear realistic, without losing their aesthetic appeal and compositional contribution, therefore formed an important part of the post-processing of this image.
Working on the green areas without altering other parts of the image could be difficult: the green is distributed almost randomly across the stones, the same colour reads light or dark depending on the localised lighting created by sunlight and shadow, and the hue itself varies from yellow-green through to blue-green. Fortunately, the raw converter Capture One has a neat way of constructing extremely complex masks that separate parts of an image based on a selected colour, and it does so in seconds, which is really useful. In this image I wanted to select the green (whether yellow-green, green or blue-green) and tone down its saturation a little. The following image is a screenshot, taken with my iPhone, of the mask Capture One created in two seconds.
The quality of the screenshot isn’t perfect, but hopefully you can see how Capture One has avoided some parts of the image while still picking up even small, isolated areas it saw as matching the colour I’d selected as the target for the mask. Drawing this mask by hand would be extremely difficult and time-consuming, which is why Capture One is often my raw converter of choice, particularly when I anticipate needing complex masks defined by colour. In this image, for example, identifying all the green pixel by pixel and then drawing the mask by hand – including the green in small parts of the fern-leaf shadows without affecting the entire shadow, or in parts of stones without damaging the colour integrity of the rest of the stone – would be a major challenge.
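Capture One doesn’t publish how its colour masks are built, but the general idea – flag every pixel whose hue falls in a chosen band, then reduce the saturation of just those pixels – can be sketched in a few lines. This is only a simplified illustration under my own assumptions (the function name, the hue band covering yellow-green through blue-green, and the saturation factor are all hypothetical choices), not what Capture One actually does:

```python
import colorsys

def desaturate_greens(pixels, hue_min=60/360, hue_max=180/360, sat_factor=0.8):
    """For each (r, g, b) pixel with channels in the 0-1 range, desaturate
    pixels whose hue falls in the 'green' band (yellow-green to blue-green).
    Other pixels pass through unchanged."""
    out = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        if hue_min <= h <= hue_max:   # pixel counts as 'green' -> in the mask
            s *= sat_factor           # tone the saturation down a little
        out.append(colorsys.hsv_to_rgb(h, s, v))
    return out

# A pure green pixel is toned down; a pure red pixel is left alone.
result = desaturate_greens([(0.0, 1.0, 0.0), (1.0, 0.0, 0.0)])
```

Working per-pixel on hue rather than on hand-drawn regions is what lets this kind of selection pick up tiny, scattered areas – the same property that makes Capture One’s colour masks so much faster than drawing by hand.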
I find the colour-masking ability of Capture One particularly useful in landscape photography. I’ve also used it to select parts of silver birch trunks where I want to alter colour, contrast, sharpness or exposure without having to hand-draw complex masks. If you use masks in post-processing and haven’t seen the software, I’d suggest having a look, particularly since a trial version is available.
Anyway, time for me to stop and get back to the article I’m trying to put together on the perception of depth and ‘visual intelligence’ in photography – in particular, the way ‘visual intelligence’ helps the viewer of a photograph reach a shared perception with the photographer of the three-dimensional world it represents, even though the location where it was taken has been flattened onto a sheet of paper or a monitor screen, and the viewer has never been there.