35

How does the human vision system perceive color, and how can/should this be taken into account when taking and post-processing photographs (both in color and in black and white)?


The bit above is the question I'm asking; below are some follow-on musings exploring some of the question-space.

How do the color receptors in our eyes relate to the RGB used in Bayer (and Foveon) sensors and in RGB color spaces? Do the R, G, and B primary filters used correspond directly to the different types of cone cells in the retina? If not, why not?

How is the color response of the eye interpreted in the brain? How do those three (overlapping!) wavelength-responses get translated into a full range of hues?

Are there certain areas within the color space which we perceive where we can distinguish more close-together gradients of color? Are there areas where we effectively have "blind spots" within the spectrum — areas of low discrimination even though the wavelengths of light vary significantly? How do film and camera sensors respond in these same areas, and are there pitfalls or features which can be exploited due to this?

How much does physiological color perception (the specificity of discrimination mentioned above, in particular) vary from person to person? Leaving aside color blindness, are our cone cells all tuned to exactly the same frequency? How important are differences in this area to overall color perception?

What is the mechanism by which our internal "auto white balance" works? (Is it based on learned knowledge about the way things should look, or is it physiological?)

When we look at a black and white image, how does our memory of color affect our interpretation?


I am aware of and have read the Wikipedia article on human vision and on cone cells and some of the related articles one gets from following the wiki links. A summary of the basics is fine in the answers, but I'm really looking for aspects that are interesting for photography.


D. Lambert adds in a comment to an answer below:

Ok, so this is a pretty good biological introduction, but how do we, as photographers, make use of this info? Do we boost blues in our photos to compensate for low "S" counts? Is there something we should be doing to take advantage of the extra sensitivity for greens? Maybe there's something about the way our brains process color that accounts for the appeal of B&W photos in some cases. Is anyone aware of any work in this direction?

which is exactly the sort of thing I'm trying to get at with this question.


I found this quote to be interesting:

Our brains generate the colors we see for reasons of biological advantage, just as brains make up the qualities of all our other perceptions. If you have doubts about this assertion, consider the perception of pain. The sensation we perceive when we accidentally touch a hot stove is not a feature of the world but a sensory quality that leads to useful behavior. — Dale Purves, Brains: How they seem to work, FT Press, 2010

When we take a color photograph, we are working with that sensory quality in a unique way, different from how a sculptor or even a painter works. How can awareness of this be used in creation or appreciation of photographs?

mattdm
  • I think your latest questions would be better served by a biology book. And the answers would be better compiled into a Wikipedia article. – Leonidas Feb 01 '11 at 14:37
  • A biology book — or Wikipedia — is unlikely to view the questions from the specific angle of photography. – mattdm Feb 01 '11 at 14:45
  • Additionally, "this is in a book somewhere" is the answer to 99.9% of questions on all Stack Exchange sites. – mattdm Feb 01 '11 at 14:46
  • Interesting article in last week's Nature about a third type of photoreceptor in the eye: http://www.nature.com/news/2011/110119/full/469284a.html – AJ Finch Feb 01 '11 at 14:49
  • I think this (and related questions), while not of interest to all photographers, are definitely relevant to the appreciation, theory and practice of photography. – AJ Finch Feb 01 '11 at 14:50
  • I think this is at the same time a highly specific and extremely broad question concerning biology and especially neurology. The application to current photography is nonexistent, other than that "we can look at pictures, see". The question can't be answered straight because of the neuronal back-processing (see the perception/learned-knowledge subquestions) and the answer here is "This topic spans entire books, please work your way through them, if you really are interested." – Leonidas Feb 01 '11 at 15:05
  • Some of the other questions on human vision were appropriate... but this one doesn't even mention photography in it. As worded it's strictly a biology question... and thus doesn't belong here. – Craig Walker Feb 01 '11 at 15:13
  • Craig, I mention camera sensors in the second sentence. – mattdm Feb 01 '11 at 15:15
  • And frankly, I find the suggestion that perception of color does not relate to photography to be mind-bogglingly crazy. – mattdm Feb 01 '11 at 15:17
  • Please take a look at the targeted perception subquestion again. How does the knowledge of non-faulty variance in colour perception relate to photography? How does the knowledge that π people physically perceive blue more distinctly than you, and half of them perceive it "overall" warmer than the other half, aid you in taking or post-processing photos? Insults do not easily replace arguments. – Leonidas Feb 01 '11 at 15:26
  • I partially agree that this question is on the edge at best. However, recently this forum is not exactly flooded with strictly on-topic questions. Therefore, I don't see why we cannot afford an intelligent discussion here — while participants try to keep the relation to photography as much as possible. @mattdm, you are encouraged to reword the question to steer it more to the photography side. Everybody should remember that there is no real waste of forum bandwidth here, since those who are not interested just skip the discussion altogether. – ysap Feb 01 '11 at 15:26
  • @Leonidas: general perception of relative warmth of color is useful to know when color-adjusting one's own photos, don't you think? One can color-calibrate one's equipment, but what about one's eyes? And where color-calibration of equipment is based on a person's perception ("make this look neutral" / "are these colors the same?"), is it even possible for one person's very careful assessment to be generally right, or does individual variation make enough difference that a sample of different people's perceptions should be used? – mattdm Feb 01 '11 at 16:04
  • These questions have all been flagged as off topic, however I am not sure that is true. Vision is a key factor in every photographer's work, and while not everyone may be interested in these topics, many of us are. I think it is relevant discussion, especially given that we have a lot of technical and science types on these forums. The questions do specifically relate to photography, people are answering them, and there are no votes to close. – jrista Feb 01 '11 at 16:10
  • Hmmm, though — unless someone answers that last bit about black and white photography, I'm inclined to pull that out and make it a separate question. – mattdm Feb 01 '11 at 16:10
  • @mattdm: Don't we use measuring instruments instead of people precisely because of subjectivity in perception? Because two judges may neither see (even the adaptation of the eye is not a linear function) nor perceive a colour as equal, and a crowd, regardless of its size, expressing tastes about a colour does not establish objective measurements? In my mind you are asking "how important are physical differences to taste". – Leonidas Feb 01 '11 at 16:28
  • @Leonidas: that sounds like that's your answer to that one aspect of the question, then. :) But, what about when measuring instruments are not available? Is it a matter of taste, or are physical differences involved? And for that matter, "how important are physical differences to taste" seems like an interesting question in its own right. – mattdm Feb 01 '11 at 16:36
  • this is a great question. There are optical illusions that are carefully constructed manually to exploit how gullible our visual system is. I wonder if photographers have done anything in this area (color/brightness, not perspective illusions). – Ron Feb 07 '11 at 06:48
  • Here's an example of a Gimp plugin which takes advantage of this kind of knowledge! http://docs.gimp.org/en/plug-in-retinex.html – mattdm Feb 24 '11 at 23:17
  • Here's a great article on many aspects of color theory. It's not specific to photography so I'm posting it as a comment. – Stefan Monov Feb 26 '11 at 22:44
  • "How does the human vision system perceive ..."? -- Is Kanizsa's Triangle: https://upload.wikimedia.org/wikipedia/commons/thumb/5/55/Kanizsa_triangle.svg/220px-Kanizsa_triangle.svg.png , Amodal perception: https://en.wikipedia.org/wiki/Amodal_perception Etc. - How the Brain processes the COLOR / SHAPE / Background CONTEXT that the eyes present to it the essence of your question, as opposed to 'How does the eye see, Light perception' as outlined in https://en.wikipedia.org/wiki/Visual_perception ? - See: 4.2.6 of https://books.google.ca/books?id=bs8qBgAAQBAJ&pg=SA4-PA7&lpg=SA4-PA7 . Useful?

    – Rob Sep 19 '17 at 18:24

6 Answers

26

Land's work (among others) pretty much proved that we can make sense out of just about anything. The human eye is, from an engineering standpoint, a mediocre device at best, but it's backed up by a pretty amazing processing system: the visual cortex. I've known people whose first indication of color vision deficiency was when the nice fellows at the recruiting station told them that they couldn't get into an electronics trade because they couldn't see the "29" on the PIPIC card.

I'll assume that you aren't asking about using a luminance-only sensor (one that doesn't have factory-installed color filtration, like a Bayer matrix or a Foveon sensor), and so aren't too worried about how many exposures with how many filters it would take to make a color photograph.

In the strictly bio-optical sense, all we need to worry about (assuming we have adequate color vision ourselves) is removing our own adaptation biases from the entire workflow. That means reasonably well-calibrated monitors (critical calibration is only necessary when matching off-screen color references, like Pantone swatches or product samples; for most purposes "close enough" really is close enough) and examining output (prints or transparencies) under full-spectrum, daylight-balanced lighting (which will minimize intracranial post-processing -- our eyes evolved to work in daylight). It's a good idea, too, to take a break and revisit a picture with "fresh eyes" from time to time when post-processing -- we can easily fool ourselves into seeing more or less contrast or hue shift than is actually there due to habituation and concentration.


Because our eyes do not feature apochromatic correction, it would be a good idea to avoid hard color transitions (edges) that cause scintillation where possible, like red against blue. Since our eyes can't focus those two colors on the same plane, a two-dimensional representation of something that looks perfectly natural out in the real world (because the red and blue things are at different distances) will cause our autofocus to hunt and introduce luminance artifacts. Uluru (Ayers Rock) at sunset from the sunlit side on a clear day is beautiful -- almost beyond imagining -- but a picture of it is really hard on the eyes. A few clouds or a less-saturated sky can largely eliminate the scintillation. (The Expressionists exploited this fact deliberately to make skies appear brighter than they could actually be painted. You can use scintillation in the same way, but it's an effect you'd want to avoid most of the time -- particularly when working in-studio.)
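
A quick way to see this effect on your own monitor is to generate a test image. The sketch below (Python, assuming NumPy and Pillow are installed; the colors and file names are just illustrative) draws a saturated red bar on a saturated blue field, plus a desaturated version of the same layout -- the first tends to shimmer at the edges, the second sits still:

    import numpy as np
    from PIL import Image

    def swatch(fg, bg, size=400, bar=60):
        """Solid background with a vertical bar of a contrasting color."""
        img = np.tile(np.array(bg, dtype=np.uint8), (size, size, 1))
        img[:, size // 2 - bar // 2 : size // 2 + bar // 2] = fg
        return Image.fromarray(img)

    swatch((255, 0, 0), (0, 0, 255)).save("scintillation.png")   # hard on the eyes
    swatch((200, 80, 80), (80, 80, 200)).save("calmer.png")      # desaturated, much calmer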


Our perception of color also depends on context. That is, we perceive a color differently depending on the colors around it.

That's much more of a problem for an artist trying to paint something realistically than it is a photographer's problem. For instance, if you're trying to paint a still life in a low-key old masters chiaroscuro style, that lemon will never look right until you stop trying to use the bright lemon yellow you think you see and start using a muddy, toned-down yellow ochre. Most of the lemon will be a mid grey-brown barely biased towards yellow, but in the context of the surrounding colors it looks bright yellow.

On the other hand, if you were to paint the same still life, but with a light background and in a high key, getting the lemon to look the same bright yellow would mean using a bright lemon yellow pigment (which is not just brighter, but cooler) for much of the body of the lemon, and shadow and highlight colors would need to be cooler as well. Context changes a lot.
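
If you want to convince yourself just how strong this is, the sketch below (Python, assuming NumPy and Pillow are installed; the specific colors are only illustrative) places two patches with identical RGB values on a dark and a light surround -- most viewers will not read them as the same color:

    import numpy as np
    from PIL import Image

    H, W = 300, 600
    img = np.zeros((H, W, 3), dtype=np.uint8)

    img[:, :300] = (60, 50, 40)      # dark, warm surround
    img[:, 300:] = (220, 215, 200)   # light, cool surround

    patch = (150, 130, 90)           # the same muddy ochre on both sides
    img[120:180, 120:180] = patch
    img[120:180, 420:480] = patch

    Image.fromarray(img).save("simultaneous_contrast.png")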

In straight photography, this is a self-resolving problem most of the time. If you get the exposure right, the colors will look right in their actual context. (There may be some issues because our eyes see a wider dynamic range than we can fit into the final color space, but that's not a color perception problem.) You may notice some odd/unexpected colors popping up in your palette when you spot the image in post, but as long as you're selecting from nearby it's not something you need to give a lot of consideration to.

It's when you want to make wholesale changes that contextual color shift comes into play in a big way. That big block of OMG turn it down kindergarten-blocks orange in your original image becomes a weak, insipid pink or a dark, bloody crimson when you swap out the original background. It's something you'll notice right away. It may be a bit of a surprise when you first see it, but it's not a "real" problem -- you're going to adjust the background color or the subject curves until the image looks right to you. (Color spill, where reflected light from the background becomes part of the lighting of the subject, is a separate issue.)

The only time context becomes a real problem is when you need to hit spot color targets for a client (real, or imaginary if you're trying to learn the craft), and that's usually a situation where you either are or should be working with an art director who has at least half a clue, and the problems that arise are often not with the photography, but with the juxtaposition of your photography with other elements on the page/screen. Depending on the scene, you may have to make a choice between making the logo on the product package look right or be right. If it looks right, you might have to arrange things so that it doesn't come too close to the spot-color printed logo (the location of which is often a part of the official corporate look; see the client's communications manual). If the main color of what you record needs to be an actual match to the Pantone process version of the official color (again, see the comms manual), then you may have some restrictions on how you shoot the scene and what else can be included in it. Again, you should be working with an AD (or someone who has the capacity to make decisions on behalf of the client), and you may have to tell them they can't have what they originally wanted due to some real technical limitations -- but you'll be showing them the problem on-screen.


One last thing, mostly for interior/architectural photography: mixed lighting. Our eyes are pretty good at reconciling mixed lighting; cameras are not. There's a reason why you can get sort-of-ND, blue, and amber gels (probably mylar or acetate rather than real gels) in large, wide rolls -- they're for covering windows. If you're shooting an exterior but want to show the interior lighting (and it's not dark out yet), you'd cover the interior of the windows with weak blue gels to cool down tungsten or warm-balanced interior lighting a bit (just a bit -- you probably want warm, but not bright orange). Shooting an interior during the daytime, you'd want to use amber on the window exterior if the interior lighting is tungsten or warm-balanced fluorescent. This assumes that you need, for one reason or another, to use the actual lighting in place -- either because it's a feature you want to capture, or because it's the only practical way of lighting the whole space. This is pretty high-end stuff, though; you need a gel budget and a crew.
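
A rough numeric sketch of why the gels go on the windows instead of being "fixed in post": a white-balance correction is essentially a per-channel gain, and a single gain cannot be right for two different light sources in the same frame. The pixel values and gains below are made up for illustration, not measured:

    import numpy as np

    daylight_grey = np.array([0.80, 0.80, 0.80])   # neutral grey card lit by daylight
    tungsten_grey = np.array([0.95, 0.75, 0.45])   # same card lit by ~3200 K tungsten

    # Gain that neutralises the tungsten cast (scale each channel toward the mean)
    gain = tungsten_grey.mean() / tungsten_grey

    print(tungsten_grey * gain)   # tungsten-lit area: now neutral
    print(daylight_grey * gain)   # daylight-lit area: pushed strongly blue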


As diurnal critters, we are also biased towards color temperature. A warm (red/yellow) balance, as naturally occurs at the beginning and end of the day, tends to evoke a somewhat more relaxed attitude, while a cool (blue/green) balance puts us in a more serious mood (as it should if daytime food gathering is a priority). That being said, warmth plus very high contrast means firelight at night, which can either be intimate or spooky. In the natural world, we learned that bright colors mean either "really dangerous" or "really good to eat"; either way, they're meant to draw our attention, and still do. But that's about the end of the physiological and evolutionary stuff.


Most of the other effects of color are culturally and personally biased, and here you are stepping well out of the world of hardware into the world of software. It wouldn't matter a whit if humans had three or thirty-seven different classes of cones for data gathering if, culturally, red still meant "stop" and green still meant "go" and the two together still meant "Christmas" (which, in turn, means something completely different to those for whom Christmas invokes warm family feelings and to those who feel loneliness or cultural isolation at that particular time of the year).

If you are looking for universals, well, the best you can hope for is a sort of regional consensus, and straying too far outside of your own experience is going to be like speaking a foreign language -- you're probably going to miss a lot of the subtleties, nuances and connotations that a native "speaker" of that color culture experiences. Unlike language, though, you're probably not going to run across too many people who are willing to "listen" and try to make sense of what you're trying to say.

Even among people with a common culture, you can't count on common experience. Colors that are strongly evocative to you may be the next best thing to meaningless to the fellow next door, or you may find that your attempt to echo the little red wagon awakens memories of fire trucks, riots and looting among your not-so-suburban audience.

All you can do is say what you mean to say in a way that makes sense to you. Others will see what they see, and you can't really force them to see what you do without the photographic equivalent of explaining the punchline. All art is abstraction; the meaning is up to the viewer. As an artist, you can only ever convey the most superficial meaning directly (what the subject is and what the subject is doing -- the journalistic aspects). Everything else is the audience participation portion of the program, and the audience will bring their own cultural and personal experiences and biases with them.

  • Thank you! An interesting post overall. Specifically, bits like the reason red and blue cause that reaction next to each other are exactly what I was looking for. I'm sure there's a lot more along those lines. – mattdm Feb 05 '11 at 04:52
  • There really isn't much else; the definitive reference is probably still Itten's The Elements of Color ( http://www.amazon.com/Elements-Color-Treatise-System-Johannes/dp/0471289299 ). I don't know that it's a reference you need to buy and keep around; it's not very heavy reading, and a once-through will give you more than enough to carry with you for life. –  Feb 05 '11 at 06:28
  • I think it's funny that you say "there's not much else besides the red/blue interaction", while Matt Grum says "there's not much else besides the number of green sensors in bayer". – mattdm Feb 07 '11 at 19:45
  • It's not that red/blue is everything, but that red/blue is the worst example of the phenomenon -- all saturated (pure) colors of very different wavelengths will exhibit the phenomenon to one degree or another -- but understanding that piece of the puzzle is about all there is to it in photography. Contextual color shift (the way a color appears to change depending on what's around it, and the other big optical concern; I'll add it to my answer) is more of a painter/designer's problem and really only comes into play in extreme post-processing. Reproducing Itten here, though, is a bit much. –  Feb 07 '11 at 20:13
11

The eye has two types of photoreceptive cells, rods and cones. Rod cells work in low light, are located toward the periphery of the retina, and sense form and movement, whereas cone cells are densely packed in the centre of the retina and sense colour but require more light. Think cone = colour to help remember which is which.

There are three types of cone cell, L, M, and S, which sense different parts of the spectrum, broadly corresponding to yellow (Long wavelengths), green (Medium wavelengths), and blue (Short wavelengths) of light. They are randomly distributed, so they are more like colour film than the regular arrangement of colours in a Bayer sensor. Intermediate tones are interpreted from the relative responses of each type of cell in a manner loosely analogous to Bayer demosaicing, except cells are paired up so that a pair of L and M cells records the red/green axis of the incoming light colour, and L/M pairs are paired again with S cells to record the blue/yellow axis. Thus we see colour in something closer to the L*a*b* space than the RGB space. This makes sense, as L*a*b* was designed to better cover the colour gamut of the human eye, which is fingerprint-shaped, rather than the triangle-shaped RGB spaces.
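
For anyone who wants to poke at the numbers, here is a hedged sketch (Python with NumPy; the two matrices are standard published values, but the opponent step is a toy simplification rather than the retina's actual wiring) converting a linear sRGB triplet to approximate L, M, S cone responses and then to crude opponent channels:

    import numpy as np

    # Linear sRGB -> CIE XYZ (D65)
    RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                           [0.2126, 0.7152, 0.0722],
                           [0.0193, 0.1192, 0.9505]])

    # XYZ -> LMS, Hunt-Pointer-Estevez matrix normalised to D65
    XYZ_TO_LMS = np.array([[ 0.4002, 0.7076, -0.0808],
                           [-0.2263, 1.1653,  0.0457],
                           [ 0.0,    0.0,     0.9182]])

    def cone_responses(rgb_linear):
        """Approximate L, M, S cone excitations for a linear sRGB triplet."""
        return XYZ_TO_LMS @ (RGB_TO_XYZ @ np.asarray(rgb_linear, dtype=float))

    def opponent_channels(lms):
        """Toy opponent signals: luminance, red/green, blue/yellow."""
        L, M, S = lms
        return L + M, L - M, S - (L + M) / 2

    print(opponent_channels(cone_responses([1.0, 0.0, 0.0])))  # pure red
    print(opponent_channels(cone_responses([0.0, 1.0, 0.0])))  # pure green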

Due to the closeness of the L and M frequency response curves, and the relative rarity of S cells (only 1 in 20) the eye is more sensitive to green & yellow wavelengths of light, and I have heard that this is why Bayer sensors have twice as many green pixels as they have red or blue.

This would make sense from an evolutionary perspective since if you're hunting & gathering in dense greenery then being able to detect fine gradations of colour would help you find food. Blue is also rare in nature (among flora and fauna) which accounts for the lack of S cells.

I believe the frequency response of each type of cone cell is very similar from individual to individual; however, the relative numbers of L and M cells can vary widely, from 75 : 20 to 50 : 45 (I had to dive into Wikipedia for this).

The amount of light also drastically influences colour perception in humans. To the naked eye stars mostly appear white, due to the low level of incoming light, whereas in reality they span a range of colours depending mainly on their surface temperature.

Matt Grum
  • @Matt Grum: Really? So when I look at Betelgeuse, I'm really not seeing red? How strange -- I sure thought I was! – Jerry Coffin Feb 01 '11 at 16:20
  • Betelgeuse is particularly bright in the sky -- one of the brightest. That in fact supports what Matt Grum is saying. But even then, honestly, it usually looks pretty much white to me! – mattdm Feb 01 '11 at 16:23
  • @Jerry Coffin: You would only see red if you look at it pretty much dead-on, since our cones are mostly concentrated in the center of our retina (well, slightly off center, near our blind spot). Stars tend to be more visible when looked at slightly more off-center, however rods aren't sensitive to red wavelengths at all, so what you're seeing when you look at most stars is primarily their luminance, with a very slight amount of color. If we had greater color sensitivity, the very slight tinge of red you see in Betelgeuse would be much more saturated and "colored". – jrista Feb 01 '11 at 17:02
  • Lending credence to this, if you look at stars through a telescope, which effectively makes them brighter, they differentiate in colour quite a lot. There are a few striking examples of double stars where one is distinctly blue and the other is distinctly yellow. – CanSpice Feb 01 '11 at 17:06
  • Ok, so this is a pretty good biological introduction, but how do we, as photographers, make use of this info? Do we boost blues in our photos to compensate for low "S" counts? Is there something we should be doing to take advantage of the extra sensitivity for greens? Maybe there's something about the way our brains process color that accounts for the appeal of B&W photos in some cases. Is anyone aware of any work in this direction? – D. Lambert Feb 02 '11 at 14:37
  • @D. Lambert — exactly! Do you mind if I paraphrase this comment in an edit to the original question? – mattdm Feb 03 '11 at 15:05
  • @mattdm - No problem at all -- go right ahead. I'd love to know if there's any science behind why any given photo is appealing to some people vs. others, or generally appealing to most people, or whatever. I like the focus of this question on color (as opposed to composition, etc.). – D. Lambert Feb 03 '11 at 15:15
  • @D. Lambert The question was originally "how does the human eye perceive colour", it subsequently changed after I answered it! To be honest it's not all that relevant to photography beyond explaining why there are more green pixels... – Matt Grum Feb 03 '11 at 19:14
  • @matt - Fair enough. Hopefully, the question is more relevant as it's now phrased. – D. Lambert Feb 03 '11 at 21:08
  • @Matt Grum — sorry about that; I had meant to have that more clear in the original question. – mattdm Feb 04 '11 at 15:36
  • I'm still surprised you don't find it relevant. Objects in the world interact with different wavelengths of light in a concrete way, giving them a certain "ideal" color as perceived by (say) "God's eye". Our own eyes, though we take it for granted, record a very small subset of that in a rather idiosyncratic way, and our brain processes that into a perception of color. Cameras record a different subset of that real color, and that recording is then filtered through our human perception again on viewing. It seems like understanding differences in those idiosyncrasies is very useful. – mattdm Feb 04 '11 at 15:37
  • One tidbit I find particularly interesting here is the way yellow is perceived. To me at least yellow has a certain "primary" feel, beyond that of, say, cyan or magenta or purple. Maybe that's due to mixing paints in kindergarten (the RYB subtractive model!), but maybe it's influenced to some degree by its place in our "native" colorspace. – mattdm Feb 05 '11 at 00:15
  • Here's another technical follow-up question: why do we use RGB for sensors rather than following the L, M, S cells more precisely? Wouldn't that be inherently better? – mattdm Feb 07 '11 at 16:56
  • @mattdm: Are you implying that sensors have a spike response, only for a single wavelength each? They don't. The dotted lines in this graph represent the sensor response (of a 3-chip camera), the solid ones represent the eye response. They're pretty close. I don't know why they're not equal - probably we haven't perfected the tech yet. – Stefan Monov Feb 26 '11 at 22:40
8

I honestly don't think the physical mechanics of the eye map to making better pictures, unless you are talking about 3D. What's more important is the emotional response to the colors we see. Art has more to teach us about color than science. In short, Color Theory is what we should spend more time on, as that is a more practical discussion of the human perception of color.

We perceive "cool" hues (blues and purples) differently than we perceive "warm" hues (reds and yellows). The quotes around warm and cool have to do with our perceived feeling when we see these hues rather than the purely Kelvin color temperature required to reproduce the hues. The perceptions are ground into us by experience. When the weather is cold outside, the sky is typically gray and we get less direct sunlight. This in turn provides a bluer tint to everything we see. Conversely, when the weather is warm outside and the sun is out, we get more direct sunlight which in turn provides a redder tint to everything. Hence our perception of these hues.

There are a wide range of emotions that are tied to the colors that we see. A short list includes:

  • Bright colors/high contrast: excitement, stimulation, fun
  • Cool/low contrast: moody, depression, despair, reflection, cold
  • No color: introspection, separation, class, sophistication, masculine
  • Pastel/low contrast: good mood, light feelings, care, feminine

This is by no means a comprehensive list, and there are exceptions to these perceptions as well. The colors used in a photograph can play a big part in the emotional impact on the viewer looking at the photograph. Another part of this is the paper used to reproduce the picture:

  • Glossy: provides punch to the colors, adds contrast by reflecting away stray light
  • Matte: reduces contrast by diffusing the light across its surface; more subdued
  • Silk/Luster: Provides a balance between the two aforementioned extremes.

When it comes to black and white photography, color theory is equally important, as it is our primary tool to control the contrast in the scene. In this discipline, it helps to know about color filtration. In essence, when looking at an RGB color wheel (primary colors of light as opposed to pigment), a filter will block or reduce the color opposite its own on the wheel. Common filters used in traditional black and white photography are listed below; a digital channel-mixer sketch that approximates these filters follows the lists.

  • Yellow: blocks blue providing for a more dramatic sky, while leaving green vegetation alone.
  • Red: blocks both blue and green for even more contrast. Also hides blemishes on skin as the reds and lighter skin tones merge (white becomes red and red remains unchanged).
  • Infrared: blocks everything except deep reds and infrared, needed for infrared photography -- produces a black sky, bright clouds, and bright vegetation. Almost no atmospheric effects (haze does not reduce contrast)

Depending on the scene, you also might use something along these lines:

  • Green: lightens vegetation and sky, emphasizes skin blemishes
  • Blue: lightens sky while leaving vegetation alone
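
For those working digitally, here is the channel-mixer sketch mentioned above (Python, assuming NumPy and Pillow are installed). The weights are illustrative starting points rather than calibrated matches for any real glass filter; note that the "no filter" row is simply the Rec. 709 luminance weights, which already favor green, echoing the eye's own sensitivity:

    import numpy as np
    from PIL import Image

    # Channel-mixer weights (R, G, B); each row sums to 1 so neutral greys keep their brightness.
    FILTERS = {
        "none":   (0.2126, 0.7152, 0.0722),  # Rec. 709 luminance
        "yellow": (0.45, 0.50, 0.05),        # darkens blue sky
        "red":    (0.90, 0.10, 0.00),        # dramatic sky, smoother skin
        "green":  (0.15, 0.80, 0.05),        # lighter foliage
        "blue":   (0.10, 0.20, 0.70),        # lighter sky, heavier haze
    }

    def to_bw(path, weights):
        """Weighted-sum black-and-white conversion of an RGB image file."""
        rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
        grey = rgb @ np.array(weights, dtype=np.float32)
        return Image.fromarray((np.clip(grey, 0.0, 1.0) * 255).astype(np.uint8))

    # Example usage ("landscape.jpg" is a placeholder file name):
    # for name, w in FILTERS.items():
    #     to_bw("landscape.jpg", w).save(f"bw_{name}.jpg")

Because each weight triple sums to 1, a neutral grey stays put while colored areas move up or down in tone, roughly the way they would on panchromatic film behind the corresponding filter.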
Berin Loritsch
  • I am absolutely interested in the vision system as a whole here, not merely the physical construction. Thanks. – mattdm Feb 04 '11 at 15:29
  • By the way, I would add that color theory is science in addition to being an aspect of art. – mattdm Feb 04 '11 at 15:44
  • It has application in both realms. Color filtration is definitely science, but emotional perception is art. – Berin Loritsch Feb 04 '11 at 15:48
  • That's not such a solid line either. Emotional perception can be science too — a soft science at least, and there's some hard science in there too. Applying that science in an attempt to evoke/communicate a desired response is an aspect of art. The science isn't necessary to art -- art can also be about constructing a language of color and perception based on intuition, on an external structure of symbolism, or even completely arbitrarily. But for the purposes of this question, I'm specifically interested in ways in which color science can inform art. – mattdm Feb 04 '11 at 18:47
  • @mattdm, Your comments underscore the fact that photography is equal parts science and art. More so if you do traditional film photography (I love the smell of fixer in the morning), but we are shaping the physical properties of light to artistic use. – Berin Loritsch Feb 04 '11 at 19:39
  • This knowledge makes it easier to achieve the picture I have conceptualized in my mind: knowing how perception bias in the eyes affects what most of us see while actually being in specific places (beach sunset, mountain morning, etc.) allows me to understand why pictures seem so different while editing, post-processing, etc. It also makes me understand and explore how I am actually seeing something, without even taking the picture. In conclusion, every piece of information we get, either scientific or artistic, can be used as an explanation of a complex phenomenon that we use to create art. – Jahaziel Jun 13 '11 at 21:55
4

There are some definite applications. Humans pay much more attention to differences than to things that are alike. One way to highlight the subject is to make it a different color than everything around it, which will make the subject pop.

Also, red is a color that humans in particular are pre-programmed to pay a lot of attention to. A red item will draw attention, and thus can be a powerful photographic tool.

Another interesting point is that humans perceive less color in low light. A camera's color response, by contrast, is largely independent of light level, though sensors do tend to amplify the red somewhat in low light. Thus, for an image taken at night to appear the way a human would see the scene, its color saturation should be reduced somewhat. Humans can still see some color in the dark, so full black and white is not required.
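
A small sketch of that suggestion (Python, assuming Pillow is installed; the saturation factor, red gain, and file names are illustrative, not measured values):

    from PIL import Image, ImageEnhance

    def night_look(path, saturation=0.4, red_gain=0.9):
        """Desaturate and slightly tame the reds so a night shot reads closer to night vision."""
        img = Image.open(path).convert("RGB")
        img = ImageEnhance.Color(img).enhance(saturation)   # reduce chroma
        r, g, b = img.split()
        r = r.point(lambda v: int(v * red_gain))            # pull back the amplified reds
        return Image.merge("RGB", (r, g, b))

    # night_look("night_scene.jpg").save("night_scene_scotopic.jpg")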

Hope some of this information helps!

PearsonArtPhoto
2

To raise my awareness of perceptual differences, I periodically browse examples at: http://www.michaelbach.de/ot/index.html .

The site's title:

89 Visual Phenomena & Optical Illusions

(Visual Illusion · Optische Täuschung)

by Michael Bach

Inspiritor
  • Thanks for this link. I too use optical illusions to learn to recognize when my eyesight is being deceived. Extremely valuable when doing architectural photography or when having to decide what white balance to use, when my color memory doesn't jibe with what the camera tells me. Here's another link with interactive illusions so you can adjust their parameters to tell just when your own senses might be fooled. http://lite.bu.edu/vision-flash10/applets/lite/lite/lite.html – Handy Andy Apr 06 '11 at 20:32
1

So, in looking for something completely different, I stumbled across Michael Reichmann's short essay Colour Theory as Applied to Landscape Photography, which turns out to be some of what I was after in asking this question, although it is really too short to be comprehensive. (And more on the artistic side, less on the technical. But that's okay.)

Michael Freeman's book (no longer in print; hopefully it will be released in a better-edited new edition) Mastering Color Digital Photography also has more useful information along the same lines and in more depth. (In many ways, it seems like an expansion of Reichmann's short article.)

(I've marked this question "Community Wiki", since I'm not really saying anything of my own here.)

mattdm