
It is commonly stated that the spectral response of RGB colour imaging sensors (CIS) is chosen to match that of the human eye. While this is roughly true, on closer examination the match holds only to a very limited degree.

The human eye's spectral response shows significant overlap between the red and green cones: [image: human cone spectral sensitivity curves]

A typical colour imaging sensor's spectral response looks as follows: [image: typical CIS spectral response curves]

There are enormous differences between the CIS and eye spectral responses. The red 50% point is ~515 nm for the eye and ~580 nm for the CIS. The green 50% point is ~505 nm for the eye and ~475 nm for the CIS. The blue 50% point is ~475 nm for the eye and ~510 nm for the CIS.

Why is there such a large discrepancy between the two? Surely these sensors must require a large amount of colour-crosstalk correction in the sensor electronics?
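For what it's worth, that crosstalk correction is usually just a 3×3 colour-correction matrix (CCM) applied per pixel in linear space. The sketch below is a minimal illustration, not any real camera's matrix: the coefficient values are hypothetical, chosen only to show the typical shape (positive diagonal, negative off-diagonals that subtract the overlap between filter passbands, rows summing to 1 so greys stay neutral).

```python
import numpy as np

# Hypothetical 3x3 colour-correction matrix (CCM). Real cameras calibrate
# one per sensor model and illuminant; these values are illustrative only.
# Negative off-diagonal terms subtract the crosstalk from the overlapping
# filter passbands; each row sums to 1 so a neutral grey (R=G=B) is
# left unchanged.
ccm = np.array([
    [ 1.6, -0.4, -0.2],
    [-0.3,  1.5, -0.2],
    [-0.1, -0.4,  1.5],
])

def correct(raw_rgb):
    """Apply the CCM to a linear raw RGB triple (or an Nx3 array)."""
    return np.asarray(raw_rgb) @ ccm.T

# A neutral patch stays neutral:
print(correct([0.5, 0.5, 0.5]))  # -> [0.5 0.5 0.5]
# A saturated raw red gets "purified" (green/blue crosstalk pulled down):
print(correct([0.8, 0.3, 0.1]))
```

The point of the example is that the correction is cheap: one matrix multiply per pixel, which is why heavily overlapping filter responses are tolerable in practice.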

TopCat
    It is commonly stated that the spectral response of RGB colour imaging sensors (CIS) is chosen to match that of the human eye. Sources would be helpful because this seems more a strawman than part of an informed technical dialog. – Bob Macaroni McStevens Jan 05 '21 at 19:43
  • I'm not sure where you got your "human" numbers, but the Short wavelength, Medium wavelength, and Long wavelength cones are most sensitive at 420nm, 534nm, and 564nm, respectively. I'm not sure your sensor examples are typical for many dedicated cameras, either. Even so, 590nm is a yellow-orange color, not red which is about 640nm. Many Sony sensors used in large sensor dedicated cameras have peak sensitivities around 460nm, 540nm, and 590nm, respectively. – Michael C Jan 05 '21 at 19:47
    Matching the eye is not important, our brains are what we ultimately see with and they have some pretty sophisticated signal processing. – whatsisname Jan 05 '21 at 19:48
  • Calling human cones 'red', 'green', and 'blue' is anachronistic and dates to a time before we could more precisely determine the peak sensitivities of each. – Michael C Jan 05 '21 at 19:49
  • @MichaelC Calling human cones 'red', 'green', and 'blue' is anachronistic... let's be charitable, and just call them reddish, greenish, and blueish. Same (intended) difference. – scottbb Jan 05 '21 at 19:55
  • The SML/RGB cone response is from https://en.wikipedia.org/wiki/Spectral_sensitivity. – TopCat Jan 05 '21 at 19:56
  • Re: sources, this question states (unchallenged): "A digital sensor in a camera works by having filters in front of its pixels, and usually there are three types of filter. These are chosen with response curves as close as possible to figure (a) above, to mimic what the human eye sees." https://photo.stackexchange.com/questions/83923/why-dont-cameras-offer-more-than-3-colour-channels-or-do-they?rq=1 – TopCat Jan 05 '21 at 20:07
  • @TopCat Yes, and those numbers are fairly close to those in my comment. But the numbers you cite for sensors are not for the peak response, which is more important than the shapes of the shoulders. – Michael C Jan 05 '21 at 20:11
    @scottbb I'm not intentionally being uncharitable. It's just that I think the whole idea that our Long wavelength cones are most sensitive to "red", when they are not, and one-fourth of the color filters on our camera sensors are "red", when they are not, is what leads to a LOT of the confusion surrounding how our camera sensors mimic our retinas but our emissive color reproduction systems, which actually do use Red, Green, and Blue, do not. – Michael C Jan 05 '21 at 20:23
  • @MichaelC fair enough. And I'm sorry, I didn't mean to imply you were being uncharitable. Poor wording on my part. I get your point, and you're right. But notionally, especially in order to understand the basics of color perception and reproduction, R/G/B is all about approximation. Kind of like, Newtonian physics and Gallilean transformations get us through high school level physics, even though the more complete explanation requires relativity and Lorentz transformations. Newtonian physics is a good enough explanation most of the time. Granted, in the scope of this question, ... – scottbb Jan 05 '21 at 20:38
  • ... which is questioning the limits of the basic explanation, ... well, yeah. Ok, fair enough. – scottbb Jan 05 '21 at 20:38
    I think the assumption that the "red" cones in our retinas = the "red" filters of our cameras' Bayer masks = the "red" subpixels used by our emissive displays is a source of GREAT confusion that there is no "crosstalk" between color channels in our eyes, camera sensors, and monitors. The fact is, there is crosstalk between channels in all three instances, even before compensation for varying light sources illuminating objects comes into play. Without "crosstalk", we would not perceive "colors" (which are purely a product of our perception - not a property of "light") at all! – Michael C Jan 06 '21 at 08:16

4 Answers


This is a remarkably complex and interesting subject and no answer will be comprehensive.

The TL;DR result is that if the final picture looks acceptably like what you perceive, then it works.


Your eye spectrum diagram seems a little compressed, and you don't say where it came from. Here's one from Eye and Colors: [image: cone spectral sensitivity curves from Eye and Colors]

The color filters used for sensors will never match human spectral sensitivity. Not even humans match other humans. The question becomes whether you can manufacture color filters whose output can be acceptably processed into a result that works. On top of the filter-spectrum mismatch mentioned above, sensors also use twice as many green photosites as red or blue, as a way to bump up green sensitivity to more closely match typical human color perception.
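That 2:1 green weighting comes from the Bayer mosaic itself. A tiny sketch (the 4×6 sensor size is just an arbitrary example) makes the filter counts concrete, where 'R', 'G', 'B' name the filter colors on the mask, not cone types:

```python
import numpy as np

# The 2x2 Bayer tile: two green filters per one red and one blue.
bayer_tile = np.array([['R', 'G'],
                       ['G', 'B']])

# Tile it over a hypothetical 4x6 sensor and count the filters of each color.
mosaic = np.tile(bayer_tile, (2, 3))
counts = {c: int((mosaic == c).sum()) for c in 'RGB'}
print(counts)  # -> {'R': 6, 'G': 12, 'B': 6}
```

Half the photosites are green, so the sensor's overall luminance response leans green in roughly the way human daylight vision does; demosaicing then interpolates the missing two channels at every pixel.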

Could the color filters more closely match human spectral response? Probably, but at what cost and to what benefit, when a little software already handles the corrections?

On top of camera processing you also have the question of presentation.

Various display devices have their own spectra that also don't match the human eye, for example Samsung's quantum-dot displays versus conventional TVs: [image: Samsung Quantum Dot vs conventional TV emission spectra]

This too requires its own processing to produce results that are desired or acceptable.

For even more esoterica, you may want to delve into All the Colors We Cannot See, and Red-Green & Blue-Yellow: The Stunning Colors You Can't See.

scottbb
user10216038
  • @ user10216038 -- A tip of the hat from Alan Marcus – Alan Marcus Jan 05 '21 at 19:27
  • +1... buuuut The TL;DR result is that if the final picture looks acceptably like what you perceive, then it works. - this is an artistic photography site, and in the pursuit of art, anything goes. HDR doesn't look like what I see. Pushing the white balance to Tungsten while shooting outside doesn't look like what I see. But both are perfectly valid photos, even though they captured something that I cannot see. Same with infrared photography. In fact, perfect perceptual reproduction is very often not the goal of an artistic photo. – OnBreak. Jan 05 '21 at 23:04