5

Color digital cameras are typically implemented by putting a color filter array (CFA) like a Bayer filter, plus an infrared cut filter, in front of a sensor that is sensitive to light frequencies spanning the full visible spectrum plus some range to either side of it.

The filters have two degradative effects:

  1. They exclude light from reaching the sensor. (E.g., a "green" sensor pixel may only receive photons that are within the range 500-570nm. Most others are rejected.)

  2. Resolution is lost to "mosaic" effects. (E.g., a green image component is only seen by half of the pixels in a Bayer filter.)

How are these losses quantified, and what is their typical magnitude in practice?

feetwet

4 Answers

4

The idea that any particular wavelength of light is only allowed to pass through one of the three filter colors used in a Bayer mask has been perpetuated to death. Fortunately, it is false.

Here's a typical enough spectral response curve of a specific camera sensor.
Sony IMX249 absolute QE
The visible (to humans) spectrum ranges from 390 to 700 nanometers. Notice that the "green" pixels respond, to one degree or another, to the entire range of visible light. That response is greatest between about 500 and 570 nanometers, but it is by no means zero at other wavelengths. The same is true of the "red" and "blue" filters. Each allows some light from the entire visible spectrum to pass. What differentiates them is just how much of the light at a particular wavelength is allowed to pass through and how much is reflected or absorbed.

There are Bayer masked CMOS sensors in current DSLRs that have quantum efficiencies approaching 60%. That should be enough to eliminate the fallacy that only 1/3 of the visible light falling on a Bayer masked sensor is allowed to pass the filter and be measured by the pixel wells. If that were indeed the case, the highest quantum efficiency of a Bayer masked sensor would be limited to 33%.
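
A minimal sketch of that bound in Python; the only inputs are the "only 1/3 passes" claim and the roughly 60% measured figure quoted above:

    # If the mask really passed only 1/3 of the light, then even a perfect
    # sensor behind it (100% QE) could not exceed 33% system QE.
    claimed_mask_transmission = 1 / 3
    perfect_sensor_qe = 1.0
    implied_cap = claimed_mask_transmission * perfect_sensor_qe   # 0.333...

    measured_qe = 0.60   # approximate peak QE of current Bayer masked sensors

    print(f"Cap implied by the 1/3 claim: {implied_cap:.0%}")
    print(f"Measured quantum efficiency:  {measured_qe:.0%}")
    # 60% > 33%, so the "only 1/3 passes" claim cannot be right.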

Note that the human response to visible light is similar. The cones in our retinas also overlap significantly in their spectral response.
human spectral response

What we perceive as colors are the differences in the way our brains process the varying responses of our blue, green, and red cones to different wavelengths and combinations of wavelengths.

In theory the infrared cut filter doesn't reduce any light visible to human vision, because none of the light it prevents from reaching the sensor is visible to human eyes. Infrared, by definition, begins just outside the range of visible light at 700 nanometers and extends to wavelengths of 1,000,000 nanometers (1 mm). Digital sensors are typically sensitive to IR light between about 700 and 1,000 nanometers. In practice, the near-infrared wavelengths just under 700 nanometers are sometimes attenuated slightly by the IR-cut filter.

So just how bad are the "degradative effects" identified in the question?

They exclude light from reaching the sensor. (E.g., a "green" sensor pixel may only receive photons that are within the range 500-570nm. Most others are rejected.)

As covered above, the best current CMOS sensors in DSLRs and other cameras have quantum efficiencies in the visible spectrum ranging from roughly 50% to 60%. In one sense you could say they lose roughly half the light that falls on them, or one photographic stop. But that's not a whole lot different from the human retina, so the argument could be made that they don't lose much of anything compared to what we see with our eyes.
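
As a rough check of the "one photographic stop" figure, here is a minimal Python sketch; the only assumption is that loss in stops is the base-2 logarithm of the attenuation relative to an ideal detector that counts every photon:

    import math

    def stops_lost(quantum_efficiency):
        # Stops lost relative to an ideal detector that counts every photon.
        return math.log2(1.0 / quantum_efficiency)

    for qe in (0.50, 0.60):
        print(f"QE {qe:.0%} -> {stops_lost(qe):.2f} stops lost")
    # QE 50% -> 1.00 stops lost
    # QE 60% -> 0.74 stops lost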

Resolution is lost to "mosaic" effects. (E.g., a green image component is only seen by half of the pixels in a Bayer filter.)

Again, all three colors in a typical Bayer array are sensitive to at least some of the "green" wavelengths between 500-570 nanometers. This overlap is leveraged when the monochromatic luminance values from each pixel well are demosaiced to create R, G, and B values for each pixel on the sensor. It turns out that, in terms of the ability to resolve alternating black and white lines, a Bayer masked sensor has an absolute resolution of about 1/√2 that of a non-masked monochromatic sensor of the same pixel pitch.
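
To see what that 1/√2 factor means in pixels, here is a minimal sketch; the 6000-pixel width is an assumed example, not any particular camera:

    import math

    width_px = 6000                       # hypothetical sensor width (assumption)
    mono_line_pairs = width_px / 2        # at best one line pair per two pixels
    bayer_line_pairs = mono_line_pairs / math.sqrt(2)

    print(f"Monochrome sensor: ~{mono_line_pairs:.0f} line pairs across the width")
    print(f"Bayer sensor:      ~{bayer_line_pairs:.0f} line pairs (about 29% fewer)")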

Also, don't assume that "white light" has equal amounts of energy across the entire visible spectrum. It does not. Anytime we are talking about camera sensor efficiency, this is the elephant in the room that is all too often ignored. If the light we are interested in capturing with our cameras is stronger in the green range, then a sensor that is more responsive to "green" wavelengths will be more efficient for our purposes than one that has a flatter response to energy distributed evenly across the entire visible spectrum!

Sunlight filtered by Earth's atmosphere, and the artificial light sources we create to mimic that light, has much more intensity in the mid-range (green) wavelengths than in the shorter and longer wavelengths on either side of the visible spectrum. Our retinas evolved to be most sensitive to the most energetic portion of "white light" and our cameras have sensors that mimic that.
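
A small sketch of why the spectrum of the light matters. The channel efficiencies and illuminant weights below are made-up illustrative numbers, not measurements of any real sensor or light source:

    # Illustrative only: assumed efficiencies of a green-biased sensor and
    # assumed relative energy of two light sources at three wavelengths.
    wavelengths_nm    = [450, 550, 650]
    sensor_efficiency = {450: 0.35, 550: 0.60, 650: 0.40}

    flat_light  = {450: 1.0, 550: 1.0, 650: 1.0}   # equal energy everywhere
    daylightish = {450: 0.8, 550: 1.2, 650: 1.0}   # more energy mid-spectrum

    def effective_efficiency(light):
        captured  = sum(light[w] * sensor_efficiency[w] for w in wavelengths_nm)
        available = sum(light[w] for w in wavelengths_nm)
        return captured / available

    print(f"Flat spectrum:        {effective_efficiency(flat_light):.1%}")
    print(f"Green-weighted light: {effective_efficiency(daylightish):.1%}")
    # The same sensor is "more efficient" under light that is stronger where
    # the sensor is most responsive.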


Please also see this answer to "Why is the sky never green? It can be blue or orange, and green is in between!", which includes this graphic:

[graphic from the linked answer]

Michael C
  • The first graph is not insightful because, really, only a fraction of the colour filters have the QE of any one channel. Divide the blue and red graphs by 4 and the green graph by 2, sum them, and you will see that the resulting QE graph (which is roughly equal to what you get if you use a monochrome conversion in post) is an order of magnitude smaller than the monochrome QE. It does show that Bayer CFAs are not losing much more than what they are expected to lose by design, though. – Euri Pinhollow Mar 02 '17 at 08:00
  • The QEs quoted are measured with the Bayer mask in place. – Michael C Mar 02 '17 at 09:19
  • How was monochrome QE obtained then? – Euri Pinhollow Mar 02 '17 at 09:33
  • I haven't quoted anything regarding monochrome QE. I referenced monochrome resolution limits. – Michael C Mar 02 '17 at 09:43
  • The quoted claim is disproved in this long comment: https://drive.google.com/file/d/0By7viOLQaKydMVUwTjVDT0JBSXM/view?usp=sharing – Euri Pinhollow Mar 02 '17 at 12:10
  • So TL;DR: In practice Bayer CFAs cut light by roughly 50% and resolution by about 30%. – feetwet Mar 02 '17 at 14:20
  • @feetwet 50% only if one assumes the QE of a non-masked sensor would be 100%. – Michael C Mar 02 '17 at 14:22
  • Did you not mean to say that a Bayer CFA in effect rejects 50% of visible light? If not, the question would be, "How much visible light is rejected by a typical CFA?" – feetwet Mar 02 '17 at 14:27
  • @feetwet In practice a Bayer CFA does not cut the resolution of monochrome subjects, provided they are not too saturated. A 50% QE cut is optimistic; it may well be less than that. – Euri Pinhollow Mar 02 '17 at 18:44
  • Unfortunately the QE of 60% is a peak value, the average is much lower. If you look at a wavelength of 650 nm, the red channel passes 40%, the green channel 9% and the blue channel only 3%. That means that in total only 15% of the light with that wavelength is registered by the sensor, taking into account that there are 2 green pixels for every blue or red pixel. – Orbit Jan 25 '20 at 12:52
  • The reason there are two green pixels for each blue/red pair is that the human retina is similar. This is due to evolving in a world where the dominant light is much stronger in the "green" range of wavelengths closer to the center of the visible spectrum than in the extremes of the visible spectrum. When shooting under anything resembling sunlight/natural light (which most man-made artificial light sources are), our sensors are designed to be more sensitive to the ranges of wavelengths that are more plentiful. – Michael C Jan 25 '20 at 18:18
  • As you can see from the relatively old Sony IMX249 shown above, the peak monochrome efficiency is around 70% at 505 nm and stays above 60% from 420-575 nm. Between about 400 nm and somewhere just past 700 nm, an average efficiency of 60% is maintained. That's almost the entire visible spectrum... – Michael C Jan 25 '20 at 18:23
  • So the comparative color QE needs to be compared against that 60%, not an assumed 100%. – Michael C Jan 25 '20 at 18:27
  • @MichaelC: The monochrome efficiency is measured without bayer filter, therefore it is very high. With the filter it is dramatically lower. The total efficiency can be calculated for every wavelength as shown above. At 650 the monochrome efficiency is 45%, yet the total efficiency is only 15%, as shown above. – Orbit Jan 28 '20 at 21:00
  • @Orbit You're spot-selecting one of the most inefficient places over the entire visible spectrum for that sensor and then arguing that is normative. It is not. At 540 nm, where our own eyes are most efficient, the difference between monochrome at 67% and green at 64% is negligible. Even assuming R and B are 0% (not the case), since half the sensels are filtered for green the entire sensor would still be 32% efficient at 540 nm. See how that works? – Michael C Jan 29 '20 at 00:19
  • @MichaelC: I never said that the 15% is normative, it was just an example. With a peak around 35% and also large regions where it is close to 15%, the average should be around 25%. That is nowhere near the 50 to 70% that you are claiming. So the conclusion is that a Bayer filter does cut out close to 2/3 of the light. – Orbit Jan 29 '20 at 22:37
  • No it doesn't. If the same sensor (from your own claims) when measured without a Bayer mask is 60% efficient, and it is 30% efficient with the Bayer mask, then the Bayer mask is only cutting 50% of the light hitting it. If it were cutting 2/3 of the light and what light gets through the Bayer mask was striking a sensor with 60% QE without a Bayer mask, then only 60% of the 33% that makes it through the Bayer mask would be measured, for a system QE of 20%. – Michael C Jan 29 '20 at 23:27
  • So now saying some color sensors are "approaching 60%" somehow actually means we're claiming 70%? – Michael C Jan 29 '20 at 23:29
  • @MichaelC: The Bayer filter does let around 50% pass at some wavelengths, but only 25% at others; that's why the total efficiency comes close to 1/3, although it is slightly higher than that because it has 2 green filters for every red or blue one. The 20% in your last sentence is quite accurate. That is the situation at 470 nm. There the monochrome efficiency is 68%, the filter efficiency is 32% and the total efficiency is 20%. That is a region where the eyes are quite sensitive to blue light, but red and green also start to become active. – Orbit Jan 30 '20 at 20:44
  • @Orbit You're still assuming that "white light" has equal amounts of energy across the entire visible spectrum. It does not. Sunlight filtered by Earth's atmosphere, and the artificial light sources we create to mimic that light, has much more intensity in the mid-range (green) wavelengths than in the shorter and longer wavelengths on either side of the visible spectrum. Our retinas evolved to be most sensitive to the most energetic portion of "white light" and our cameras have sensors that mimic that. – Michael C Jun 11 '20 at 01:16
  • @MichaelC: Do you have any source for this? I find that it is pretty evenly distributed around noon. Around sunset it goes more towards red, as we all know. But even if it is so, we still need blue and red to represent all colors. – Orbit Jun 11 '20 at 18:15
  • @Orbit Please see here and here – Michael C Jun 16 '20 at 12:13
  • @MichaelC: The difference does not seem much in these plots. It does not seem likely that it makes a very big difference. – Orbit Jun 16 '20 at 21:20
1

There's no "lost" resolution. A manufacturer may engage in "specmanship" by advertising X-megapixels, but the resolution is defined by the pixel size, fill factor (what percent of the pixel surface is light-sensitive), and the number of pixels per color group. Further, there are well-developed algorithms for 'retrieving' resolution within an RGBG group based on a posteriori analysis of the RGBG group's, and its neighboring pixels', signals.

As to filter throughput: the spectral transmission curves for common camera RGB filters (and RGBY for some esoteric designs) are readily available on the web. Use them with care, since the signal loss for a given photopic (retina + brain) color, which is typically produced by several different incoming photon wavelengths, can vary considerably from one color to another. However, camera manufacturers are well aware of this, so both the RAW lookup tables and the internal JPG converters perform a color-rebalancing algorithm to compensate.
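
A minimal sketch of that kind of per-channel rebalancing, assuming a demosaiced floating-point RGB image and hypothetical gains; real cameras derive their gains from the measured filter transmission and the selected white balance:

    import numpy as np

    def rebalance(rgb, gains=(1.9, 1.0, 1.6)):
        """Multiply each channel by its gain and clip to the valid range.

        The gains are hypothetical illustrative values, not from any camera.
        """
        balanced = rgb * np.asarray(gains, dtype=rgb.dtype)
        return np.clip(balanced, 0.0, 1.0)

    # Example: a neutral grey patch as it might come off the sensor, with the
    # red and blue channels reading low because their filters pass less light.
    raw_grey = np.tile(np.array([0.26, 0.50, 0.31]), (4, 4, 1))
    print(rebalance(raw_grey)[0, 0])   # roughly equal R, G, B after rebalancing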

Carl Witthoft
  • Regarding the resolution: I think what you're saying is that advertised resolution for a color sensor is largely tied to the number of Bayer groups. I.e., it is more than 1 pixel per Bayer group, but it's definitely not 4 pixels per Bayer group. But note that with no Bayer filter you really do get 4 pixels where before you had 1 Bayer group. – feetwet Mar 01 '17 at 17:12
  • "but the resolution is defined by ... the number of pixels per color group" - I think that it is reasonable to at least mention that resolution can be both chromatic and luminous. I.e. Xtrans cameras have bigger share of green pixels and they have better luminous resolution but worse chromatic resolution. – Euri Pinhollow Mar 02 '17 at 12:26
  • @EuriPinhollow You make a good point, but I'd suggest not using "resolution" to describe chromatic accuracy. That's nonstandard usage. – Carl Witthoft Mar 02 '17 at 12:27
  • @CarlWitthoft I did not say anything about chromatic accuracy (Luther-Ives conditions are a completely different story). White, red, green, and blue objects will be rendered with different detail depending on the CFA. Xtrans will give somewhat better resolution for white and green objects than a Bayer sensor but will resolve red and blue objects somewhat worse. This is what I mean when I say that Xtrans has worse chromatic resolution. A Foveon X3 sensor will not favour any colour at all (in well-lit conditions; aggressive colour transformation is not considered). – Euri Pinhollow Mar 02 '17 at 12:33
1

Summarizing from comments here, an upper bound on light loss due to an RGB filter is indeed a factor of 3, or 1.6 stops. In reality the response of each color filter element has some spectral overlap, so it's not quite that severe. Matt Grum estimates a factor of 2.5, or 1.3 stops.
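
A quick check of those figures (stops = log2 of the attenuation factor):

    import math

    for factor in (3.0, 2.5):
        print(f"Light-loss factor {factor} -> {math.log2(factor):.2f} stops")
    # Factor 3.0 -> 1.58 stops (~1.6)
    # Factor 2.5 -> 1.32 stops (~1.3)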

feetwet
  • Here is a specific calculation which uses Clark's graph for a Sony sensor: https://drive.google.com/file/d/0By7viOLQaKydMVUwTjVDT0JBSXM/view?usp=sharing – Euri Pinhollow Mar 02 '17 at 12:11
  • @EuriPinhollow - That's a good explanation of how to get the actual efficiency if one has access to the response (or "quantum efficiency") curves. – feetwet Mar 02 '17 at 14:14
  • That's only the "actual" efficiency if one assumes that "white light" has equal energy across the entire visible spectrum. It does not. It has the characteristics of sunlight filtered by the Earth's atmosphere, which is significantly stronger in the middle of the range, where both our retinas and the sensors we designed to mimic them are most sensitive, than at the extremes. – Michael C Jun 11 '20 at 01:21
0

Loss of resolution:

Physically, a factor of 4 is lost on the resolution because 4 pixels are needed to fully represent the color of a certain point. In practice the loss of resolution is negligible. This is because color filter arrays have twice as many green fields as blue or red, the eye is more sensitive to green (especially when it comes to resolution), and camera manufacturers use very sophisticated de-mosaicing algorithms that can recreate much of the lost resolution. Unfortunately, this does sometimes cause undesired artifacts, as described in this article:
https://www.dpreview.com/articles/3560214217/resolution-aliasing-and-light-loss-why-we-love-bryce-bayers-baby-anyway
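
To make the de-mosaicing step concrete, here is a minimal bilinear sketch in Python. The RGGB layout, the use of scipy for neighbor averaging, and the uniform test image are assumptions for illustration only; real camera pipelines use far more sophisticated, edge-aware algorithms than this.

    import numpy as np
    from scipy.ndimage import convolve

    def bilinear_demosaic(raw):
        """Very basic bilinear demosaic of an RGGB Bayer mosaic (2-D float array)."""
        h, w = raw.shape
        rows, cols = np.mgrid[0:h, 0:w]

        # Which sensels carry which color in an RGGB layout (an assumption here).
        r_mask = (rows % 2 == 0) & (cols % 2 == 0)
        b_mask = (rows % 2 == 1) & (cols % 2 == 1)
        g_mask = ~(r_mask | b_mask)

        kernel = np.array([[0.25, 0.5, 0.25],
                           [0.5,  1.0, 0.5 ],
                           [0.25, 0.5, 0.25]])

        def fill(mask):
            # Keep measured samples; fill missing sites with the average of
            # the nearest measured neighbors of the same color.
            known = np.where(mask, raw, 0.0)
            num = convolve(known, kernel, mode="mirror")
            den = convolve(mask.astype(float), kernel, mode="mirror")
            return np.where(mask, raw, num / den)

        return np.dstack([fill(m) for m in (r_mask, g_mask, b_mask)])

    # A uniform grey scene should demosaic back to a uniform grey image.
    mosaic = np.full((6, 6), 0.5)
    rgb = bilinear_demosaic(mosaic)
    print(rgb.shape, rgb.min(), rgb.max())   # (6, 6, 3) 0.5 0.5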

Light loss:

At the very least, about 50% of the light is lost due to a color filter array.

To show this, it is interesting to look at the Quantum Efficiency of the Sony IMX249 image sensor.

Quantum Efficiency of Sony IMX249

This sensor is particularly interesting because it is sold both as a color and as a monochrome sensor for which the manufacturer has plotted the quantum efficiency in the same graph. This makes it possible to compare the efficiencies of both and calculate the difference.

To calculate the loss of efficiency, measurement points were extracted from the graph with the online program WebPlotDigitizer. This gives the following result:

Quantum Efficiency of Sony IMX249

The measurement points from this graph have been entered in Excel for further processing. The efficiency of the red, green and blue channel of the filter can now easily be calculated by dividing the total efficiency of the channel by the monochrome efficiency, for every wavelength. This gives the following result:

[Graph: calculated efficiency of the red, green and blue filter channels versus wavelength]

We now know the efficiency of every channel at every wavelength. We also know that the sensor has 2 green, 1 red and 1 blue filter for every 4 pixels. So we can now calculate the total efficiency of the sensor by taking the average of 2 times the green, 1 times the red and 1 times the blue efficiency. This gives the following result:

[Graph: combined efficiency of the Bayer filter array versus wavelength]

This shows that the quantum efficiency of this filter peaks at about 57% at a wavelength of about 585 nm. The average is much lower though. If the average is taken between 410 and 700 nm, an efficiency of 40.4% is found. So a little more than half the light is lost due to the filter. The graph also shows that the efficiency is particularly bad for the blue channel; to compensate for this, additional gain is often applied to the blue channel. That is why noise is much more pronounced for this channel: Why is the blue channel the noisiest?

For fun we can also calculate the total efficiency of the sensor with the filter. For this, the same procedure was used as for the filter alone; this gives the following result:

[Graph: total quantum efficiency of the sensor including the Bayer filter versus wavelength]

This shows that the total efficiency of the sensor peaks at 37% at a wavelength of 505 nm (exact values taken from the Excel table). The average efficiency between 410 and 700 nm is found to be 23.8%.
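
The arithmetic of the last few paragraphs can be condensed into a short sketch. The sample points below are rough illustrative values in the spirit of the curves above (the 650 nm figures match those quoted in an earlier comment), not the exact data extracted with WebPlotDigitizer:

    # Illustrative sample points (wavelength in nm -> QE); not the real Excel data.
    mono_qe  = {450: 0.66, 550: 0.65, 650: 0.45}
    red_qe   = {450: 0.03, 550: 0.10, 650: 0.40}
    green_qe = {450: 0.12, 550: 0.50, 650: 0.09}
    blue_qe  = {450: 0.40, 550: 0.08, 650: 0.03}

    for wl in sorted(mono_qe):
        # Filter-only efficiency: color channel QE divided by monochrome QE.
        r = red_qe[wl]   / mono_qe[wl]
        g = green_qe[wl] / mono_qe[wl]
        b = blue_qe[wl]  / mono_qe[wl]
        # Bayer weighting: two green sensels for every red and blue one.
        filter_eff = (r + 2 * g + b) / 4
        sensor_eff = (red_qe[wl] + 2 * green_qe[wl] + blue_qe[wl]) / 4
        print(f"{wl} nm: filter {filter_eff:.0%}, sensor with filter {sensor_eff:.0%}")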

It should be noted that there is some apparent gain. The quantum efficiency only tells us what percentage of the photons are registered by the sensor, not how the result is perceived by the viewer. Our eyes are not equally sensitive to all colors:

[Graph: spectral sensitivity of the human cone types]
Source: http://hyperphysics.phy-astr.gsu.edu/hbase/vision/colcon.html

Take for instance the blue channel at 500 nm. At that wavelength, the blue channel only has a total efficiency of 30%. If a photon with a wavelength of 500 nm is registered by the blue channel, the sensor will not record that its wavelength was 500 nm; the signal will simply be reproduced by a monitor as blue light at around 445 nm. At 445 nm the blue-sensitive cones in the eye are much more sensitive than at 500 nm. So even though 70% of the light with a wavelength of 500 nm is lost, the 30% that does get through has a much higher chance of being registered by the eye than the original light. This gives some apparent gain. It is, however, not nearly enough to compensate for the lost light. It is generally accepted that about 50% of the light is lost due to a color filter array. There are many sources for this:

"the Bayer layer is responsible for 50-70% light loss: https://books.google.dk/books?id=xmknCgAAQBAJ&pg=PA53&lpg=PA53&dq=light+loss+due+to+color+filter+array&source=bl&ots=JM3C0_s_UH&sig=ACfU3U1qie_zYkgaI2LcB9LsOkvnG_08dA&hl=en&sa=X&ved=2ahUKEwjvy9fngYfqAhVFiIsKHZL8AHwQ6AEwAnoECAYQAQ#v=onepage&q=light%20loss%20due%20to%20color%20filter%20array&f=false

"the Bayer design throws away around 1EV of light": https://www.dpreview.com/articles/3560214217/resolution-aliasing-and-light-loss-why-we-love-bryce-bayers-baby-anyway

"The sensitivity of a color sensor is therefore at least 2-3x lower than that of a monochrome sensor": (pop up at bottom left) http://sciencestatic.aws.aaas.org.s3.amazonaws.com/publishing/posters/zeiss/zeiss.html

Orbit