9

[Photo: the red LED in the photo, with its brightest area rendered white rather than red]

Taking a photo of a small red LED causes the top of the LED to appear white rather than red in the resulting photo. I'm using a Canon S-120 and have tried many different settings with no luck. My friend, who is not into photography, got great results using his iPhone.

Kopter6

6 Answers

20

The iPhone is handling this better because of its automatic HDR.

The LED is white (really, a very, very light pink, to my eye) because it's so bright that your camera's sensor overloads. That can happen in more than one way: the charge "bleeds" into adjoining green and blue pixels; the small percentage of red that gets through the filters on the blue and green elements is still enough to nearly max them out; or, as pointed out in comments, the LED output isn't of high spectral purity -- that is, it looks red but still contains some blue and green wavelengths. Multiplied by a large overexposure and pixel clipping, any of these makes the LED whiter in proportion to the level of overexposure. An iPhone (and many other current smartphones with top-line cameras) automatically takes multiple exposures and combines them to preserve detail across a wider brightness range than the sensor can record in a single image.

If you were to do the same (bracket exposure and perform HDR mixing) you'd get similar results with any controllable camera, regardless of the mechanism making the LED appear whiter than it should.
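
To make the clipping mechanism concrete, here is a minimal Python sketch in the spirit of Loren Pechtel's comment below (the numbers are purely illustrative, not measured from any real sensor): a slightly impure red is scaled up to simulate overexposure and each channel is clipped at the sensor's maximum, which drives the recorded colour toward white.

```python
# Illustrative numbers only -- not measured from any real sensor.
def expose_and_clip(rgb, overexposure, full_scale=255):
    """Scale an RGB triple by an overexposure factor, clipping each channel."""
    return tuple(min(round(c * overexposure), full_scale) for c in rgb)

# A "red" LED whose light (or filter leakage) still puts a little into G and B:
led = (255, 10, 10)

for factor in (1, 4, 20, 100):
    print(factor, expose_and_clip(led, factor))
# 1   -> (255, 10, 10)    clearly red
# 4   -> (255, 40, 40)    washed-out red
# 20  -> (255, 200, 200)  pale pink
# 100 -> (255, 255, 255)  pure white
```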

Zeiss Ikon
  • I'm going to ask my friend if his HDR was turned on when his photo turned out correctly. Bear in mind both my photo and his were taken of the full front of the electronic face panel. His iPhone and my Canon S-120 were both on auto exposure mode. – Kopter6 May 14 '22 at 13:09
  • HDR is helpful in some situations such as this, but not totally necessary. To make a colored light source look the color the human eye perceives it to be, it just needs to be properly exposed instead of blown out. Any light source of whatever color will look white if it is overexposed enough. What HDR is useful for is getting the surrounding, much dimmer area to show as well. – Michael C May 14 '22 at 20:20
  • Re, "very, very light pink, to my eye" Your eye, and your monitor, and whatever color calibration you are using. It looks absolutely white on my un-calibrated setup, but when I checked it out in GIMP, it turns out you are right. The level of the green channel is just a fraction of a percentage point less than the levels of the red and blue channels in the "blown out" part of the picture. – Solomon Slow May 15 '22 at 22:52
  • Can you explain the "bleeding into adjoining pixels" effect you mention? Is it something different from the other answers, which say that blue and green sensors have some sensitivity to red light? – user1079505 May 16 '22 at 08:42
  • @user1079505 It's more or less saying the same thing. Old sensors would electronically leak charge; newer ones (apparently) don't do that, but imperfect filtering does this just as well. Edited. – Zeiss Ikon May 16 '22 at 11:48
  • @ZeissIkon the result is similar, but the mechanism described is different. Do you know which effects might be present or dominating in equipment from the last 10 years? – user1079505 May 16 '22 at 15:50
  • @user1079505 In a perfect world the LED would be putting out 255,0,0. Overexpose it 20x, the camera would see 5100,0,0, this would clip to 255,0,0 and it would still look red. However, in the real world it's not perfect. Suppose it's putting out 255,10,10. Overexpose 20x, you get 5100,200,200. Clip to 255,200,200. See how the color changed radically? (Yeah, this isn't a perfect model by any means, but I'm simply illustrating how blown pixels change color.) – Loren Pechtel May 16 '22 at 17:26
  • @LorenPechtel Yes, this is clear. The question is why it happens. Not in an imaginary world, but in the one we live. Zeiss Ikon gives an answer that is significantly different from what the others wrote. – user1079505 May 16 '22 at 18:30
  • @user1079505 I'm not a digital expert; this is my best understanding of the two mechanisms in camera that could yield similar results. Impure LED output is a third possibility; some LEDs are very monochromatic; others are not. – Zeiss Ikon May 17 '22 at 11:04
12

There are a few factors here.

  • Camera sensors generally have a much narrower dynamic range than the human eye.
  • The camera will generally expose based on the overall brightness of the scene.
  • The color filter elements on the camera sensor are not hard stops; the "green" and "blue" filters will still let some red light through.

The result of the first two factors is that small, bright light sources in a scene will typically be overexposed. The result of the third factor is that overexposed items in a scene will typically look white.

So what can you do about it? You potentially have a few options:

  • Increase the ambient light level, so that the light sources are relatively less bright and the auto-exposure on your camera reduces the exposure of the light sources.
  • Reduce the exposure on your camera (smaller aperture, shorter shutter time, or lower ISO); this will make the rest of the scene darker, which may or may not be acceptable.
  • Use high dynamic range imaging techniques, where multiple exposures are taken at different exposure settings and then combined (a rough sketch of the idea follows below).
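
As a rough illustration of that last option, here is a minimal Python sketch using NumPy (a toy weighted average of radiance estimates, assuming linear sensor data and known exposure times; it is not the algorithm any particular phone or camera actually uses):

```python
import numpy as np

def merge_exposures(frames, times, clip=0.98):
    """Merge bracketed exposures into one radiance estimate.

    frames: list of HxWx3 arrays with linear values in [0, 1]
    times:  matching exposure times in seconds
    """
    num = np.zeros(frames[0].shape, dtype=float)
    den = np.zeros(frames[0].shape, dtype=float)
    for img, t in zip(frames, times):
        weight = (img < clip).astype(float)   # ignore blown-out channels
        num += weight * img / t               # this frame's estimate of scene radiance
        den += weight
    return num / np.maximum(den, 1e-6)        # per-channel weighted average

# Toy example: one pixel of a red LED.
long_exp  = np.array([[[1.00, 0.99, 0.99]]])  # long exposure: clipped, looks white
short_exp = np.array([[[0.50, 0.02, 0.02]]])  # short exposure: clearly red
print(merge_exposures([long_exp, short_exp], times=[1/30, 1/1000]))
# The merged value keeps the strongly red ratio from the short exposure.
```
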
Peter Green
7

When a colored light source is all white in a photo it means the light source is overexposed to the point all three color channels are fully saturated.

Even if the light is "pure" red, some of the red light will make it through the blue and green filters of the Bayer mask in front of the sensor. There's no hard cutoff between the differently colored filters. This isn't a flaw, it's intentional. These color filters emulate the cones in the human retina, which also have a lot of overlapping sensitivity between short-wavelength, medium-wavelength, and long-wavelength cones. The overlapping sensitivity is, in fact, necessary in order for our brains to create a perception of color.

If a light is red then the red-filtered photosites (a/k/a sensels or 'pixel wells') on the sensor will receive several times as much light as the blue-filtered and green-filtered photosites will, but they'll all get some of that red light. When the red light source is properly exposed, the amount of light recorded by the red-filtered sensels will be several times higher than the amount recorded by the blue- and green-filtered sensels. This means it is possible to fully saturate the red-filtered sensels without saturating the blue and green ones. But if you expose so brightly that the blue- and green-filtered sensels are also saturated, the red can't be any more saturated than it already was, because it has been exposed to several times the maximum amount of light it can measure. There's no way to tell how far past full saturation a sensel has been exposed. It could be 100.1% (1.001X) of full capacity, 1,000% (10X), 10,000% (100X), or even 100,000% (1,000X) of full capacity, and the sensel will record the same amount in each case: 100% (1.0X) of full capacity.

It's like trying to tell the difference between 2 inches, 10 inches, and 100 inches of rain using a rain gauge that can only hold 1 inch before it overflows. You have no way of knowing how much more than 1 inch of rain fell.

In order for an image to contain usable information (which is what we would call a properly exposed photo), some of the "rain gauges" (i.e. sensels) have to be fuller than others. If they're all totally full then there is no difference between any part of the image and there is no usable information.

To reduce exposure you have several choices (the "stop" arithmetic these options rely on is sketched after the list):

  • Reduce the camera's sensitivity by setting it to the lowest ISO setting.
  • Reduce the amount of light entering the camera by using a narrower aperture setting (higher f-number).
  • Reduce the amount of time the light is entering the camera by using a shorter shutter time. Each time you halve the amount of time the shutter is open you halve the amount of light that strikes the sensor. If an image is pure white it usually means you need to halve the shutter time at least three times (i.e. three "stops"). For example, instead of 2 minutes (120 seconds), try 15 seconds. Or instead of 1/30 second try 1/250 second.
  • Reduce the amount of light striking the front of the lens by using a neutral density filter. They are available in various strengths from one stop (half the light) to ten stops (1/1024 the light). Avoid cheap "variable density" filters if possible. They cause a lot of image quality problems and color shifts.
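
If it helps to see the arithmetic behind "stops", here is a tiny Python sketch (the specific times simply mirror the examples above):

```python
# Each "stop" is a factor of two in light; these helpers just do that arithmetic.
def shutter_after_stops(seconds, stops):
    """Shutter time after reducing exposure by the given number of stops."""
    return seconds / (2 ** stops)

def nd_filter_transmission(stops):
    """Fraction of light an ND filter of the given strength lets through."""
    return 1 / (2 ** stops)

print(shutter_after_stops(120, 3))      # 120 s  -> 15.0 s (three stops darker)
print(shutter_after_stops(1/30, 3))     # 1/30 s -> 1/240 s (~1/250 on the dial)
print(nd_filter_transmission(10))       # a ten-stop ND passes 1/1024 of the light
```
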
Michael C
  • I wonder if cameras could benefit from interleaving some smaller pixels that would collect less light than most of the pixels in the grid, and using those to make inferences about the colors of over-saturated regions. If a pixel is reading (100%,100%,15%) but a pixel that's 1/10 as sensitive reads (30%,10%,1%) that would suggest that the pixel might be better represented as something closer to (100%,75%,50%) [try to keep hues accurate, but reduce saturation as overexposure increases]. – supercat May 14 '22 at 20:00
  • Smaller "pixels" (sensors do not have pixels, image files do) would have lower full well capacity and reach full charge with less light, not more. Just as a smaller bucket fills up with less rain than a larger one does. – Michael C May 14 '22 at 20:17
  • I think one could make smaller cells less sensitive by adding some partially-opaque material over them. The purpose of making the cells small would be to avoid unduly reducing the amount of sensor area available for normal light-gathering conditions. The output from smaller cells would be noisier than the output from large cells, but since they would only be used in excess-light conditions that shouldn't be a problem. – supercat May 14 '22 at 20:32
  • The hit to overall sensor efficiency, even if it only took up less than, say, 10% of total sensor area would be devastating and would drastically outweigh any benefit for those who can properly expose an image. 10% would wipe out the gains of sensor efficiency over the last decade plus several times over. – Michael C May 15 '22 at 17:24
  • Have things really not improved by substantially more than a quarter f-stop over the last decade? – supercat May 15 '22 at 20:48
  • I would think that dynamic range is often a bigger problem than overall sensitivity, and trading off a 10% reduction in overall sensitivity for a 2-3 f-stop increase in dynamic range would for many (though not all) purposes be a good trade-off. Further, if sensitivity were the goal, I would think a matrix that combined red, green, blue, and unfiltered pixels would be better than an RGB matrix. In an RGB matrix, about 2/3 of the light will be discarded, but adding unfiltered pixels would reduce that to about 1/2, while allowing good dynamic range (if unfiltered cells saturate, use RGB). – supercat May 15 '22 at 20:56
  • @supercat Pretty much. Most of the improvements are in the area of processing that can more intelligently identify and remove noise. – Michael C May 16 '22 at 04:09
  • @supercat Nowhere near 2/3 of light is discarded by a Bayer mask. The blue-violet, lime green, and yellow-orange filters are not that strong and allow plenty of overlap between them, just as our retinal cones do. That overlap is the only thing that allows a perception of "color" by our vision system and reproduction of color by our cameras. The "red" filter (there's no actual Red filter on a Bayer mask) does not block all light below about 590nm from passing through it, it only blocks some of it. Ditto for the other two filter colors. – Michael C May 16 '22 at 04:14
  • DR is important, but the way most camera enthusiasts use the term (shadow recovery capability) makes it much more important for those who don't know how to properly expose for the camera they're using. – Michael C May 16 '22 at 04:16
3

The blue and green pixels of your camera's sensor are still capturing some light from the LED. That's normal. Color is recovered from the differences between the RGB levels, but if the picture is overexposed where the LED is, that difference may be very small or none at all, so it looks more white.

Try controlling the exposure with your camera and take a darker photo. The LED should look more reddish, but the surroundings will be darker.

vsis
  • Some kinds of objects may show up on some cameras with a magenta, cyan, or yellow hue as a result of overexposure, even if their actual color is closer to red, green, or blue. For example, if an object whose red, green, and blue were in proportions 80:19:1 were overexposed by 5:1, the recorded RGB values would be roughly 100%:95%:5%. – supercat May 14 '22 at 19:51
  • Many debayer algorithms force pixels to white even if only some of the channels are saturated, to avoid weird looking colors in overexposed areas. – jpa May 15 '22 at 15:09
  • @jpa: Maybe that explains why I remember often seeing color effects on my old digital camera that I don't recall having seen lately. I'd thought it was simply that I hadn't been shooting in the kinds of lighting conditions that led to saturated (mainly) yellows, but maybe the camera algorithms have changed as well. – supercat May 15 '22 at 20:53
0

It's all about exposure. Your camera is metering for the background instead of the LED; because the background is much dimmer than the LED, exposing for it overexposes the LED and blows it out. Try shooting in controlled lighting and turn down your exposure. This should help.

0

The most convenient expedient in this case is probably to work with the raw file and set your raw converter's "Highlight reconstruction" method to "color propagation" (an option offered by some raw processors). That will make the best of this particular blown-highlight problem.
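
For illustration, here is a heavily simplified Python sketch using NumPy of the colour-propagation idea (it is not the actual algorithm of any specific raw converter): where a pixel has clipped channels but at least one usable one, the missing channels are rebuilt from the usable channel scaled by the colour of nearby unclipped pixels.

```python
import numpy as np

def propagate_color(img, clip=0.98):
    """img: HxWx3 linear data in [0, 1]; returns a rough reconstruction (values may exceed 1)."""
    out = img.astype(float)
    clipped = img >= clip                        # per-channel clipping mask
    good = ~clipped.any(axis=2)                  # pixels with no clipped channel at all
    if not good.any():
        return out                               # nothing to borrow a colour from
    ref = img[good].mean(axis=0)                 # average colour of the unclipped pixels
    for y, x in zip(*np.where(clipped.any(axis=2))):
        ok = ~clipped[y, x]                      # channels still holding real data
        if ok.any():
            scale = (out[y, x, ok] / ref[ok]).mean()   # brightness implied by good channels
            out[y, x, ~ok] = ref[~ok] * scale          # propagate the reference colour
    return out

# Toy example: the blown LED core gets its red hue back from an unclipped halo pixel.
img = np.array([[[1.00, 1.00, 0.60],     # core: R and G clipped, B still usable
                 [0.80, 0.10, 0.08]]])   # halo: unclipped, clearly red
print(propagate_color(img))
```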

user102869