
What exactly limits modern digital camera sensors in capturing light intensity beyond a certain point?

Gill Bates

3 Answers


What exactly limits modern digital camera sensors in capturing light intensity beyond a certain point?

In terms of the physical properties of the sensor itself:

Each photosite (a/k/a sensel, pixel well, etc.) can only free a finite number of electrons in response to photon strikes; the point at which no more electrons are available to be freed defines its full well capacity. It's not much different from film, in which full saturation is reached when every silver halide crystal in the emulsion already has enough 'sensitivity specks' to be transformed into atomic silver by the developer. The main difference is the shape of the response curve as each technology approaches full capacity. A digital sensel releases the same number of electrons per photon¹ until full well capacity is reached. As film nears full saturation, more and more light energy (or development time) is needed to affect the remaining silver salts.
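To make that difference in response curves concrete, here is a minimal sketch with entirely made-up numbers (the full-well capacity, quantum efficiency, and the film-style shoulder curve are illustrative assumptions, not measurements of any real sensor or emulsion):

```python
# Minimal sketch: an idealized digital sensel (linear response, hard clip at full well)
# versus a film-like saturating curve. All numbers here are made up for illustration.
import math

FULL_WELL = 50_000          # hypothetical full-well capacity, in electrons
QUANTUM_EFFICIENCY = 0.5    # hypothetical fraction of photons that free an electron

def digital_response(photons: int) -> int:
    """Same number of electrons per converted photon until the well is full, then clip."""
    return min(int(photons * QUANTUM_EFFICIENCY), FULL_WELL)

def film_like_response(photons: int) -> float:
    """Illustrative shoulder: each additional photon has less effect near saturation."""
    return FULL_WELL * (1.0 - math.exp(-photons * QUANTUM_EFFICIENCY / FULL_WELL))

for p in (10_000, 50_000, 100_000, 200_000, 400_000):
    print(f"{p:>7} photons  digital: {digital_response(p):>6}  film-like: {film_like_response(p):>8.0f}")
```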

In terms of recording the analog voltages as digital data:

When the analog voltage from each photosite (a/k/a 'sensel', 'pixel well', etc.) is read from the sensor, amplification is applied to the signal. The camera's ISO setting determines how much amplification is applied: for each stop increase of ISO, twice as much amplification is applied. If the camera's "base" sensitivity is used (for simplicity's sake, let's call ISO 100 an amplification of 1.00X, in which input voltage equals output voltage), then photosites that reached full well capacity should result in a maximum voltage reading on the post-amplification analog circuit feeding the ADC. If ISO 200 (2X amplification) is used, the voltage from any sensel that reached one-half (1/2) full well capacity or more is amplified to the maximum voltage allowed on the post-amplification circuit. ISO 400 (4X amplification) results in any sensel that reached one-quarter (1/4) full well capacity or more being recorded at the maximum value, and so on.

Any amplification greater than 1.0X will apply a "ceiling" lower than the full well capacity of each photosite. When high amplification is used, signals weaker than full well capacity also reach the maximum voltage capacity of the circuits downstream from the amplifier. Any pre-amplified signal level that is strong enough to "peg the meter" after amplification is indistinguishable from any other pre-amplified signal level that will also "peg the meter."
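Here is a rough sketch of that ceiling effect, with voltages normalized so that 1.0 is the maximum the downstream circuit can carry and the gain modeled as exactly 2X per stop, which is the simplification used above:

```python
# Sketch of how analog gain lowers the effective clipping ceiling.
# Voltages are normalized so that 1.0 is the maximum the downstream circuit can carry.

def amplified_signal(fraction_of_full_well: float, iso: int, base_iso: int = 100) -> float:
    """Apply the ISO gain (2x per stop above base ISO) and clip at the circuit maximum."""
    gain = iso / base_iso                        # e.g. ISO 400 -> 4.0x
    return min(fraction_of_full_well * gain, 1.0)

# At ISO 400, anything at or above 1/4 of full well is indistinguishable from full well:
for fill in (0.20, 0.25, 0.50, 1.00):
    print(f"well {fill:.2f} full -> output {amplified_signal(fill, iso=400):.2f}")
```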

When these amplified analog signals are converted to digital data by the analog-to-digital converter (ADC), signals at the maximum voltage capacity of the circuit are assigned the maximum value allowed by the bit depth of the conversion. If converted to 8-bit values, voltages are assigned a value between 0 and 255, with the maximum signal allowed by the analog circuit feeding the ADC recorded as 255. If converted to 14-bit values, voltages are assigned a value between 0 and 16,383, with the maximum signal recorded as 16,383, and so on.
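A tiny sketch of that quantization step (normalized voltages, bit depth as a parameter):

```python
# Sketch of the analog-to-digital conversion step: a clipped analog voltage
# (normalized 0..1) is mapped onto the integer codes allowed by the ADC bit depth.

def adc(voltage: float, bits: int) -> int:
    """Quantize a normalized voltage to an n-bit code; anything at the ceiling gets the max code."""
    max_code = (1 << bits) - 1          # 255 for 8-bit, 16383 for 14-bit
    code = round(voltage * max_code)
    return min(max(code, 0), max_code)

print(adc(1.0, bits=8))    # 255
print(adc(1.0, bits=14))   # 16383
print(adc(0.5, bits=14))   # 8192 (roughly half scale)
```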

The takeaway for when you are actually taking pictures:

You'll get the greatest difference, and the finest gradations, between the brightest and darkest² elements in the scene you are photographing when the amplification is at the camera's "base" sensitivity and the shutter time and aperture combine to give the brightest elements in the scene just enough exposure to be at or near full saturation. Using a higher ISO value is useful when it is not possible to expose for that long, or with a wide enough aperture, to approach full saturation of the highlights for the image you wish to make. But using a higher ISO comes at a price: the total dynamic range is reduced by the higher amplification of the electrical signals coming off the sensor.

So why don't we always shoot at ISO 100, or whatever the camera's base ISO is, and then push the exposure later in post? Because doing it that way tends to amplify "noise" in the image even more than shooting at higher ISO values does. How much more depends on how much and where noise reduction is done to the signal. But reducing the influence of noise by applying noise reduction to the analog voltages coming off the sensor also comes with a price - very dim point sources of light are often filtered out as "noise". That's why some cameras with very good low light/high ISO performance, in terms of noise reduction, are also known as "star eaters" by astrophotographers.
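A toy simulation of one common version of that explanation (all of the noise levels below are made up): at high ISO the analog gain is applied before the downstream readout noise is picked up, while pushing a base-ISO file in post multiplies that downstream noise as well.

```python
# Toy simulation (made-up numbers): why pushing a base-ISO exposure in post tends to be
# noisier than using a higher ISO in camera, under a simplified two-noise-source model.
import random
from statistics import pstdev

SENSOR_SIGNAL = 0.05      # dim scene: 5% of full well (hypothetical)
UPSTREAM_NOISE = 0.002    # noise added before the amplifier (hypothetical)
DOWNSTREAM_NOISE = 0.004  # noise added after the amplifier / at readout (hypothetical)
GAIN = 8.0                # three stops, e.g. ISO 800 vs. ISO 100

def high_iso_sample() -> float:
    """Gain applied in camera, before the downstream noise is picked up."""
    return (SENSOR_SIGNAL + random.gauss(0, UPSTREAM_NOISE)) * GAIN + random.gauss(0, DOWNSTREAM_NOISE)

def pushed_base_iso_sample() -> float:
    """Unity gain in camera; the 8x push in post multiplies every noise source."""
    raw = SENSOR_SIGNAL + random.gauss(0, UPSTREAM_NOISE) + random.gauss(0, DOWNSTREAM_NOISE)
    return raw * GAIN

n = 10_000
print("high ISO noise:        ", round(pstdev(high_iso_sample() for _ in range(n)), 4))
print("base ISO + push noise: ", round(pstdev(pushed_base_iso_sample() for _ in range(n)), 4))
```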

¹ There is a slight variation in the energy contained in a photon based on the frequency at which it is oscillating. Photons oscillating at lower frequencies release slightly less energy when striking the sensel than photons oscillating at higher frequencies. But for photons oscillating at a specific frequency/wavelength, the amount of energy released when striking the bottom of a pixel well is the same until full well capacity is reached.
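(For reference, the energy of a photon of frequency ν and wavelength λ is E = hν = hc/λ, where h is Planck's constant and c is the speed of light, so a blue photon carries more energy than a red one.)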

² We call the difference between the darkest and brightest elements that can be recorded by a sensor (or film) the dynamic range of the recording medium. For each stop of increase in sensitivity (ISO) with a digital camera, the linear voltage difference between "zero" and "full saturation" is halved. When converted to logarithmic scales, such as 'Ev', doubling the sensitivity results in a reduction of one 'stop' of dynamic range (all else being equal, which it rarely ever is).
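A back-of-the-envelope version of that arithmetic, using arbitrary made-up units and assuming, for simplicity, a noise floor that stays constant:

```python
# Back-of-the-envelope dynamic-range estimate (all numbers hypothetical).
from math import log2

noise_floor = 1.0            # arbitrary units
full_scale_at_base = 4096.0  # arbitrary units at base ISO

for stops_above_base in range(5):
    full_scale = full_scale_at_base / (2 ** stops_above_base)   # ceiling halves per stop
    print(f"+{stops_above_base} stops of ISO -> DR ~ {log2(full_scale / noise_floor):.1f} stops")
```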

Michael C
    and note that, paradoxically, digital clipping appears in a medium which is more analog in nature. film can only produce binary image (a given particle is either activated and instantly saturated or does not react at all), while each of the digital camera pixels recognizes a (quasi-analog) range of the light intensity. yet the power of statistics and the huge number of individual binary cells in a film makes it behave more "analog" than digital sensors. – szulat Oct 29 '18 at 10:31
  • I would add to this answer an explanation of the pixel data being (analog) amplified as the lines of pixels are shifted off the sensor; if the amp level is set too high (misjudged ISO setting) then the ADC inputs will be saturated. I believe ADC is done at higher bit depth to mitigate this issue, but there are still limits where data will be lost. As the RGB channels are done separately, this will also mean loss of colour information (whiteout). – Phil H Oct 29 '18 at 16:23
  • @Phil Good point about ADC and the limits placed by amplification. RGB channels are not done separately during ADC, though. At that point everything is monochrome: one single value per sensel. – Michael C Oct 29 '18 at 17:00
  • @szulat Well, kind of, but not exactly. The 'sensitivity specks' formed on the surface of individual crystals by the impact of photons do not convert the entire crystal until the film is brought into contact with developer. How long the developer is allowed to react with the emulsion determines how many 'sensitivity specks' (a/k/a how many 'trapped electrons') a crystal needs before it is converted to atomic silver. Until then, the crystal is still mostly silver halide. That's why we call an undeveloped negative a 'latent' image. – Michael C Oct 29 '18 at 17:37
  • Wow! Complex topic nicely and simply explained. Well done! – FreeMan Oct 29 '18 at 18:18
  • @PhilH, "as the lines of pixels are shifted off the sensor" - what does it mean? – Gill Bates Oct 30 '18 at 19:21
  • @GillBates That's an imprecise way of saying "as the sensor is read out pixel by pixel." – Michael C Oct 31 '18 at 03:34
  • My apologies, the shifting is a precise term specific to CCD sensors, which I hadn't appreciated is incorrect for CMOS sensors in modern cameras. The general point about amplification dependent on ISO levels is still true, only on a pixel level rather than a sensor level. @MichaelClark: CCDs use a shift register for the pixel charges, and shift off one column at a time before then shifting that column row-by-row through the ADC. That's what I was referring to. – Phil H Oct 31 '18 at 12:05
  • @PhilH CCD sensors do not shift pixels anywhere. They stay right on the sensor. They shift the resulting data, in the form of voltages, off the sensor in a sequential manner, as do CMOS sensors. The main difference is that CCD sensors can start and stop the time when a sensor is "active" (i.e. collecting photons and converting them to voltages) globally, whereas CMOS sensors can only turn each photosite on/off sequentially. – Michael C Oct 31 '18 at 19:46
  • Sorry, perhaps I wasn't precise enough - the charge (electrons) in each photosite is what gets shifted off. This is why CCDs produce line artefacts; a given photosite is so over-exposed that the charge floods the other photosites. The charge is not converted to a voltage until it is eventually shifted through an amp before the ADC. This is the root of the 'charge-coupled' part of the name. By contrast each CMOS pixel has its own amplifier (source follower transistor). – Phil H Nov 01 '18 at 11:50
  • @PhilH Please explain the difference between a charge and a voltage. (They're the same thing - electrical energy in the form of electrons). – Michael C Nov 01 '18 at 13:15
  • @MichaelClark: Photocells are capacitive; they store charge. You can measure the amount of charge stored in a capacitor by either detecting the potential difference (voltage) across the cell, or by providing a route to ground which discharges the capacitor through a resistor, producing a voltage decay curve as the charge drains. The voltage is the effect, the charge is the cause. – Phil H Nov 01 '18 at 15:34

Adding to Michael Clark's excellent answer (describing full-well capacity clipping and ADC clipping), there are several other points in a digital photography pipeline where clipping can occur:

  • For non-RAW images, during on-device color correction/automatic gamma adjustment prior to compression, and during the compression itself.

    When you compress an image as JPEG or MPEG, the hardware truncates the bit depth to whatever the compressed medium supports, which is typically much less than the hardware bit depth. Because of that truncation, values near both brightness extremes get lost.

    Prior to compression, your camera applies color correction and gamma adjustments that can affect the effective dynamic range that fits within the limited bit depth provided by the compressor. For example, when recording video in Canon Log mode, the darkest and lightest parts of the scene are mathematically pulled towards the center so that the effective dynamic range increases significantly, and fewer parts of the image will be clipped on either end of the range (a simplified sketch of this idea appears after this list).

  • During post-processing. When post-processing significantly alters the brightness of an image, early stages of the computation can push values beyond the range that can be correctly represented by the number of bits used to hold them. While rare, when this does occur it can cause clipping even in areas of the photo that are not actually clipped in the original image.

  • During color gamut correction while printing or displaying the image. When performing color correction, sometimes you can get values that fall outside the gamut that can be accurately reproduced by the output medium. At that point, the color engine has to decide what to do with those out-of-gamut values. This also effectively results in clipping, though it visually looks somewhat different from what most people think of when they talk about clipping, usually resulting in things looking the wrong color.
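As a rough sketch of the log-encoding idea mentioned above: the curve and constants below are generic illustrations for an 8-bit file, not Canon Log's actual formula, but they show why deep shadows keep far more distinct codes under a log curve than under a straight linear mapping.

```python
# Rough sketch of why a log-style curve lets more dynamic range survive an 8-bit file.
# The curve and constants here are generic illustrations, not any camera's real formula.
from math import log2

BITS = 8
MAX_CODE = (1 << BITS) - 1

def encode_linear(x: float) -> int:
    """Straight linear mapping of a scene-linear value (0..1) into 8-bit codes."""
    return min(round(x * MAX_CODE), MAX_CODE)

def encode_log(x: float, stops: float = 12.0) -> int:
    """Generic log mapping: spread 'stops' stops of scene range across the 8-bit codes."""
    if x <= 0:
        return 0
    code = (log2(x) + stops) / stops * MAX_CODE   # darkest representable value maps to 0
    return min(max(round(code), 0), MAX_CODE)

# Deep shadows get almost no linear codes but plenty of log codes:
for x in (0.001, 0.01, 0.1, 1.0):
    print(f"scene {x:6.3f}  linear -> {encode_linear(x):3d}   log -> {encode_log(x):3d}")
```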

dgatwood

The easy empirical explanation:

Look at a very bright bulb: if the light is bright enough, you won't be able to see the inside of the bulb, because no matter how far your pupils close there is still too much light striking your retina. The retina saturates, and the information that reaches your brain is clipped (you see bright light but not the details within it). That is one reason why, if you were to look directly at the midday sun in a clear sky (you should not try it), you would see only an intense light rather than the sun itself. (Beware that trying this without proper protection can permanently harm your eyes, or your photographic equipment, lenses, and sensor.)

Any sensor behaves the same way (in your camera or otherwise). Once the signal (in this case light) is too high for its capacity and reaches the saturation level, it clips any additional information: it is unable to discern any more signal and passes on just a flat, maxed-out signal without any valuable information in it.

abetancort