6

I’m trying to wrap my head around the relation between exposure, dynamic range, stops of light, and middle gray. I’m going to ask several questions on that topic. Some of them may be stupid or incorrectly posed. Please bear with me; I’m really lost, substantially lacking some important knowledge, and I have no idea where to start.

  • How many stops of light are there between the black point, RGB(0, 0, 0), and the white point, RGB(255, 255, 255)? Is that the same as the camera’s dynamic range as measured by DxOMark?
  • How many stops are there between the middle gray, which is RGB(119, 119, 119), and the white point?
  • Does the distance in stops between the middle gray and the white point depend on my camera model?
  • How can I measure the actual distance in stops between the middle gray and the white point at home?
  • How can I calculate the theoretical distance between the middle gray and the white point based on a camera’s specs and someone else’s measurements?
  • In general, how to calculate the distance in stops between an RGB(n,n,n) gray and an RGB(m,m,m) gray?
  • How to add or subtract a specific number of stops to an RGB(n,n,n) gray without Lightroom?
  • Where can I learn all of this on my own? Any book or online course recommendations?

I usually shoot in Adobe RGB and then convert to sRGB for the web. Does the answer to any of the above depend on the target color space of the photo?

mattdm
Till Ulen
  • This article is also valuable: http://www.cambridgeincolour.com/tutorials/dynamic-range.htm – Till Ulen Apr 15 '15 at 18:14
  • Not only does this vary by camera (otherwise they would all have the same DR), it also varies by settings (Contrast, Tone, etc.). So I'm going to say impossible to answer. – Itai Apr 15 '15 at 22:04
  • @PhilipKendall I was shy because it's a partial answer... But you're probably right. – Fumidu Apr 16 '15 at 07:43
  • @Fumidu Don't be shy :-) A partial answer is better than no answer. – Philip Kendall Apr 16 '15 at 07:46

3 Answers

9

Intro
Based on your questions, I get the impression that you're missing one important point, and that is the difference between:

  • light perception in the real world,
  • light perception in the world as humans perceive it,
  • light perception as your camera's sensor records it,
  • light perception as image formats and your computer represent (or process) it.

The real world has a huge number of stops between black point and white point. Distant stars emit only a few photons per second at us, while the sun blasts about 10^17 photons per second at us. That's about 57 stops(!). Human eyes can see around 10 to 14 stops of dynamic range at any moment (source) and around 24 stops when we have the time to adjust our eyes (source). The sensors of DSLRs are just below that (8-11 stops). Smaller sensors often have a lower dynamic range. Digital image processing at 8 bits has exactly 8 stops of dynamic range.

Trying to answer your questions
I'll try to answer your questions as well as I can. My objective is to give you insight rather than just a straight answer, because I think that best fits the intent of your question(s).

  • How many stops of light are there between the black point, RGB(0, 0, 0), and the white point, RGB(255, 255, 255)? Is that the same as the camera’s dynamic range as measured by DxOMark?

There are 8 stops between RGB 0 and RGB 255 if your gamma is 1. For example, if I use Photoshop to brighten an RGB (119, 119, 119) color to RGB 255 using the Exposure function, I need to add +2.42 stops. But I need to underexpose by -11.48 before I get to RGB (0, 0, 0). If you have the Info panel open and your color picker at the patch of color while sliding the exposure meter, you'll see that the RGB values change faster when adding exposure and slower when sliding the exposure down. As mentioned in @Fumidu's answer, this is because of the default gamma value of 2.2.

  • How many stops are there between the middle gray, which is RGB(119, 119, 119), and the white point?

As you are talking about RGB values, you are in the computer-processing world. Stops are translations from the real world (twice as much light) to digital images. Bottom line: this depends on how your computer (and your imaging software) handles "exposure". In other words: this depends on the gamma. My experiment in Photoshop resulted in +2.42 stops, but that's how Photoshop handles gamma and exposure. Based on the idea that one stop means twice as much light, and if you assume a gamma of 1 (double the light means double the RGB values), it's ( ln(255) - ln(119) ) / ln(2) = 1.1 stops (rounded to 2 digits). You can simply multiply by the gamma if it's not 1. Based on gamma 2.2, it's 2.2 * ( ln(255) - ln(119) ) / ln(2) = 2.42 stops, which matches my experimental outcome in Photoshop.

  • Does the distance in stops between the middle gray and the white point depend on my camera model?

Yes. That depends on two things:

  • The dynamic range of your camera
  • The way your camera handles ISO in relation to dynamic range

If your dynamic range is 10 stops, you have 5 stops below mid gray and 5 stops above. But based on the ISO value, your camera might give some more priority to the shadows and offset the mid gray, so that for example at ISO 800, you have 6 stops below mid gray and 4 stops above it (to capture more shadow detail at the expense of the risk of highlight clipping). Here is an article explaining this for a video camera, but digital photo cameras do exactly the same.

  • How can I measure the actual distance in stops between the middle gray and the white point at home?

Set up your camera on a tripod or steady surface. Put a piece of white paper in front of the camera. Make sure the piece of paper is evenly lit and that the light source is constant and preferably quite white (or adjust your white balance). Put your camera in manual mode, fix the ISO at 100, put the aperture at some reasonable value (5.6 or 8 would be great) and start taking shots at different shutter speeds. Measure the pixel brightness (RGB value) and note how many stops there are between (almost) black exposures (RGB value < 10) and (almost) bright exposures (RGB value > 250). There you have the dynamic range of your camera.
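The counting step can be done on paper: every doubling of the exposure time is one stop, so the dynamic range is the log2 ratio of the two extreme shutter speeds. A minimal Python sketch (the shutter speeds here are made-up illustration values, not measurements):

```python
from math import log2

# Hypothetical outcome of the white-paper test: the slowest shutter speed
# that still reads almost black (RGB < 10) and the fastest one that still
# reads almost white (RGB > 250). These values are invented for illustration.
t_almost_black = 1 / 4000  # seconds
t_almost_white = 1 / 8     # seconds

# Each doubling of exposure time is one stop:
dynamic_range = log2(t_almost_white / t_almost_black)
print(round(dynamic_range, 1))  # ~9 stops for these made-up numbers
```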

This article explains it in a bit more detail.

  • How can I calculate the theoretical distance between the middle gray and the white point based on a camera’s specs and someone else’s measurements?

As a rule of thumb: 5 or 6 stops would be a good guess for DSLRs (4 or 5 for compact cameras). If you know the dynamic range, it's half the dynamic range. Subtract one stop for high ISO (800-6400) and 2 stops for extremely high ISO (6400 and up).

The problem is that dynamic range is often not part of a camera's specifications. Also, the way a camera handles high(er) ISO ratings is part of its processing magic and often not publicly available. Long story short: a general educated guess comes quite close; calculating it exactly is, because of unavailable specifications, practically impossible.

  • In general, how to calculate the distance in stops between an RGB(n,n,n) gray and an RGB(m,m,m) gray?

stops = gamma * ( ln(n) - ln(m) ) / ln(2)
ln is the natural logarithm, but you can use any logarithm base if you prefer; the bases cancel in the ratio, so the result is the same.
So from 119 to 255 it's n = 255, m = 119, gamma = 2.2, giving stops = 2.42.
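This is easy to put into code. A minimal Python sketch of the formula, assuming the same simple power-law gamma of 2.2 (the function name is my own):

```python
from math import log

def stops_between(n, m, gamma=2.2):
    """Stops between gamma-encoded gray values m and n (0-255)."""
    return gamma * (log(n) - log(m)) / log(2)

print(round(stops_between(255, 119), 2))  # 2.42 stops from middle gray to white
```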

  • How to add or subtract a specific number of stops to an RGB(n,n,n) gray without Lightroom?

Using the above formula, you can use any software or programming tool to do this. Not sure what you're looking for.
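For example, inverting the formula gives the new gray value directly: new = n * 2^(stops / gamma). A small Python sketch (the clipping and rounding choices are mine):

```python
def apply_stops(n, stops, gamma=2.2):
    """Add (or, with negative stops, subtract) exposure stops to a
    gamma-encoded gray value n, clipped to the 0-255 range."""
    value = n * 2 ** (stops / gamma)
    return min(255, max(0, round(value)))

print(apply_stops(119, 2.42))  # 255: middle gray plus ~2.42 stops is white
print(apply_stops(119, -1))    # 87: one stop below middle gray
```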

  • Where can I learn all of this on my own? Any book or online course recommendations?

This is highly personal, but a few of my favorites are:

agtoever
  • Thanks a lot! That gives most of the info I needed. By “theoretical distance” I meant, how to predict the outcome of the home experiment from the previous question on paper, without actually performing the experiment. – Till Ulen Apr 16 '15 at 10:52
  • Ah. I understand. I edited the answer accordingly. – agtoever Apr 16 '15 at 11:03
1

I assume the gray card is 18%. Then compared to a maximum reflectance of 100%, the linear stops are each half, or in steps of 100%, 50%, 25%, 12.5%, 6.25%, etc. So 18% would be around 2.5 stops down.

But it will NOT look like that in your histogram, because all RGB data in camera histograms is gamma encoded, which is a different story. In a gamma histogram, one stop down is closer to 3/4 scale than to 1/2 scale (73%, but it will vary a bit with camera corrections, like white balance and contrast, etc).

On a 0..1 normalized scale, 18% with gamma would be 0.18 ^ (1/2.2) = 0.46. And 46% x 255 = 117, a bit less than half scale. Not linear there, but still about 2.5 stops down.
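These numbers are easy to verify; a quick Python check of the arithmetic above:

```python
from math import log2

# 18% gray, gamma 2.2: position on the gamma-encoded 0..1 scale
encoded = 0.18 ** (1 / 2.2)
print(round(encoded, 2))         # 0.46
print(round(encoded * 255))      # 117, a bit below half scale

# Linearly, 18% is about 2.5 stops below 100% white
print(round(log2(1 / 0.18), 1))  # 2.5
```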

People tend to not realize histograms are gamma encoded, instead of an idealized linear scale. But we never see a linear scale, all of our RGB data is gamma encoded, and the histogram shows it.

WayneF
1

One stop is a factor of 2 in light (-1 stop => half the light, +1 stop => twice the light). So a byte (8 bits) has a dynamic range of 8 stops. That's less than a good camera, which can have up to 13 or 14 stops of dynamic range.

So how do we deal with this problem? It is impossible to put 13 bits of a raw file into the 8 bits of a JPEG file without losing some information. Gamma compression is used to keep the most relevant pieces of information relative to how the human eye works.
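One way to see the effect is to count how many of the 256 output codes land in a given stop. A small Python sketch (my own illustration, treating the encoding as a pure power law):

```python
def darkest_stop_codes(gamma):
    """Count 8-bit codes whose decoded linear value falls in a deep
    shadow stop: here, 7 stops below full scale."""
    count = 0
    for code in range(1, 256):
        linear = (code / 255) ** gamma  # decode back to linear light
        if linear <= 2 ** -7:
            count += 1
    return count

print(darkest_stop_codes(1.0))  # 1: a linear 8-bit encoding leaves one code there
print(darkest_stop_codes(2.2))  # 28: gamma 2.2 keeps many more shadow codes
```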

en.wikipedia.org/wiki/Gamma_correction

WayneF's answer provides a good example of the calculation for middle gray.

Also, you have to understand that a camera sensor responds linearly to light, but the eye responds non-linearly, and that's another reason why gamma compression is used.

Fumidu
  • A raw binary number is an abstract quantity; it has no relation to photographic "stops". An 8-bit number can convey 1 stop, 8 stops, or 80 stops, depending on the interpretation. – szulat Apr 16 '15 at 12:25
  • You're right, it depends on many things, and mostly on Gamma value. However, 8-bit can not store more than 8 "orders of magnitude" of anything without losing some information. How should we call that notion of "orders of magnitude"? – Fumidu Apr 16 '15 at 13:00
  • 1
    Saying that 8-bit number stores 8 stops without losing information is meaningless, too, because we would like to know at what resolution (see: color banding) is the given number of stops saved (there is no "natural" resolution, and you can always increase or decrease it). And actually, the "natural" interpretation of our raw 8-bit number has infinite number of stops, not 8 stops, because we have 0-255 and any number above zero is infinitely more bright than zero brightness... of course the color resolution varies at lot ;-) – szulat Apr 16 '15 at 20:36
Gamma is NOT about the human eye, and gamma does NOT increase the dynamic range. The gamma data is ALWAYS decoded before any human eye sees it. Gamma is/was done to correct CRT monitors, whose losses decode it automatically, leaving no gamma encoding over. LCD monitors are linear and don't need gamma, but all RGB data is already encoded, so LCD monitors simply decode it before showing it. – WayneF Apr 18 '15 at 03:35
  • And gamma is an exponential function, normalized to 0..1, and 0 to any exponent is 0, and 1 to any exponent is 1, so the end points CANNOT change, so dynamic range CANNOT increase. Plus, it is decoded back into 8 bits before we see it anyway. – WayneF Apr 18 '15 at 03:45
  • First line of Wikipedia - Gamma_Correction - Explanation : "Gamma encoding of images is used to optimize the usage of bits when encoding an image, or bandwidth used to transport an image, by taking advantage of the non-linear manner in which humans perceive light and color." It is also true that it is needed to correct CRT monitors, but it is also definitively linked to the human eye. – Fumidu Apr 20 '15 at 09:18
Sorry, Wikipedia is obviously wrong on that point. A few do say that, but it's always wrong. The human eye absolutely NEVER sees any gamma data. The CRT losses decode it, the only purpose being so that the eye sees the original image levels before the CRT losses. LCD monitors are linear, and don't need gamma, but for image compatibility, the LCD chip just decodes and discards the gamma. They must do this, since all RGB images always were gamma encoded for CRT, and it MUST be eliminated before the eye sees it. If the eye were to see gamma data, the low values would be intensely too bright. – WayneF Apr 22 '15 at 22:22
  • Ok. Do you have any sources about the human eye sensitivity to light? Most links talk about color sensitivity, but not intensity. The only link I found is this one : http://www.telescope-optics.net/eye_intensity_response.htm and it has these equations: S=2.3klog10I+C and S=kI^a with S for Sensation, and I Intensity of light. Both equations are approximations of the rather complex response of the eye. What do you think? – Fumidu Apr 23 '15 at 08:01
  • And also : http://www.cambridgeincolour.com/tutorials/gamma-correction.htm It's a very good explanation, better than wikipedia's. – Fumidu Apr 23 '15 at 12:27
And it is wrong too. Too many that don't know explaining anyway. The eye does have an inverse response similar to gamma, but that is purely coincidental. The eye NEVER can see gamma data. The brain handles the eye, and the CRT handles the gamma data. Gamma is NOT involved when the eye sees the original scene, and gamma just corrects the CRT losses to reproduce it again the same original way (no gamma). The eye expects linear data, and the eye NEVER sees gamma data. Gamma is done for the CRT, NOT for the eye. Think about that more, it is obvious. – WayneF Apr 25 '15 at 14:15
  • I think we both agree on that. It is obvious that the eye does not see gamma data and always expects linear information. BUT if you only have 8 bits, it's better to encode the data with gamma. Gamma encoded images store tones more efficiently. Since gamma encoding redistributes tonal levels closer to how our eyes perceive them, fewer bits are needed to describe a given tonal range. Otherwise, an excess of bits would be devoted to describe the brighter tones (where the camera is more sensitive), and a shortage of bits would be left to describe the darker tones (where the camera is less sensitive) – Fumidu Apr 27 '15 at 08:10
  • @Fumidu : Gamma is Not important to 8 bits, because monitors are hopefully going to decode it to same original linear values before our eye sees it. Cheaper LCD monitors decode it into 6 bits of original linear values, and then show it. CRT decoding is analog, but still unimportant (other than correcting CRT losses), because the only plan is for the eye to hopefully see decoded linear original data. If the monitor output does not match the original linear scene, it is a reproduction error. – WayneF Apr 29 '15 at 17:14
  • Pesky short text limits. :) Gamma cannot increase dynamic range either, because both ends (normalized 0..1) raised to any exponent stay the same 0 and 1 (gamma cannot cause clipping). Gamma is done only to correct CRT losses, pure and simple, true in 1940 too. But still maintained today as a no-op, only for data compatibility now. – WayneF Apr 29 '15 at 17:14
  • Please re-read this page: http://www.cambridgeincolour.com/tutorials/gamma-correction.htm especially the part with the images of banding on a linear gradient. Indeed, gamma does not increase dynamic range. But it allows storing the linear information of the sensor in a way that better suits the eye: gamma = less banding in dark areas. At first you're right, gamma was necessary to correct the CRT. Due to an odd bit of engineering luck, the native gamma of a CRT is 2.5, almost the inverse of our eyes. – Fumidu May 04 '15 at 14:30