
I am currently planning to calibrate my light meter to a camera using Sekonic's Data Transfer Software. The software also measures/calculates/estimates the dynamic range of the camera by analyzing a set of over- and underexposed images of a test chart. It got me wondering how this differs from testing the dynamic range using Xyla charts?

And how is the software even able to estimate this dynamic range from an 8-bit still image, of a camera recording at 12-bit log, with an advertised dynamic range of around 14 stops?

Here are some screenshots from the software's manual:

[screenshots omitted]

vannira

1 Answer


A Xyla chart determines DR from a single image; bracketing multiple images (at a fixed/base ISO) attempts to overcome display limitations, similar to HDR bracketing.

An 8-bit JPEG with sRGB and gamma 2.2 encoding can record/display ~12 stops of DR. And a wider color space (Adobe/ProPhoto) with gamma 2.2 encoding can display over 17 stops of DR (8 × 2.2).
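That last figure can be sanity-checked with a few lines of Python. This is my own illustrative sketch, assuming a pure power-law gamma of 2.2 (real sRGB has a short linear segment near black, so the exact number shifts slightly):

```python
import math

# Smallest and largest nonzero linear values an 8-bit, gamma-2.2
# encoded file can represent, expressed as a stop range.
bits = 8
gamma = 2.2
max_code = 2**bits - 1  # 255

# Decoding: linear = (code / max_code) ** gamma
darkest = (1 / max_code) ** gamma            # code value 1
brightest = (max_code / max_code) ** gamma   # code value 255 -> 1.0

stops = math.log2(brightest / darkest)  # = gamma * log2(255)
print(f"~{stops:.1f} stops")            # ~17.6, i.e. roughly 8 x 2.2
```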

If you have a sensor capable of 14 stops (at base ISO) and were using sRGB JPEGs to measure it, that would require two images, one recording 2 stops more light than the other... because the recording/display limitation discards 2 stops.
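To make that concrete, here is a hypothetical sketch (my own illustration, reusing the 14-stop/12-stop numbers above) of how two frames bracketed 2 stops apart cover the full sensor range:

```python
# Hypothetical sketch using the numbers from the answer: a 14-stop
# sensor measured through ~12-stop sRGB JPEGs needs two frames.

sensor_stops = 14   # sensor DR at base ISO (stop 0 = deepest shadow)
jpeg_stops = 12     # stops a single sRGB JPEG preserves (per the answer)

# Frame A at the metered exposure keeps the brightest 12 stops:
frame_a = set(range(sensor_stops - jpeg_stops, sensor_stops))  # stops 2..13

# Frame B, exposed 2 stops brighter, records 2 stops deeper into the
# shadows but clips the top 2 stops of highlights:
frame_b = set(range(0, jpeg_stops))                            # stops 0..11

print(len(frame_a | frame_b))  # 14 -- together they span the full range
```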

Steven Kersting
  • "An 8 bit jpeg with sRGB and gamma 2.2 encoding can record/display ~ 12 stops of DR." - by what definition of DR? – Euri Pinhollow Feb 15 '24 at 16:18
  • @EuriPinhollow, stops of light, min/max. It is accomplished by compressing the bits between the full stops as recorded; i.e., there is a lot of data in the last stop of DR that is indistinguishable to a human. That's what a 2.2 gamma curve does... it extends the displayed dynamic range by a factor of 2.2 (as limited by the color space). – Steven Kersting Feb 15 '24 at 16:37
  • Or you could say it compresses the recorded DR data (bits) into the 8-bit limitation without discarding/clipping the extremes... – Steven Kersting Feb 15 '24 at 16:44
  • I see, so it's like using multiple small buckets to measure the volume of a bigger bucket? However, I am evidently still a bit confused about the exhibition of a, let's say, 14-bit HDR image on an SDR 8-bit display. The 14-bit depth, to my understanding, is what allows the image to display the high dynamic range, but when I put this image into 8-bit, how does the image differ? Does the HDR image clip at the highlights (if the HDR exceeds the 8-bit gamma encoding), or does it retain the highlights just not faithfully, in a kind of washed-out image? – vannira Feb 15 '24 at 20:54
  • Dynamic range is just the difference between min/max, or recorded black and white... even if that is 14 stops (bits) apart in raw capture, it only requires 2 bits to display it within the capability of the display device (black/white). HDR is actually typically 32-bit floating point, then tone mapped to display on an 8-bit RGB display. If it is not tone mapped (gamma curve) then it would be quite flat. In linear raw sensor capture it requires 1 bit per stop recorded, but once converted to purely digital data there is a lot of extra data that can be left out of that 14-bit capture (see the sketch after these comments). – Steven Kersting Feb 15 '24 at 21:46
  • Also note that the file can contain/show more DR than the display is capable of... in that case the display clips what it cannot duplicate. – Steven Kersting Feb 17 '24 at 15:41
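To illustrate the bits-per-stop point raised in the comments, here is a rough sketch (my own, not Sekonic's method) that counts how many 8-bit code values land in each stop below clipping, for linear versus gamma-2.2 encoding:

```python
import math

# Rough sketch (my own illustration): how many 8-bit code values fall
# into each stop below clipping, for linear vs. gamma-2.2 encoding.
# A pure power-law gamma is assumed; real sRGB differs near black.

def stop_of(code, gamma):
    # Decode an 8-bit code value back to linear light, then bucket it
    # by stop index (0 = brightest stop, 1 = one stop down, ...).
    linear = (code / 255) ** gamma
    return int(-math.log2(linear))

for name, gamma in (("linear", 1.0), ("gamma 2.2", 2.2)):
    counts = {}
    for code in range(1, 256):          # skip 0 (pure black)
        s = stop_of(code, gamma)
        counts[s] = counts.get(s, 0) + 1
    top4 = {s: counts[s] for s in sorted(counts)[:4]}
    print(name, top4)
```

With linear encoding, half of all code values sit in the single brightest stop; gamma encoding redistributes them so the shadows keep usable precision, which is how 8 bits of code values can span more than 8 stops of scene brightness.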