
I want to buy my fourth digital camera, and the final decision is between a recent Panasonic Lumix G and an Olympus OM-D E-M10 (of course the Panasonic is more expensive). Having owned two Panasonic GH models already, I found that even RAW corrections seem somewhat limited, and I thought it's because my models had only 10 bits of "color depth" (8 bits for video). I've had RAW material from other cameras, and my impression was that I could get more out of it. Using exiftool I got these values:

Olympus ORF: Bits Per Sample : 16
Olympus JPG: Bits Per Sample : 8
Panasonic JPG: Bits Per Sample : 8
Panasonic RW2: Bits Per Sample : 12
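(To reproduce these readings in bulk, here is a minimal Python sketch; it assumes exiftool is installed and on the PATH, and the file names are hypothetical placeholders.)

    import subprocess

    # Hypothetical sample files; substitute your own.
    files = ["sample.ORF", "olympus.JPG", "panasonic.JPG", "sample.RW2"]

    for path in files:
        # -s3 prints only the tag value; -BitsPerSample selects the tag.
        result = subprocess.run(
            ["exiftool", "-s3", "-BitsPerSample", path],
            capture_output=True, text=True,
        )
        print(f"{path}: Bits Per Sample : {result.stdout.strip()}")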

So it seems even the Olympus has much better color resolution than the Panasonic GH-3. Of course I don't have the data for the camera I haven't bought yet (most likely a G91 or G9); Panasonic did not specify it.

So basically I wonder: Is there a list of camera models showing their color bit depths for images and videos?

U. Windl
  • Just because a file format stores values in 16 bits doesn’t mean the sensor resolves 16 bits. – Eric S Jan 07 '22 at 03:26
  • Not all Panasonic RW2 files are equal. For many of their cameras, if you use electronic shutter the actual bit depth at analog-to-digital conversion is only 10-bits, but it's still encoded in a 12-bit or 16-bit scheme. If you use the mechanical shutter, the bit depth is usually 12-bit, though again, it may be encoded in either a 12-bit or 16-bit container. – Michael C Jan 07 '22 at 09:04
  • @EricS Sensors don't resolve any number of bits. They record analog electrical charges. Analog-to-digital conversion of the signal from the sensor is what determines bit depth. But one must also remember that bit-depth does not necessarily equal a specific number of stops of dynamic range or color bit-depth output. One can use very small increments to use 16-bits over a range of one stop. Or one could use very large increments to use 8-bits over a range of more than eight stops. Obviously, if one uses a low bit-depth with a high dynamic range, blocking and banding will be much more noticeable. – Michael C Jan 07 '22 at 09:08
  • Not an answer, but I have the E-M10 Mark III. It's a great little camera, but if you can stretch to an E-M5 Mark III I think you'd be happier. I'm working my way towards an E-M5 to replace my E-M10. – spikey_richie Jan 07 '22 at 10:58
  • @MichaelC I get your point, but sensors do have noise, and that noise limits the effective resolution. Digitizing a noisy signal with more bits doesn't necessarily increase the effective resolution. That is what I was getting at. – Eric S Jan 07 '22 at 16:15
  • There is something wrong with your data; the OM-D E-M10, GH-3, and G9 all have 12-bit ADCs. JPEGs will always report 8-bit, but there are almost no digital still cameras with 16-bit ADCs that I know of (some CCD monochrome astro sensors do). In either case, neither the file depth (raw is typically 16-bit, JPEG 8-bit) nor the ADC's maximum accuracy (typically 12 or 14 bit) has much to do with the bit depth of the image file data, which to a certain extent depends on the scene recorded. I would worry about other things... – Steven Kersting Jan 07 '22 at 18:07
  • @EricS bit-depth has absolutely nothing to do with resolution; resolution is determined by the number of photosites on the sensor. It is true that noise affects the maximum dynamic range of a sensor's sensitivity, but the number of stops of dynamic range does not have to correspond to the bit-depth any more than Adams' Zone System with 11 zones had to correspond to 10 stops of DR. The whole point of the Zone System was to compress a scene with more stops of DR into the 6-7 stops that photo printing paper was capable of displaying. Bit depth is similar. – Michael C Jan 08 '22 at 21:24
  • It only takes one bit to go from pure black to pure white if no intermediate values are needed. Conversely, one can use 16 bits to measure in extremely small steps within a one-stop range. DR is the height of the staircase; bit depth is the size of each step. You can have a 20-foot-tall staircase (higher DR) with only 20 large one-foot steps (lower bit depth), or a shorter 5-foot staircase (lower DR) with 240 smaller 1/4" steps (higher bit depth). (A numeric sketch of this staircase analogy follows these comments.) – Michael C Jan 08 '22 at 21:27
  • @StevenKersting Some medium format digital sensors have true 16-bit ADCs. – Michael C Jan 08 '22 at 21:33
  • @StevenKersting As I said, it's hard to get the actual data, and I'm sure the GH-3 records videos with only 8 bits (per channel) of color depth. On the bits: I was hoping to get some details "out of the dark" when shooting RAW. I always had the impression that the cheaper the sensor, the fewer details you get in the shadows. And where there is nothing, you cannot amplify it. – U. Windl Jan 08 '22 at 22:49
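To make the staircase analogy from the comments above concrete, here is a minimal Python sketch (the numbers are chosen purely for illustration): the step size is the signal range divided by the number of code values, so dynamic range and bit depth can vary independently.

    def step_size(stops_of_dr, bits):
        """Relative size of one quantization step when 2**bits code values
        are spread linearly across a range spanning stops_of_dr stops."""
        signal_range = 2 ** stops_of_dr  # ratio of brightest to darkest level
        levels = 2 ** bits               # number of discrete code values
        return signal_range / levels

    # Tall staircase, big steps: high DR encoded with low bit depth.
    print(step_size(14, 8))    # 64.0 -> coarse gradations, visible banding
    # Short staircase, tiny steps: low DR encoded with high bit depth.
    print(step_size(1, 16))    # ~0.00003 -> extremely fine gradations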

1 Answer


DXOMARK lists bit depth for many cameras (use the search feature), along with lots of other interesting sensor information, in the measurements section. You can also do a side-by-side comparison of up to 3 bodies. Note that these are lab tests; the scoring sometimes doesn't translate into real-world shooting experience.

qrk
  • DxO isn't really measuring raw bit depth. They're measuring (normalized for display size) color bit depth within the sRGB color space after converting to JPEG, which is by definition limited to 8 bits per channel. The scores can approach 24 bits because there are three color channels, thus 8 x 3 = 24. – Michael C Jan 07 '22 at 08:54
  • @MichaelC, the DXO measurements (e.g. color sensitivity) are taken from raw images ("screen" data). Only the "print" values are for a normalized jpeg file. https://www.dxomark.com/dxomark-camera-sensor-testing-protocol-and-scores/ – Steven Kersting Jan 07 '22 at 17:18
  • Raw image data has no color. Each photosite outputs a single monochromatic brightness value for all the light that made it down the well. Color filter arrays do NOT have hard cutoffs; there's a lot of overlap, just as there is with our retinal cones. In fact, without that overlap there's no way to create color. Light itself has no intrinsic color (no more than X-rays or UV do); color is a product of the system perceiving electromagnetic radiation. Screens showing "raw" images are showing a processed, JPEG-like version that has been demosaiced, color balanced, gamma corrected, etc. – Michael C Jan 08 '22 at 21:09
  • The fact that the colors of Bayer mask filters are not the same colors as the RGB channels of our output devices makes DxO's claims about using unprocessed raw image data to measure "color sensitivity" dubious at best. They're not true representations of how sensitive the sensor is to 8+8+8 bits in the R+G+B channels when what is actually measured is monochromatic tonal values behind three differently colored filters, none of which is exactly green or blue, and one of which isn't even remotely "red". – Michael C Jan 08 '22 at 21:18
  • @MichaelC RAW "images" may not have color, but they have a signal (i.e. light) to noise ratio, and obviously using more bits than necessary will just output random extra bits, while using too few bits will "round" the finest gradations into "stairs" (as you named them). I hope we can agree that it's possible to measure the "random bits" ("white noise") in a RAW "image", and thus correct the nominal number of bits to the significant number of bits. – U. Windl Jan 08 '22 at 22:57
  • @U.Windl The two things aren't directly connected, though. Signal to noise ratio exists in the analog signal created by the photosites. When it is digitized, both the signal and the noise are digitized using the same size of gradations. Using more bits outputs everything with the same smaller gradations. You'll have more bits of noise, but you'll also have more bits of signal, and they'll be in the same ratio no matter how many bits you use (unless you use a small enough number of bits that the noise floor is lower than the first step; using fewer bits will then lower the S/N ratio). – Michael C Jan 09 '22 at 00:11
  • 12 stops of signal-to-noise ratio does not have to be encoded using 12 bits. You can set the white point one stop higher than the black point and use every single value in 16 bits to finely measure the gradations between the two. Or you can set the black point 14 stops lower than the white point and use only 8 bits to encode that difference. Both cases would be ridiculously extreme. But there's no rule that says you must encode signals using the same number of bits as the number of stops between the highest analog voltage possible (full well value) and the noise floor. – Michael C Jan 09 '22 at 00:18
  • That's why analog amplification before ADC works. If none of the photosites are at more than 1/4 full well value, you can raise the amplification by a factor of four (two stops) so that at ADC the highest analog charge (1/4 FWV) measured by the sensor is encoded with the highest digital value for however many bits you're using. That makes the gradations between each digitized value represent smaller gradations of the analog charges recorded by each sensel. If you don't amplify the analog signal, then all of your digitized values are in the bottom quadrant of discrete values. – Michael C Jan 09 '22 at 00:28
  • Then when you multiply those discrete digital values by four, you're still only using one-fourth of your available discrete values, because every new value is a multiple of four. There are no 1, 2, 3, 5, 6, 7, 9, 10, 11, etc. values anywhere in your file; they're all 4, 8, 12, 16, 20, etc. Then when you start applying curves, things break apart faster and you get banding. (The first sketch after these comments demonstrates this gap pattern.) – Michael C Jan 09 '22 at 00:33
  • The link in the answer does not actually contain a "list"; instead it points to individual camera reviews, so it's quite hard to get an overview. Also, some articles like https://petapixel.com/2018/09/19/8-12-14-vs-16-bit-depth-what-do-you-really-need/ talk about the final image. As RAW is linear, there may be an actual advantage in having more than 10 bits. – U. Windl Nov 06 '23 at 14:32
  • According to Charles Poynton's Frequently Asked Questions about Gamma, in "How many bits do I need to smoothly shade from black to white?", more than 13 bits would be needed for a linear representation, while 9 bits would do for a non-linear one. However, when you want to make adjustments, you would want some extra bits to avoid "tearing". (The second sketch below reproduces this arithmetic.) – U. Windl Nov 06 '23 at 14:33
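Michael C's point about digital multiplication leaving unused code values can be shown in a few lines of Python (a sketch with made-up 8-bit values, not any camera's actual pipeline):

    # Simulate "pushing" an underexposed image digitally instead of
    # amplifying the analog signal before the ADC.
    raw_values = list(range(64))          # only the bottom quarter of the 8-bit range is used
    pushed = [v * 4 for v in raw_values]  # digital two-stop push: multiply by 4

    print(sorted(set(pushed))[:8])        # [0, 4, 8, 12, 16, 20, 24, 28]
    # Every output value is a multiple of 4: three out of every four tonal
    # levels are never used, which is what shows up as banding after curves.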
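And the Poynton figures quoted in the last comment can be reproduced with a short calculation (a sketch assuming his 100:1 contrast ratio and roughly 1% contrast-visibility threshold):

    import math

    contrast_ratio = 100.0  # brightest/darkest level to encode
    threshold = 0.01        # ~1% steps are just at the edge of visibility

    # Linear coding: each step must be 1% of the *darkest* level, so the
    # whole range needs contrast_ratio / threshold steps.
    print(math.log2(contrast_ratio / threshold))  # ~13.3 -> "more than 13 bits"

    # Ratiometric (gamma-like) coding: each step is 1% larger than the last.
    steps = math.log(contrast_ratio) / math.log(1 + threshold)
    print(math.log2(steps))                       # ~8.9 -> "9 bits would do"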