41

I am a bit confused. If my DSLR captures a 14-bit image when shooting RAW, don't I need a 14-bit monitor to take full advantage of shooting RAW? What's the point of capturing an image in 14 bits and then opening and editing it on a monitor with only 8-bit depth?

mattdm
  • 143,140
  • 52
  • 417
  • 741
user1187405
  • 519
  • 1
  • 4
  • 5

5 Answers

54

You could edit your photos on an old, burned-in black-and-white CRT monitor and it would still be the same story: the additional bits count.

Here is a simulation of a 14-bit histogram (A) and an 8-bit one (B). Both are shown over a blue grid that simulates an 8-bit display or an 8-bit file format.

In B, all the lines coincide. (An 8-bit format is good enough for display because it is close to the number of distinct gray levels our eyes can perceive.)


Now imagine that you need to shift your histogram because you want a brighter, happier picture.

The different levels on the left side slide to the right.

In your raw file there are enough "sub-levels" to keep filling the same blue lines (C).

But the data in the 8-bit image starts forming "gaps" (red zone). This creates banding problems, increased noise, etc.

[Image: the simulated histograms A, B, and C over the blue 8-bit grid, with the gaps marked in red]

So the difference shows up when you manipulate or adjust your image and there is additional data to draw on. This gives you freedom.
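If you want to see those "gaps" numerically rather than in a histogram picture, here is a minimal sketch (it assumes Python with NumPy, and the 2x gain is just an arbitrary example, not anything from the answer above): it brightens the same dark gradient once from 14-bit data and once from 8-bit data, then counts how many of the 256 display levels actually get used.

```python
import numpy as np

scene = np.linspace(0.0, 0.5, 100_000)           # a dark gradient, nothing above half of full scale

raw14 = np.round(scene * 16383) / 16383          # the scene stored with 14-bit precision
jpeg8 = np.round(scene * 255) / 255              # the same scene stored with 8-bit precision

gain = 2.0                                       # "slide the histogram to the right"

display_from_14 = np.round(np.clip(raw14 * gain, 0, 1) * 255).astype(np.uint8)
display_from_8  = np.round(np.clip(jpeg8 * gain, 0, 1) * 255).astype(np.uint8)

print("display levels used, edited from 14-bit:", np.unique(display_from_14).size)  # all 256
print("display levels used, edited from 8-bit: ", np.unique(display_from_8).size)   # ~129: every other level is a gap
```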

Rafael
  • 24,760
  • 1
  • 43
  • 81
  • 11
    +1 nice example, but one should not take it literally - in reality the 14 bit raw is linear while the 8 bit output is not (because of gamma). still a good way to visualize what image processing can do to the histogram! – szulat Dec 22 '15 at 21:18
  • 7
    Yes, very literally. Gamma is actually THE major problem that requires 12 or 14 bits. Gamma is essentially the largest such tonal shift, and in early days, it was done poorly and inadequately in 8 bits. So image creator devices (scanners and then cameras, which have to do gamma) had to improve to 10 bits, then 12, and now 14 bits... all the bits we can afford hardware for, at least until recently. It's true of course that our eye Never sees gamma data (except in the histogram graph). Continued.. – WayneF Dec 22 '15 at 21:59
  • 1
    ... Our view is always necessarily decoded back to original linear for our eye to view, but it needs to have been encoded right. The reason our raw editor data retains the 12-16 bits into the PC is because gamma will still be necessary then. 8 bits is plenty for many/most things, but gamma needs more. Early TV was analog and so gamma worked OK, but the first 8 bit digital tries were disasters. Miserable low level performance. – WayneF Dec 22 '15 at 21:59
  • @WayneF gamma reflects our human senses nonlinearity, it is the solution, not a problem! – szulat Dec 22 '15 at 22:14
  • @szulat Think that nonsense out better. Our eye NEVER SEES gamma data. Gamma correction is done for the CRT nonlinearity. Then CRT decodes the gamma simply with the CRT losses of displaying it. LED monitors are linear, and don't need gamma (so today, they have a chip to simply decode and discard gamma). Our eye ABSOLUTELY MUST SEE the same linear scene that the original lens saw, BEFORE gamma encoding. Anything else would be data corruption of the worst degree. Gamma is retained today simply for compatibility with all the worlds image data (and probably a few CRT are still out there too). – WayneF Dec 22 '15 at 22:32
  • 5
    @WayneF this is common misconception. gamma is equally beneficial now in digital era like it was back in analog CRT days. the display has to present the same levels as the original, true! but our perception is nonlinear. that's why you can encode the brightness as 8-bit using gamma and get the result similar to encoding it linearly with 11-12 bits. more bits means more memory, more bandwidth, more power - wasted without visible effects. that's why gamma is here to say. see also the example gradients here: http://www.cambridgeincolour.com/tutorials/gamma-correction.htm – szulat Dec 22 '15 at 22:50
  • 2
    Correct. Gamma still has a place in digital imaging and video because it makes good use of code values. On the low end of the brightness range, 8-bit with gamma is equivalent to 10-bit linear (because the gamma slope there is near 4). In film workflows, log encodings are more common than gamma encodings, but for exactly the same reason: economy of code values. – Dithermaster Dec 22 '15 at 23:33
  • Cambridge is crap about gamma, imagining the eye response sees gamma. Even Poynton says gamma is for the CRT. But he does add if gamma were not necessary for the CRT, we would still need it for perceptual reasons. I do understand Weber-Fechner (human perceptions) and the 1% step business, etc. Our eyes do require the original linear view, so we necessarily only see decoded data, and any numerical effect of gamma is gone after the inverse is decoded. Any exception can only assume a LUT of 10 or 12 bits, but how likely is that in a consumer 6 bit LCD monitor? – WayneF Dec 23 '15 at 00:55
  • 3
    Short version: Edits to a digital photo are applied mathematically, and your display's bit-depth is independent of the math's bit-depth (unless you're using garbage image-editing software). Edits are computed using the full bit-depth, and therefore benefit from having the additional precision available. – aroth Dec 23 '15 at 05:19
  • @WayneF: Achieving 256 smooth-looking gray levels requires non-linear mapping of numbers to physical brightness, but accurately applying linear effects like filtering requires linear mapping. If one wants to blur a checkerboard where half the pixels are 100% brightness and half are 0% brightness, the result should have 50% brightness, even though on an 8-bit display that should probably represent a level substantially below 128. – supercat Dec 23 '15 at 16:31
  • 2
    @supercat One thing that may be confusing this, is that gamma correction for display is actually irrelevant here. You might apply gamma correction to your image for effect, but you would not apply display color correction to your image data. You'd leave that as an independent, display-dependent process, unrelated to your image. All that matters is in your image you set e.g. rgb=(1.0, 0.5, 0.0) if you intend to represent "orange", leave it up to the renderer (display, printer, etc.) to do hw-dependent color correction in any way it sees fit, to whatever precision it is capable of. – Jason C Dec 23 '15 at 20:17
  • 2
    (In other words, nothing wrong with talking about gamma curves, it's just a boring arbitrary mathematical operation that can be applied to anything for any reason, just like brightness and contrast. Just be careful not to conflate image data with display calibration [which just happens to also often involve gamma correction, in addition to other things].) – Jason C Dec 23 '15 at 20:20
  • @JasonC: A linear brightness scale will need more than 8 bits of precision to avoid visible banding on the low end; one could map a 12-bit linear scale to 8-bit non-linear scale for display purposes, however, without visible banding. – supercat Dec 23 '15 at 23:28
  • 2
    @supercat I believe that you are conflating the storage medium's image data format, the viewer's rendering format, the display's color correction, the display's hardware capabilities, and human perception - all separate, fairly independent stages in the pipeline between your image file and your brain - and I can not make any further sense of this conversation, sorry. – Jason C Dec 24 '15 at 01:09
  • Actually the 14-bit version will have 64 levels between each pair of blue lines, so it has far more information than the 8-bit one. – phuclv Dec 24 '15 at 03:49
  • The 14-bit raw data is of monochromatic luminance values that are not directly comparable to RGB values of either 8/24 or 16/48 bits. – Michael C Jun 17 '17 at 00:10
42

Higher bit depths give you more options for editing without losing data.

Don't make the mistake of tying the representation of an image to how it is rendered. Editing yields the best-quality results when you operate on the representation, where the underlying data has the highest resolution. It just so happens that your monitor provides a lower-resolution view of the image, but this is not tied to the quality of the underlying representation.

If you recall from school math, there was always a rule of thumb: Never round intermediate calculations when computing results; always perform the math then round at the end when you present the results. The exact same thing applies here. Your monitor is the end, where the "rounding" takes place when presenting it to you. Your printer may "round" differently. But in all intermediate steps you use the raw data for the most accurate results, and you store the original high resolution representation on disk so you can maintain that information and continue to do accurate editing later.

Consider this: Say you have a 5760 x 3840 source image. You'd maintain the most editing and rendering flexibility by editing the image at that size and leaving it that size. If you happened to be viewing it on a 1440 x 900 monitor you'd just zoom out in your editor, you probably wouldn't actually resize and resample the data to get it to fit. The same exact thing goes for color resolution.

Audio is similar. Perhaps your computer's sound card only has 12-bit output capabilities. But if you record, store, and operate on 16-bit or 24-bit audio, you could make a low volume signal 16x or 4096x louder (respectively) and still achieve minimal loss of output quality on that computer. Convert down only at the end when you're about to present the final result. The visual equivalent is brightening an extremely dark image with minimal banding.
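To make the audio analogy concrete, here is a rough sketch (Python with NumPy assumed; the 12-bit output and the 16x boost are just the numbers from the paragraph above): a quiet tone is stored at 16-bit and at 12-bit, boosted 16x, and sent through a simulated 12-bit output. The error relative to an ideal boost comes out noticeably larger, roughly an order of magnitude here, for the 12-bit recording.

```python
import numpy as np

t = np.linspace(0, 1, 48_000, endpoint=False)
quiet = 0.05 * np.sin(2 * np.pi * 440 * t)       # a very low-volume 440 Hz tone

def quantize(x, bits):
    """Round to the nearest step of a signed format with the given bit depth."""
    scale = 2 ** (bits - 1) - 1
    return np.round(x * scale) / scale

stored16 = quantize(quiet, 16)                   # recording kept at 16-bit
stored12 = quantize(quiet, 12)                   # recording kept at 12-bit

ideal = np.clip(quiet * 16, -1, 1)               # what a perfect 16x boost would give

out_from_16 = quantize(np.clip(stored16 * 16, -1, 1), 12)   # boost, then 12-bit output
out_from_12 = quantize(np.clip(stored12 * 16, -1, 1), 12)

print("RMS error, boosted from 16-bit storage:", np.sqrt(np.mean((out_from_16 - ideal) ** 2)))
print("RMS error, boosted from 12-bit storage:", np.sqrt(np.mean((out_from_12 - ideal) ** 2)))
```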

No matter what the capability of your monitor is, if you perform an editing operation, e.g. multiply the brightnesses by 2, you want to perform that on the original high resolution representation of the image.


Here's a simulated example. Let's say you took a really dark picture. This dark picture is the top row below, with simulated 4-, 8-, and 14-bit-per-channel internal storage formats. The bottom row shows the results of brightening each image; the brightening was multiplicative, with a scale factor of 12x:

[Image: the dark photo stored at 4, 8, and 14 bits per channel (top row) and the same three versions after the 12x brightening (bottom row). Source photographed by Andrea Canestrari.]

Note the permanent information loss. The 4-bit version is just an illustrative example of an extreme. In the 8-bit version you can see some banding, particularly in the sky (click the image for an expanded view). The most important thing to note here is that the 14-bit version scaled with the highest quality, independently of the fact that its final output form was the 8-bit PNG I saved it as, and of the fact that you are likely viewing this on a display with 8 bits or fewer.
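If you'd like to reproduce the effect behind these simulated rows yourself, here is a small sketch (Python with NumPy assumed; a synthetic dark gradient stands in for the dark photo): it stores the same dark scene at 4, 8, and 14 bits, brightens each copy 12x, and counts how many distinct tones survive in the 8-bit output. It prints 2, 22, and 256 distinct tones respectively, mirroring the banding difference above.

```python
import numpy as np

scene = np.linspace(0.0, 1.0 / 12.0, 1_000_000)   # a very dark scene, about 8% of full scale at most

for bits in (4, 8, 14):
    levels = 2 ** bits - 1
    stored = np.round(scene * levels) / levels             # internal storage at this bit depth
    out8 = np.round(np.clip(stored * 12.0, 0, 1) * 255)    # brighten 12x, then save as 8-bit
    print(f"{bits:2d}-bit storage -> {np.unique(out8).size:3d} distinct tones in the 8-bit output")
```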

Jason C
  • 583
  • 3
  • 11
  • 1
    Or even a 6-bit display. Not all LCD monitors actually display a full 8-bit depth per channel. – Random832 Dec 23 '15 at 18:12
  • @Random832 is there a reliable test to know what your LCD is capable of? I have a computer generated gradient image that shows banding, but I've never been sure if that was due to my eyes being able to see 1-level differences or if my monitor was distorting it. – Mark Ransom Jan 17 '17 at 22:42
  • @Mark Check out this nice write-up on the subject: http://www.avsforum.com/forum/465-high-dynamic-range-hdr-wide-color-gamut-wcg/2424330-determining-display-panel-bit-depth.html#/topics/2424330?page=1&_k=co450l -- it can be tricky, there's a lot of places for bottlenecks in the signal chain from your video output to the light coming out of the screen, a lot of misinformation in specs (e.g. advertised depths being BS because of a 6-bit decoder on some random circuit board) and edid descriptors, etc. It's a complex system and knowing the actual depth isn't a common use case, so, good luck! Ymmv – Jason C Jan 18 '17 at 02:03
  • 1
    @MarkRansom what made it clear for me was that I could see banding at clearly defined boundaries, every fourth level. Some displays do dithering which can be somewhat trickier to identify – Random832 Jan 18 '17 at 03:29
  • ^ Also note that some displays do temporal rather than spacial dithering, which is probably near impossible to notice when done properly, but you might be able to spot it in dark areas if you have keen eyes. – Jason C Jan 18 '17 at 04:32
4

14-bit raw does not correlate with your monitor's bit depth. Raw is a minimally processed format; see Raw Image Format.

The raw format allows post-processing software such as Lightroom and Photoshop to make fine adjustments to images that would not be possible with JPEG files.

As for the monitor, wide-gamut monitors are usually 10-bit and have an internal LUT that stores calibration information from calibrators like X-Rite or Spyder. Your video card needs to support 10-bit output as well.

For Nvidia chips, workstation-class cards support 10-bit. In my experience, most if not all gaming-class cards do not. It is similar with AMD chipsets.

If you are not going to post-process your images, then you can easily switch to JPEG.

Massimo
  • 103
  • 3
Gmck
  • 187
  • 5
  • it's worth noting that in almost all cases the human eye will not see more than 8 bits anyway, except for rare smooth gradients (mostly synthetic, as opposed to natural noisy photos, where the posterization is hidden in the noise) – szulat Dec 22 '15 at 20:45
  • 8 bit is really only 256 shades, and not enough to display smooth gradients without dithering. – Gmck Dec 22 '15 at 21:04
  • 2
    true, but such gradients almost never can be seen in the real life photos because of noise – szulat Dec 22 '15 at 21:11
  • 1
    @Gmck: There's a huge difference between 0.39% brightness and 0.78% brightness. A 256-level logarithmic curve would be enough for smooth gradients, but many filtering effects essentially require a linear mapping of values to brightness (so replacing two pixel values with their average will leave the overall brightness unaffected). – supercat Dec 23 '15 at 16:35
1

You should maybe read this question first.

How does the dynamic range of the human eye compare to that of digital cameras?

Basically, the dynamic range of paper is less than 8 bits, and the dynamic range of the human eye is not dissimilar.

The advantage of the high dynamic range in RAW images is that you can post-process them to bring the bits you're interested in into the range that the display device can represent - which in turn relates to what the human eye can see.

So the classic example is a room interior with sunlight outside. As the human eye switches from looking at the interior to the outside, the iris contracts to reduce the amount of light coming in, allowing you to see outside details as well as interior details.

A camera doesn't do that, so you'd normally have to expose either for the room interior (and get blown highlights) or for the outside (and get an underexposed interior) - or take two shots and make an HDR composite.

The higher dynamic range of raw allows you to take a single shot and selectively 'push' or 'pull' certain areas to reveal detail in those over- and under-exposed regions.

The shots here show this kind of scenario. https://www.camerastuffreview.com/camera-guide/review-dynamic-range-of-60-camera-s
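As a rough illustration of that push/pull idea, here is a sketch (Python with NumPy assumed; the 14-bit values, the shadow gain, and the Reinhard-style roll-off are all made up for the example, not any camera maker's curve). It compresses a linear capture containing both a dim interior and a bright window into the 8-bit display range while keeping usable detail in both regions:

```python
import numpy as np

# Linear 14-bit values: a dim interior and a bright window in the same frame.
raw = np.concatenate([
    np.linspace(200, 800, 1000),        # interior detail (dark)
    np.linspace(12000, 16000, 1000),    # window detail (bright)
]) / 16383.0                            # normalize to [0, 1]

def tone_map(x, shadow_gain=6.0):
    """Lift the shadows and roll off the highlights (a made-up curve for illustration)."""
    lifted = x * shadow_gain
    return lifted / (1.0 + lifted)      # simple Reinhard-style roll-off

display = np.round(np.clip(tone_map(raw), 0, 1) * 255).astype(np.uint8)

print("interior tones used:", np.unique(display[:1000]).size)   # detail survives instead of crushing to black
print("window tones used:  ", np.unique(display[1000:]).size)   # detail survives instead of clipping to white
```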

Roddy
  • 1,838
  • 12
  • 23
  • 3
    ...is that you can post-process them to bring the bits you're interested in within the range that the human eye can see. More accurate to say that you squish the bits you want into the range that the monitor can display. The human eye has even more dynamic range than even a 14-bit RAW image. It's not about what the eye can see, it's about capturing all of that dynamic range so that it can later be compressed into the display dynamic range of a standard video device. – J... Dec 23 '15 at 00:27
  • @J... Update. I agree, mostly: Display device dynamic range is what it is because of the human eye. A 14-bit display device would be pointless. The human eye has huge dynamic range ability because of its ability to adapt to different lighting conditions (much like a camera's exposure mechanism) – Roddy Dec 23 '15 at 12:03
  • 2
    No, display dynamic range is what it is because it is technologically difficult and expensive to make it better. A 14-bit display would be amazing. More dynamic range means a bigger colour space - more vibrant, colourful, and accurate images. My primary display, for example, is internally a 12-bit (albeit via lookup) panel and can produce 99% of the AdobeRGB colour gamut. The difference between that and a normal 8-bit (with usually about 6-bit effective) sRGB panel is unbelievable. More dynamic range is always better. – J... Dec 23 '15 at 12:31
  • 1
    dynamic range is unrelated to color space and sRGB coverage, calibration and "bits" are here for precision, not for displaying more colorful pictures – szulat Dec 23 '15 at 12:39
  • 1
    @J... https://en.wikipedia.org/wiki/Adaptation_(eye) "in any given moment of time, the eye can only sense a contrast ratio of one thousand." = 10 bits. – Roddy Dec 23 '15 at 12:41
  • @szulat It's definitely related to the accuracy of the colour that can be displayed. More bits mean that without sacrificing colour accuracy you can represent a broader spectrum of colours. A longer CIE-space distance between, say, red and green means you need more resolution to describe the colours in equal increments along that (now) longer line. The two are intimately related. – J... Dec 23 '15 at 12:47
  • 1
    @Roddy Yes, but there's more to the equation than absolute bright and dark. As above, it is also about colour resolution. – J... Dec 23 '15 at 12:51
  • 1
    @J...: your eyes are actually pretty lousy imaging devices. What is responsible for pretty much everything good is our brains' visual cortex. – whatsisname Dec 23 '15 at 22:11
-3

The 'Wikisperts' forget that whatever bit depth you process in, you ONLY see the result in 8 bits. Feed a 3-bit file (8 levels) into your 8-bit system and the display will show 8 levels (0 to 7), spread from 0 to 255 in steps of about 36. A 4-bit file will show 16 levels (0 to 15). Feed in a 10-, 12- or 14-bit file and you will see 256 levels: your video card converts the 1,024, 4,096 or 16,384 levels down to 256. This is why, whatever RAW file you load, as soon as it is offered to your video processor it becomes 8 bits (256 levels).

I worked in medical physics; most imaging departments now have 12-bit imaging for breast screening and the like. However, the human eye can't detect much better than 900-ish levels, so software is used to detect minute changes in tissue density. So if you meet someone who claims to have a 10-, 12- or 14-bit system, they will be heavily in debt and mega disappointed.

Incidentally, we also struggle to detect changes in colour: our vision rolls off below 16 million colours, except for minute changes within a similar hue, where we notice banding. Our cameras are capable of some 4 trillion colours, but as with many things, what's theoretically possible and what's actually possible can be two very different animals.

Bob_S
  • 1
  • 1
    What you see with a 8 bits monitor isn't what you have in your 14 bits file, so what? As stated in the previous answer, more information seems to always be better... – Olivier Oct 15 '17 at 18:42
  • I'll keep it simple. Take your pics in raw, Produce your jpg's from your raw file. To see the advantage, compare your jpg with those produced by the camera. It's the difference between a pro and rubbish lens. – Bob_S Oct 18 '17 at 17:45
  • Can you explain your argument about lens ? To me it has nothing to do with this discussion : having 12 bits of dynamic range and choosing what you want to keep after post-processing is absolutely not related to lens quality. And yes, you can see 12 bits of dynamic range on a 8 bits screen, just play with EV corrections ! – Olivier Oct 19 '17 at 18:16
  • No you can NOT. Your 8 bit display will display n/256 or 256/n levels, depending on whether you offer a smaller or larger file than 8 bits. We can adjust the point at which those bits are selected by adjustments in PS but we have NO CONTROL over which bits are displayed, i.e the gap between the bits will be the same, so data missing!. If we had, we ( the NHS for one) would not bother spending £46k on 12 bit imaging equipment that gave no better than 8 bit images. – Bob_S Oct 21 '17 at 09:33
  • I wonder what you don't understand about being able to exploit a higher-than-visible dynamic range to create an image. If you have a file with a 12 bit dynamic range, you can choose to display any 8 bits range you want, it is that easy. If you were a photographer, you would get how important this is : having details in highlight and in the shadow is the dream of everyone. I won't elaborate further on the subject, please read the previous answers. – Olivier Oct 21 '17 at 11:15
  • How do you select the bits you want if you can't (and you at least agree ) SEE THEM. You can choose a part of the spectrum but the dynamic range will be the same, just further up (or down) the spectrum, you gain one end then lose the other, its a effin 8 bit display you are looking at. Now please go away. – Bob_S Oct 21 '17 at 15:37
  • How? You compress the DR. Instead of mapping the middle 256 values out of 16,384 possible values (NOBODY does that), you select a black point (say, 2,096), a white point (say, 14,335), and then you map the remaining values into 0-255: 0 to 2,096=0; 2,097 to 2,144=1; 2,145 to 2,192=2; 2,193 to 2,240=3; 2,241 to 2,288=4; 2,289 to 2,336=5; 2,337 to 2,384=6; 2,385 to 2,432=7; 2,433 to 2,480=8; 2,481 to 2,528=9; 2,529 to 2,576=10; 2,577 to 2,624=11... and so on until 14,336 to 16,383=255 (a code sketch of this mapping follows below). – Michael C Jan 09 '22 at 11:52
  • Of course this would result in a very flat picture. That's why you apply curves to the finer gradations before you transform them into 8-bits. – Michael C Jan 09 '22 at 11:54
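For completeness, here is a rough sketch (Python with NumPy assumed; the black point and white point are the illustrative numbers from the comment above, and the gamma value is an arbitrary stand-in for "apply curves") of the black-point/white-point mapping described in that comment:

```python
import numpy as np

def raw14_to_display8(raw, black=2096, white=14335, gamma=2.2):
    """Map 14-bit values (0..16383) into 8-bit display values (0..255)."""
    x = np.clip((raw.astype(np.float64) - black) / (white - black), 0.0, 1.0)
    x = x ** (1.0 / gamma)              # a simple curve so the midtones don't end up flat
    return np.round(x * 255).astype(np.uint8)

raw = np.array([0, 2096, 2120, 8000, 14335, 16383])
print(raw14_to_display8(raw))           # black point and below -> 0; white point and above -> 255
```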