
I wanted to understand more about higher-precision color representation (bit depth). IMO the shift from 8 bits to 16 is long overdue: we have used 8 bits per channel since VGA, and we now use many times more pixels but the same quality per pixel. Now that 10-bit monitors/TVs are becoming available (and, hopefully, more bits too), I did a web search and did not find much. E.g. Can I use 10bit effectively today and if yes how? (from 2017) says:

If you decide to upgrade, special video cards and drivers are needed to use more than 8-bit color. That pretty much guarantees hours of fiddling to try to get everything working. Outcomes include thinking it's working when it's not, but being unable to tell the difference. Or simply giving up and settling for 8-bits. If you ever do manage to get it working, people will continue to send you JPEGs even though you've insisted they send only HEIC or BPG (or PNG or WebP or EXR). They will also complain about not being able to open your files or about the colors in your images being "off" because they weren't considerate enough to also upgrade their equipment to display 10-bit color. (Or perhaps worse, they will compliment you on how warm the colors in your images are when you had intended cool tones...)

My question is about the part in bold (the claim that people will see your colors as "off" or unintentionally warm). I was surprised. Isn't 10-bit HEIC vs 8-bit just 2 extra bits of precision in color intensity, and to display 10-bit content on 8-bit hardware doesn't one just drop the 2 low bits? How can such a drop change "warmness"?
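A minimal sketch (my own illustration, not tied to HEIC internals) of what "just dropping 2 bits" means: each 10-bit channel value is shifted right by two, which only coarsens the intensity steps. It cannot, by itself, rotate hue or change warmth, since all three channels lose precision in the same way.

```python
def truncate_10_to_8(v10: int) -> int:
    """Map a 10-bit channel value (0-1023) to 8 bits (0-255) by dropping the 2 LSBs."""
    return v10 >> 2

# Two 10-bit greens differing only in the low bits collapse to the same 8-bit value:
print(truncate_10_to_8(0b0110011000))  # 408 >> 2 = 102 (0x66)
print(truncate_10_to_8(0b0110011011))  # 411 >> 2 = 102 (0x66)
```

So naive truncation only merges near-identical shades; any hue or warmth change has to come from somewhere else in the pipeline.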

Martian2020

1 Answer


8-bit is capable of reproducing nearly 17 million colors (2^24), but a human is only capable of seeing/discerning approximately 11 million colors... 8 bit is not the limitation.

Likewise, modern DSLRs/cameras have 14-bit processors, but most of the time the camera is only generating around 8-10 bits of data... even in optimal conditions most barely exceed 12 bits in any respect, and I don't know of a single one that currently exceeds 8-bit color.

For the most part, it's just marketing hype.

What is more relevant is the color space those bits are used to represent. The issue with non-standard/non-tagged images is that most systems will assume they are 8-bit sRGB; that is where the color shifts occur.
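A sketch of one way such a shift can happen without a single bit changing: the same 8-bit code value maps to different light levels depending on which transfer function (part of the color space definition) the viewer assumes. The function names here are my own, not from any particular library.

```python
def srgb_to_linear(v8: int) -> float:
    """Decode an 8-bit code value with the piecewise sRGB transfer function."""
    c = v8 / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def gamma22_to_linear(v8: int) -> float:
    """Decode the same code value assuming a plain 2.2 gamma curve instead."""
    return (v8 / 255.0) ** 2.2

# In the shadows the two interpretations disagree by almost 4x in linear light:
code = 10
print(srgb_to_linear(code))     # ~0.0030
print(gamma22_to_linear(code))  # ~0.0008
```

The bits in the file are identical either way; only the assumed color space differs, and that is enough to visibly shift tones.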

Steven Kersting
  • Thank you for confirming the problem. I'm looking for a more detailed technical explanation of why viewers do not simply discard the extra 2 bits. E.g. if one reads a file of UTF-16 symbols (2 bytes each) assuming ASCII (one byte per symbol), the result is not a "shift" but, IMO, complete gibberish. Why is a more or less correct picture still displayed, not gibberish? E.g. for HEIC. – Martian2020 Nov 24 '21 at 20:13
  • @Martian2020 I'm sorry, but your comment is partially gibberish. Perhaps there is a language issue or barrier here. UTF-16 is a text encoding scheme that happens to use 16 bits of information. ASCII is also a text encoding scheme (in the strictest interpretation, it's 7 bits). ASCII and UTF-16 have nothing to do with image data encoding. – scottbb Nov 24 '21 at 21:35
  • @Martian2020, I guess the issue is not that the colours will be off. Normally, yes, the lowest 2 bits would just be truncated, all else being equal. But all else is not always equal: some higher-bit file formats are still relatively new and have poorer support, raising the possibility for the software (or the driver) to do something wrong, like ignoring or misinterpreting colour profile data. What makes it worse is that the change will often be subtle and may not be immediately obvious, unlike misinterpreted UTF-16. – Zeus Nov 25 '21 at 02:31
  • "8 bit is not the limitation": I have an "ordinary" 8-bit color monitor, and both I and another person I showed it to were able to distinguish the border between rectangles colored 006700 and 006600. I wonder if a difference 4 times smaller would be indistinguishable; even if not by me, maybe there are humans with better vision, maybe many 2-year-old children could do it. – Martian2020 Nov 25 '21 at 02:49
  • "17 million colors" - maybe so, but not all colors are equal. Both 000001 and 000002 seemed just black to me, but 006700 and 006600 looked slightly different and can produce banding. – Martian2020 Nov 25 '21 at 02:50
  • @scottbb, maybe the text example was unnecessary; I just thought it would be clearer. For images, if pixel data are laid out one after another, then taking 24 bits per pixel instead of 30 would result in a VERY different picture being displayed. Note: I don't know the actual internals of the image file formats. – Martian2020 Nov 25 '21 at 02:54
  • @Martian2020, 006600 and 006700 are the same colors whether the color space is 16-bit or 8-bit. Banding is primarily an issue of rounding in the math, i.e. if those colors were arrived at by editing in 8-bit. But 006600 and 006700 are different colors in different color spaces (i.e. they are brighter colors in ProPhoto than in sRGB). And monitors have their own color spaces as well... if you see potential banding between those two colors, there is probably an issue with the color space conversion at some stage (i.e. an uncalibrated monitor). – Steven Kersting Nov 25 '21 at 15:06
  • @Steven, "there is probably an issue": I would agree that the probability is always there a priori, before you know for sure... Then if one sees banding in 2 bits per color, another may suggest checking calibration ;-) P.S. your note that there is not just one color space is useful. – Martian2020 Nov 26 '21 at 00:24
  • @Steven, not quite true. I can easily see the difference between #006600 and #006700 (side by side, of course), as well as between practically all colours with such a difference (except for very dark ones), even in the sRGB range, whether calibration is on or off. 8-bit calibration makes things just awful for gradients (because of these round-off errors), which acquire colour casts in different bands, and can sometimes hide such a difference between neighbours. Decent colour calibration can only be done in 10 or more bits, so 8 bits is a limitation sometimes, in my direct experience. – Zeus Nov 26 '21 at 00:26
  • "006600 and 006700" were an example of colors that differ by 1 in one channel; in the 16-bit case the difference is expected to be 256 times smaller in intensity (if the monitor supports it fully). – Martian2020 Nov 26 '21 at 00:27
  • @Zeus, also, would you like to write an answer based on your first comment? As far as I can see, it pretty much covers the issue. – Martian2020 Nov 26 '21 at 00:32
  • @Zeus, "I can easily see the difference": I take it you have a 10-bit display? Can you (maybe not very easily) see the difference between adjacent colors (differing by 1) in 10-bit? – Martian2020 Nov 26 '21 at 00:35
  • @Martian2020 I'm not sure this is the problem: I don't work with HEIC (yet), so I don't feel qualified to answer. I did have problems with 10-bit output, primarily around drivers (which actually put out 8 bits despite the 10-bit setting, or randomly reset 10 bit to 6 bit(!!), and such things), but you specifically excluded that from the question. 10 bit requires a lot of attention, apart from drivers: most software is not aware of it and will always output 8 bits in reality; some software (e.g. Photoshop) has a special option to enable 10-bit output. It may also interact with hardware acceleration. – Zeus Nov 26 '21 at 00:52
  • So, I actually set 8 bit output. But my monitor has built-in calibration LUT, which is (as far as I remember) 14 bit, so I don't have round-off problems due to calibration (which, as I mentioned, are very noticeable and real in 8 bit). Either way, #006600 is by definition 8 bit, and in the absence of GPU-based 8-bit calibration to interfere, it shouldn't matter whether the monitor is 8 or 10 bit for this test. I do see the difference on the cheap consumer Acer at work; but I do seem to have above-average colour perception. Still, I doubt I'd see 1/1024th (10-bit) difference. – Zeus Nov 26 '21 at 01:08
  • Try to see the difference between 016600 and 006600, also a difference of 1 in one channel in 8-bit. That is more like how colors in the real world work, and it is how a gradation is created in an editing program. E.g. if I start with 006600 and create a gradation, it doesn't only vary the green value. And again, the results do not visually differ if done in 8-bit vs 16-bit. But note that if you cannot see a discernible difference, it is not a visibly different color and will not generate a gradation. – Steven Kersting Nov 26 '21 at 15:43