
From Wikipedia:

The other form is where the 8 bits directly describe red, green, and blue values, typically with three bits for red, three bits for green and two bits for blue. This second form is often called 8-bit truecolor

What is the reason only 2 bits are used for blue as opposed to red or green? Does it have to do with human perception of color or was it just an arbitrary decision?

hspil

  • RGB16 has the same preference and assigns an extra bit to green, for the same reasons. – tofro Sep 05 '19 at 14:35
  • Digressive question, not in any way a comment on this question: when Wikipedia asserts "[t]his second form is often called 8-bit truecolor", is anybody else tempted to throw in a 'citation needed'? I can't figure out whether I've just been out of the loop on terminology. – Tommy Sep 05 '19 at 15:56
  • @Tommy: I'd call it "8-bit direct color" (https://en.wikipedia.org/wiki/Color_depth#Direct_color). "True color" usually means RGB888. – fadden Sep 05 '19 at 18:32
  • @Tommy With 8 bits only, "true" is a very relative term. The standard term I know for 16-bit color is "high", and "true" starts at 24-bit color resolution. – tofro Sep 06 '19 at 07:30
  • @Tommy Wikipedia should have another term, "I don't believe it". "Citation needed" should be used for things that I might believe but that aren't common knowledge, so I'd like to see them written down somewhere. "8-bit truecolor" is a ridiculous term. – gnasher729 Sep 07 '19 at 21:30

3 Answers


Because the human eye is less sensitive to blue colour.

It's also more sensitive to green than to red, so the allocation depends on the number of bits available modulo 3:

  • 0: The same number of bits is used for each colour (example: 24-bit)
  • 1: The extra bit goes to green, the colour to which the human eye is most sensitive (example: 16-bit)
  • 2: The extra bits go to green and red, as the human eye is less sensitive to blue (example: 8-bit, 32-bit)
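As a concrete illustration of the cookbook above, here is a minimal Python sketch (the function names are my own) packing an 8-bit-per-channel colour into RGB332 and RGB565; in both cases the "extra" bits land on green (and, for RGB332, on red as well):

```python
def pack_rgb332(r, g, b):
    """Quantize 8-bit-per-channel RGB to RGB332: 3 bits red, 3 green, 2 blue.
    Blue keeps only its top 2 bits, per the cookbook above."""
    return ((r >> 5) << 5) | ((g >> 5) << 2) | (b >> 6)

def pack_rgb565(r, g, b):
    """Quantize to 16-bit RGB565: the single extra bit goes to green."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)
```

For example, pure green survives with 6 bits of precision in RGB565 but blue only gets 5, and in RGB332 blue is down to 2.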
Bregalad

  • Also 1: the extra bit is assigned to all three channels. This is rare, but used for example in the SAM Coupé's 7-bit palette. – user3570736 Sep 05 '19 at 07:20
  • @user3570736 Interesting catch. I guess it makes sense to get "pure white" or "pure gray", which is complicated on systems where the number of bits differs between channels. – Bregalad Sep 05 '19 at 08:46
  • @user3570736 not that rare actually, lots of 8-bit computers have 4-bit RGBI colour (where I is the "intensity", which basically increases all three channels). – Muzer Sep 05 '19 at 15:24
  • @Muzer RGBI still works in the Microsoft color command for cmd.exe. – Monty Harder Sep 05 '19 at 15:43
  • RGB555 is a thing because RGB565 has the impure grey level issue. The high bit in RGB555 is simply ignored. Some systems will use ARGB1555/RGBA5551 to get basic transparency for something like 3D textures, but 16-bit texture support isn't too popular. – fadden Sep 05 '19 at 16:03
  • Should have been 4/2/2 YCbCr with Cb and Cr values interpreted as -2,-1,0,2. :-) – R.. GitHub STOP HELPING ICE Sep 05 '19 at 20:19
  • Or get fancy and convolve Cb and Cr with neighboring pixels so that dithering gives you smooth blended colors. – R.. GitHub STOP HELPING ICE Sep 05 '19 at 20:21
  • More specifically, the human eye is less sensitive to the intensity of the light in the blue part of the spectrum, largely because there are relatively fewer blue-sensitive than green- or red-sensitive cones in the retina, but it is more sensitive to its wavelength. – Leo B. Sep 05 '19 at 21:28
  • OK, experimentally Cb should range over [-2,-1,0,2] and Cr over [-2,0,1,2] and the results are pretty spectacular. – R.. GitHub STOP HELPING ICE Sep 05 '19 at 21:32
  • There's a nice sample image about this fact on the 16-bit color page on Wikipedia: https://en.wikipedia.org/wiki/High_color – Ray Sep 05 '19 at 21:47
  • I wasn't aware of a 32-bit color mode, at least for PCs or PC monitors. There is a 30-bit mode (10 bits per color) in some video cards, but I don't know if this mode only works for VGA-type monitors (like the old CRT monitors). Older games like Tomb Raider Angel of Darkness (2003) support 30-bit color mode. I don't know if current games support this. – rcgldr Sep 06 '19 at 02:34
  • @rcgldr You're right indeed, my example was just theoretical. At the point where there's 10 bits per channel, the precision is so high that if an 11th bit were added, nobody would notice any difference. Actually I think 8 bits per channel (24-bit total) is already fine enough that it's impossible for a human to see the difference in the vast majority of cases. – Bregalad Sep 06 '19 at 06:41
  • @Bregalad - There may be or were graphics programs that use 30-bit color, but I'm not aware of any digital monitors that supported this, only CRT-type monitors. For HDTV, the 4K OLEDs use 10-bits-per-color panels (HDR10), and 4K players down-convert what is claimed to be 12 bits per color down to 10 bits. I don't know if any 4K LCD-based TVs have 10-bit color panels. Most broadcasts are 8 bits per color, although some streaming services claim 10 bits per color. – rcgldr Sep 06 '19 at 13:10
  • The fascinating theories as to why green: detect ripe fruit among foliage. https://en.m.wikipedia.org/wiki/Evolution_of_color_vision_in_primates – dlatikay Sep 06 '19 at 15:58
  • That "the extra bits are accorded to green and red, as the human eye is less sensitive to blue (example: …32-bit)" somewhat confuses me: when thinking 32-bit I primarily imagine 24-bit RGB + 8-bit alpha (I don't know how popular other 32-bit samplings are). – Sasha Sep 07 '19 at 10:49
  • @fadden: An easy approach for solving the "impure gray" problem with 6:5:5 would be to adjust the RGB brightness values so that (62,31,31) would be pure white. This approach would offer the advantage of allowing the 1024 values whose top six bits were all set to be used for other purposes such as smooth gray scales, indexed colors, split pixels, or transparency. – supercat Sep 07 '19 at 18:11
  • @rcgldr: Rendering using 10-bit or 16-bit linear values for R, G, and B, and then displaying the result using a non-linear 8-bit representation may yield better results than would be possible using any kind of 8-bit representation for rendering. – supercat Sep 07 '19 at 21:17
  • @rcgldr: If a system used an 8-bit linear color representation, the differences between e.g. #010000 and #010100 would be much more noticeable than the difference between #FFF000 and #FFFF00. To mitigate that, 8-bit color systems use a gamma curve which reduces the difference between 00, 01, and 02, while amplifying the difference between F0 and FF, at the expense of making it so that averaging the numerical value of two pixels won't average the brightness. Using more precision avoids the need to use a non-linear representation of brightness. – supercat Apr 19 '21 at 18:54
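The gamma curve supercat describes can be sketched as a simple power law (γ = 2.2 is an assumed value here; real sRGB uses a slightly different piecewise curve):

```python
def gamma_encode(linear, gamma=2.2):
    """Map a linear light intensity in [0.0, 1.0] to an 8-bit code value.
    The power curve spends more code values on dark shades, matching perception."""
    return round(255 * linear ** (1 / gamma))

def gamma_decode(code, gamma=2.2):
    """Inverse mapping: 8-bit code value back to linear intensity."""
    return (code / 255) ** gamma
```

With this curve, 1% linear brightness already maps to code 31, so dark shades get many code values while bright shades share few — exactly the trade-off described above.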

What is the reason only 2 bits are used for blue as opposed to red or green? Does it have to do with human perception of color or was it just an arbitrary decision?

It's a combination of the practical constraints of 8-bit words (*1) and an adaptation to human physiology. The human eye is most sensitive to green and red (with a little tilt toward green), and least sensitive to blue (*2,3). See also this answer regarding why green is quite common.

For computing, the simplest solution is to assign each colour in a colour word its own independent bit field, leaving the misery that 8 cannot be divided by 3 without a remainder (*4). So the next best thing is assigning a different number of bits to each. To do so, the assignment with the least loss is to be selected, and that's where human physiology comes in again and tells us that a reduction of resolution for blue is the least offensive. So the bit gets dropped there.

Bregalad's answer offers a nice, easy-to-use cookbook for deciding which colours should get which bit length when using dedicated bit fields.


Now, looking at the way we perceive light, it may be more appropriate to use a palette instead of bit fields. Here colours (including brightness) can be specified in a much more natural way - it's just that handling them isn't as easy and cheap (*5).
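A palette (indexed colour) decouples per-pixel storage cost from colour precision. A minimal sketch (palette entries chosen arbitrarily for illustration):

```python
# Hypothetical 4-entry palette: each entry is a full-precision RGB triple,
# so colour precision is independent of per-pixel storage.
palette = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (255, 255, 255)]

# The framebuffer stores only a small palette index per pixel.
framebuffer = [0, 1, 1, 3, 2]

# Display hardware resolves each index through the lookup table.
pixels = [palette[i] for i in framebuffer]
```

The cost is the extra lookup (and the limit on how many distinct colours can appear at once), which is why direct bit fields stayed popular where hardware was cheapest.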


*1 - This is where 'octal' word sizes would give a more 'natural' representation. Considering that computing started out with the majority of designs using word sizes that were multiples of 3 (see here as well), today's ubiquitous use of colour graphics seems like staircase wit.

*2 - Colour is a continuum, and the human eye has certain spots of maximum sensitivity, but they are not hard-focused; rather, they are ranges with a main peak. These ranges overlap quite a lot between red and green, while the overlap between green and blue is small. Also, the distances between them are not the same.

To make it even more complex, human vision also has a fourth component we usually associate with light and dark (B&W). While it has quite a wide sensitivity range, it also peaks right at the border between green and blue, partly filling that gap - one of the reasons why we experience bluish light as brighter than any other.

As a result, we do see brightness on its own (with a peak at 500 nm) plus three colours, but these likewise come in different grades of brightness and all of them overlap - perfect for neural networks, isn't it?

Remember that white/black/blue/gold dress?

*3 - See also this answer regarding why green (and amber) screens were most common

*4 - One of the disadvantages of being a human. (Many) birds feature four colour sensors, the additional one being in near-UV. In addition, some have filters within their colour receptors that 'sharpen' colours - something we seem to have lost. So they see colour not only in a more differentiated way, but also separated from brightness.

*5 - While light itself is defined on a linear scale and presented as a mixture of discrete values, human vision is not. Just have a look at the various colour scales that have been invented to describe human vision and you'll see that there's no easy way to describe it in 2 or 3 dimensions at all. It's been a topic since ancient times and may go on forever.

Raffzahn
  • Some (very few) humans have four distinct cones, too! Even amongst those with "normal" colour vision, there's variation in relative sensitivity across the spectrum; our colour representations are something of a compromise to fit an approximately average person. – Toby Speight Sep 05 '19 at 17:42
  • @TobySpeight yes, you're right. Not to mention that colour sensitivity changes with age as well. Only on average are we Joe Average :)) SCNR. RGB colour with intensity on the colours is indeed a very weak approach. Still, colourful :)) – Raffzahn Sep 05 '19 at 18:01
  • I'm sure that the Acorn Archimedes' hybrid approach — 8bpp colour in which 4 bits specify brightness and 4 bits are a lookup into a colour palette — is definitely recognition of the peculiarities of human vision as noted here, and not just a way to save some transistors. – Tommy Sep 05 '19 at 18:16
  • @Tommy It for sure is a different way to capture more dynamic range with limited resources. Then again, it won't replace a full 256-entry palette that could be made using the same screen data. – Raffzahn Sep 05 '19 at 19:04
  • @Tommy: I'm not familiar with the Archimedes, but a common approach for machines that produce a composite video output is to generate a composite video signal by delaying a square wave by a certain amount controlled by some bits, filtering it, and adding a DC bias controlled by some other bits. A true-color composite signal should allow the amplitude of the chroma signal to be adjusted, but 1970s-1980s computers typically only offer on/off control. The Atari 2600 is the first machine I know of to use this approach, and it was adopted not because it would allow 128 beautiful colors... – supercat Sep 07 '19 at 17:55
  • ...but because the designers wanted to have enough gray levels to be usable on black-and-white sets, and have a few "good" colors available for various purposes. The fact that the palette offers nice chroma-gradient rainbow effects turned out to be an unexpected bonus which wasn't exploited until the system had been out for years. – supercat Sep 07 '19 at 17:57
  • @supercat The Archimedes VIDC1 output analogue RGB and had 4 bpc for each of red, green and blue. In 1-, 2- and 4-bit colour, a 16-entry lookup table gave you all 12 bits to feed to the DAC. In 8-bit mode, the framebuffer byte was split in two: the bottom 4 bits went through the lookup table, as in 4-bit mode, and then the top 4 bits replaced the top 2 bits of green and the top bit of each of red and blue. Thus, with an all-black lookup table, you got red, blue, 2 shades of green, and the mixtures of those 4. Acorn set the lookup table to give a reasonable colour cube by default. – Simon Farnsworth Mar 21 '23 at 17:54

One way to avoid this imbalance is to divide the total palette by something other than a power of two when assigning colour channels. This is done most famously by the Web Safe Palette, a uniform 6x6x6 RGB cube totalling 216 colours. For comparison, the RGB332 palette is an 8x8x4 RGB cuboid.

Because there is no integer cube root of 256, there are a few palette entries (40, in the case of the web-safe cube) left over from this process. These can be used to cover spot colours which would otherwise need to be dithered to be displayed accurately with the base palette.
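Because the cube is uniform, the 6x6x6 mapping is plain arithmetic rather than bit fields — a small Python sketch (the function name is my own):

```python
def websafe_index(r, g, b):
    """Map an RGB888 colour to its entry in the 6x6x6 web-safe cube.
    Each channel quantizes to 6 levels (0..5, spaced 51 apart in 0..255);
    the palette index is 36*r + 6*g + b, giving indices 0..215."""
    q = lambda v: round(v / 51)  # 255 / 5 == 51
    return 36 * q(r) + 6 * q(g) + q(b)
```

Indices 216..255 are the leftover entries mentioned above, free for spot colours.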

Chromatix

  • What you're describing isn't truecolor/direct color, it's a specific type of indexed color. – Mark Sep 05 '19 at 20:56
  • @mark Yes, it's not (strictly), thus not a direct answer, but nonetheless valuable information added which goes beyond what a comment is good for. – Raffzahn Sep 05 '19 at 22:32
  • @Mark: Generally the 6x6x6 color cube is implemented using a 256x18 or 256x24 lookup table, but that's an implementation detail. I'd consider it "direct" color in a drawing system that makes no effort to handle or exploit any color outside the 6x6x6 cube. I don't know of any machines that use a hard-wired mapping between 8-bit values and RGB values in the range 0..5, but it wouldn't be overly difficult. Use the top two bits to identify which of (r,g,b) have values over 3. 00=none; 01=red and either green or not blue; 10=green but not red; 11=blue but not green. – supercat Sep 07 '19 at 17:49
  • Use two bits each for r, g, b if 0..3; for the 4..5 case, use one bit to select between 4 and 5 and the other to indicate whether the "next" color is also 4..5. Using a 32x6 ROM would be cheaper than using a bunch of AND and OR gates, but what's important is that a hardware 6x6x6 color cube mapping would be feasible. – supercat Sep 07 '19 at 17:51