Background: I'm trying to understand the function of camera color profiles (for raw output) in terms of the hardware. I have found lots of web sites that explain the function of camera profiles from a photography point of view, but none that I can find really explain well how they relate to the actual hardware (sensor, RGB filters, amplifiers, lenses...).
Question: Wouldn't it be possible to replace all the human-perception-related data in camera color profiles with a (probably smaller) amount of data describing the physical behavior of the camera, from which the profile data and more could be computed (for example, the proper profile for any illuminant)? If not, why not?
Looking at camera profiles (ICC or DCP), I see that they tend to contain rather complex lookup tables. For example, I took a look at Adobe's DCP for the "Samsung Galaxy S21 Ultra Rear Main Camera". It contains quite large lookup tables that describe what needs to be done to the sensor output to map it into a color space defined in terms of human perception, assuming two different standard illuminants. This all feels rather ad hoc to me: a full model of the camera's response to light would seemingly allow one to do this better, and would also separate the camera profile data from whatever model of human perception is used.
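For context, the DNG pipeline does not simply pick one of the two calibrations: it blends them, interpolating linearly in inverse correlated color temperature between illuminant A (≈2856 K) and D65 (≈6504 K). A minimal sketch of that interpolation, using placeholder matrices rather than values from any real profile:

```python
import numpy as np

def interpolate_matrix(m1, m2, cct, cct1=2856.0, cct2=6504.0):
    """Blend two per-illuminant profile matrices by inverse CCT.

    m1 is the calibration for illuminant A (cct1), m2 for D65 (cct2);
    the interpolation is linear in 1/CCT (i.e. in mired), as in the DNG spec.
    """
    if cct <= cct1:
        return m1
    if cct >= cct2:
        return m2
    w = (1.0 / cct - 1.0 / cct2) / (1.0 / cct1 - 1.0 / cct2)
    return w * m1 + (1.0 - w) * m2

# Placeholder matrices (not from any real profile):
m_a = np.eye(3)
m_d65 = 2.0 * np.eye(3)
m_5000 = interpolate_matrix(m_a, m_d65, 5000.0)  # somewhere in between
```

The same scheme applies to the matrices and the hue/saturation tables alike, which is why each of them comes in a "1" and a "2" variant.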
I would naively assume (probably wrongly, since this is not what's done, but why am I wrong?) that simply measuring the spectral response of the three kinds of pixels would give us a full color model of the camera. A color profile would then need to contain only a table of each pixel color's response as a function of monochromatic wavelength. One way I could see this breaking down is if the response is nonlinear, in the sense that f(color 1) + f(color 2) ≠ f(color 1 + color 2). But even then, would it not be better to specify these behaviors in detail, instead of derived values tied to human color perception?
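As a concrete version of this idea: given per-channel spectral sensitivities and an illuminant spectrum, a 3×3 camera-to-XYZ matrix for that illuminant can be fitted by least squares over a set of training spectra. Everything below is synthetic (made-up Gaussian sensitivity and observer curves, random reflectances), purely to show the computation:

```python
import numpy as np

wl = np.arange(400, 701, 10, dtype=float)  # wavelengths, nm

def gauss(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Hypothetical camera spectral sensitivities (rows: R, G, B).
cam = np.stack([gauss(600, 40), gauss(540, 40), gauss(460, 40)])
# Stand-in for the human observer curves (rows: X, Y, Z) -- not real CIE data.
obs = np.stack([gauss(595, 45) + 0.3 * gauss(445, 20),
                gauss(555, 45),
                gauss(450, 30)])
illum = np.ones_like(wl)  # flat "equal-energy" illuminant spectrum
refl = np.random.default_rng(0).uniform(0, 1, (200, wl.size))  # training spectra

# Camera responses and XYZ tristimulus values for each training spectrum.
rgb = (refl * illum) @ cam.T   # shape (200, 3)
xyz = (refl * illum) @ obs.T   # shape (200, 3)

# Least-squares fit of X such that rgb @ X ~= xyz; M is the 3x3
# camera-to-XYZ matrix for this particular illuminant.
X, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
M = X.T
```

Swapping `illum` recomputes the matrix for any illuminant, which is exactly what the question envisions.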
Appendix: The DCP profile contains substantially the following data items, listed by their DNG tag name:
- ColorMatrix1: A transformation matrix from the XYZ color space to the device-native color space, assuming CIE standard illuminant A. Not much information here, only a 3×3 matrix.
- ForwardMatrix1: A matrix that maps white-balanced camera colors to XYZ D50, assuming standard illuminant A; also just a 3×3 matrix.
- ProfileHueSatMapData1: A large (90×30×3 = 8100 floats) lookup table giving a hue shift, a saturation scaling factor and a value scaling factor to be applied to each pixel based on its hue (90 divisions) and saturation (30 divisions), assuming standard illuminant A.
- ColorMatrix2, ForwardMatrix2 and ProfileHueSatMapData2: The same as above, but for the D65 illuminant.
- ProfileLookTable: An even larger (36×8×16×3 = 13824 floats) lookup table similar to ProfileHueSatMapData, but applied later, after the exposure compensation and fill light stages but before any tone curve stage.
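For reference, a ProfileHueSatMapData table is applied roughly as follows. This is a sketch using nearest-neighbor lookup and a tiny made-up 4×2 table (the real tables are 90×30, and the DNG spec interpolates between neighboring entries):

```python
import colorsys
import numpy as np

# Made-up table with dims (hue divisions, sat divisions, 3); each entry is
# (hue shift in degrees, saturation scale factor, value scale factor).
table = np.zeros((4, 2, 3))
table[..., 1:] = 1.0             # identity: no hue shift, scales of 1
table[0, 0] = (10.0, 1.1, 1.0)   # tweak one cell for demonstration

def apply_hsv_map(r, g, b, table):
    """Apply a HueSatMap-style correction to one linear RGB pixel."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    hues, sats, _ = table.shape
    hi = min(int(h * hues), hues - 1)   # nearest-lower table cell
    si = min(int(s * sats), sats - 1)
    dh, ds, dv = table[hi, si]
    h = (h + dh / 360.0) % 1.0          # hue shift is given in degrees
    return colorsys.hsv_to_rgb(h, min(s * ds, 1.0), min(v * dv, 1.0))
```

Each entry thus says: for pixels near this hue and saturation, shift the hue by so many degrees and scale the saturation and value by these factors.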