This is more of a math question, but I thought I had a better chance of finding a camera enthusiast with a math background here.
I've always flat fielded images using a two-dimensional array with a gain for each pixel, so I end up with a flat field table of A x B entries for a camera of A x B pixels. I recently started working with a camera that claims to have built-in flat fielding, but the gain table it produces contains only two 1-D arrays: one of length A and one of length B, A + B values in total.
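For reference, here is the conventional per-pixel scheme I mean, sketched in Python/NumPy (the function names are mine, and deriving the gain as mean(flat)/flat is just one common convention):

```python
import numpy as np

def gain_table_2d(flat):
    """Per-pixel gain from a flat frame: mean(flat) / flat,
    chosen so a uniformly lit scene corrects to a uniform image."""
    return flat.mean() / flat

def flat_field_2d(raw, gain2d):
    """Apply the full A x B table: one stored gain and one multiply per pixel."""
    return raw * gain2d
```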
I was a little surprised, but I've surmised that the gain table they produce is made up of the average gain in the horizontal direction and the average gain in the vertical direction, divided by some constant. When the camera flat fields, it simply looks up the vertical and horizontal gains for the pixel's location and multiplies the raw pixel value by both gains. I tested this and it looks reasonable; the image is indeed flat fielded.
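If my surmise is right, applying the correction is just an outer product of the two 1-D arrays. A minimal sketch of what I believe the camera is doing (gx and gy are my own names for its horizontal and vertical gain arrays):

```python
import numpy as np

def flat_field_separable(raw, gx, gy):
    """raw is (B, A); gx has length A (one gain per column),
    gy has length B (one gain per row).
    corrected[y, x] = raw[y, x] * gy[y] * gx[x]"""
    return raw * np.outer(gy, gx)
```

In other words, the full A x B table is being approximated by a rank-1 outer product, which is why A + B stored values suffice.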
What I am looking for is the math to back up what they are doing, in order to prove it to myself. If I place my origin at the center of the image, and I know the fall-off is constant and due to vignetting, I'd like to be able to calculate my own two 1-D arrays and get a better understanding of the math behind the system.
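For what it's worth, here is the identity I suspect justifies the "divided by some constant" step, assuming the true 2-D gain table is (at least approximately) separable, G(x, y) = f(x) g(y):

```latex
\begin{aligned}
  C(x) &= \tfrac{1}{B}\textstyle\sum_y G(x,y) = f(x)\,\bar{g}
      && \text{(column average: horizontal 1-D array)} \\
  R(y) &= \tfrac{1}{A}\textstyle\sum_x G(x,y) = \bar{f}\,g(y)
      && \text{(row average: vertical 1-D array)} \\
  \bar{G} &= \tfrac{1}{AB}\textstyle\sum_{x,y} G(x,y) = \bar{f}\,\bar{g}
      && \text{(global mean: the constant)} \\[2pt]
  \frac{C(x)\,R(y)}{\bar{G}}
      &= \frac{f(x)\,\bar{g}\cdot\bar{f}\,g(y)}{\bar{f}\,\bar{g}}
       = f(x)\,g(y) = G(x,y)
\end{aligned}
```

One caveat I'm aware of: vignetting is only approximately separable. A Gaussian fall-off exp(-(x² + y²)/2σ²) factors exactly into exp(-x²/2σ²) · exp(-y²/2σ²), but the classic cos⁴ law does not, so the outer-product table is an approximation whose accuracy depends on the lens.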
Thanks!