
The starting point of this question: for a hyperspectral camera on a satellite, where one is interested in macro-level details such as crop health or water pollution issues, is it better to go for a lower GSD or a higher GSD?

There are many factors in such a design, so my question is simply: which is better, a bigger pixel or a smaller pixel, when one is only interested in detail at the GSD of the bigger pixel, considering all other factors to be the same?

My claim is that data averaged over smaller pixels will be worse than data from one bigger pixel; my reasoning is below. Am I correct? Or should I consider other factors too?

For simplicity, assume a CCD with pixel width a, and another CCD with N pixels each of width a/N.

The quantity of interest is the averaged value of the light over the width a. Is it better to collect light over N smaller pixels and then average the values, or to have one large pixel and get the average directly?

My Analysis:

  • Let the energy received over width a be E
  • Energy received per smaller pixel = E/N
  • I am not sure of the inner workings of a CCD, but assuming it has something to do with a capacitor, I assume the value assigned to a pixel is proportional to the voltage, and the voltage is proportional to sqrt(energy received)

If we assume Gaussian noise with variance s in the measured quantity, that is, the voltage: since the energy is being distributed across N pixels, the "resolution" per volt in each smaller pixel is reduced, which makes the noise in the value assigned to a pixel N^2 * s; averaging over the N samples then brings the noise down to N * s.

Thus, the noise from averaging the smaller pixels is higher than the noise from getting the average value directly from one bigger pixel. Of course, I am ignoring the constant factors here and just working with proportionality. I guess there will indeed be a break-even point once other engineering factors are considered.

But if I instead assume the noise is in the energy collected, then the noise variance comes out the same in the end.
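
To check this scaling numerically, below is a minimal Monte Carlo sketch of both noise models. The numbers (E, N, s) are arbitrary, and the way each small-pixel reading is scaled back to a width-a equivalent before averaging is an assumption of the sketch, so the printed standard deviations are only meant to be compared against the s versus N * s scaling argued above.

```python
import numpy as np

rng = np.random.default_rng(0)

E = 1000.0        # total energy falling on the width a (arbitrary units)
N = 16            # number of smaller pixels covering the same width
s = 0.5           # standard deviation of the additive Gaussian noise per reading
trials = 200_000  # Monte Carlo repetitions

# Model 1 (noise on the voltage): the recorded value is V = sqrt(energy) + noise,
# and the energy is recovered by squaring the reading.
v_big = np.sqrt(E) + rng.normal(0.0, s, trials)
est_big = v_big ** 2

v_small = np.sqrt(E / N) + rng.normal(0.0, s, (trials, N))
est_small = (N * v_small ** 2).mean(axis=1)  # scale each reading to a width-a value, then average

print("voltage-noise model: big-pixel std = %.2f, averaged small-pixel std = %.2f"
      % (est_big.std(), est_small.std()))

# Model 2 (noise added directly to the collected energy).
est_big2 = E + rng.normal(0.0, s, trials)
est_small2 = (N * (E / N + rng.normal(0.0, s, (trials, N)))).mean(axis=1)

print("energy-noise model:  big-pixel std = %.2f, averaged small-pixel std = %.2f"
      % (est_big2.std(), est_small2.std()))
```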

tripleee
zephyr0110
    This question is not about space exploration. It will be better suited to Electronics SE. –  Apr 14 '22 at 07:45
  • Well, I would like to know what people do in payloads. For example, to get a hyperspectral image, should one go for a larger pixel size or a smaller one? Smaller is better for resolution but adds some noise and other issues such as longer exposure time. So if one only wants macro-level analysis of land, is it worth going for a smaller pixel, assuming all other factors to be the same? I edited the question to justify the motivation of posting it here. –  Apr 14 '22 at 07:55
  • If we have a spy satellite, we need very good resolution, so more pixels is better. But for an agricultural survey the average values over vast areas are usually the key; we don't need to resolve any individual orange tree. Also, high-resolution satellite surveys produce large data volumes, and satellites have limited capacity to store and downlink the data. – Heopps Apr 14 '22 at 08:49
  • What you need are good pixels. As an expert once said to me, building a hyperspectral imaging system really shows you how bad your design is, since any little optical anomaly will be blindingly obvious in the data analysis. –  Apr 14 '22 at 12:47
  • @zephyr0110 The problem you have is "probably" specific to satellite applications, because almost all such systems are "pushbroom linescanners" rather than "staring arrays". It is significant because a multi/hyperspectral imager will have its spectral bands arranged as second/third/fourth lines, whereas a camera for photography is a staring array with some kind of filter masking, such as in the answer from Romeo Ninov you have already received below. – Puffin Apr 14 '22 at 19:42
  • I don't think Space SE was too bad a place to look for an answer, but as you've already been booted off that, it will be interesting to see what comes up here on Photography SE. If you still have no luck you could try Earth Sciences SE, as there could well be folks there who know their way around your problem. – Puffin Apr 14 '22 at 19:45
  • By the way, if you haven't read up on Time Delay Integration (TDI) and how it applies on moving satellites with line scanners, that may help with your problem. Basically, you can collect more light by having more TDI stages, and it gives you an extra variable to play with (see the sketch after these comments). – Puffin Apr 14 '22 at 19:46
  • I’m voting to close this question because it is about using a camera as a measuring instrument rather than as a tool to produce a photograph as the end result. – Michael C Apr 14 '22 at 21:33
  • @puffin Yeah, I am aware of TDI. Recently I read that a private satellite is launching a hyperspectral payload with a lower GSD than any ever put up by NASA. I also read that hyperspectral cameras are difficult to make, given that each "bin" wavelength has fewer photons to capture and thus needs a longer exposure time to get a good enough SNR. With that, a question cropped up in my mind: if I indeed have a high pixel density, can one improve the SNR by averaging? – zephyr0110 Apr 15 '22 at 13:22
  • @MichaelC I agree that Photography SE probably isn't the right place. I suggest, rather than closing the question, that we give it time to find the right SE. I've stepped over from Space SE as it was originally raised there, but I couldn't comment there because it had already been transferred to Photography SE by the time I saw it. I don't know if Earth Sciences or Space or Electronics SE is the right place. – Puffin Apr 15 '22 at 19:46
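
Regarding the TDI comment above, here is a minimal sketch of that idea, assuming pure photon (Poisson) shot noise and hypothetical photon counts and stage numbers: accumulating the same ground strip over more TDI stages raises the SNR roughly as the square root of the number of stages.

```python
import numpy as np

rng = np.random.default_rng(1)

photons_per_stage = 50   # mean photons one line exposure collects from a ground strip (hypothetical)
stages = 32              # number of TDI stages summed along-track (hypothetical)
trials = 100_000

# One single line exposure: photon arrival is Poisson, so SNR ~ sqrt(count).
single = rng.poisson(photons_per_stage, trials)

# TDI: the charge from `stages` successive exposures of the same ground strip
# is accumulated before readout, so the counts add.
tdi = rng.poisson(photons_per_stage, (trials, stages)).sum(axis=1)

print("single-line SNR: %.1f" % (single.mean() / single.std()))
print("TDI SNR:         %.1f" % (tdi.mean() / tdi.std()))
print("expected gain ~ sqrt(stages) = %.1f" % np.sqrt(stages))
```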

3 Answers


Let me point you to additional information which (IMHO) will change the equation. As you can see in the image, you have a lot of cells.
[Image: example illustration of a sensor's pixel array]
But the area of 4 small cells is not equivalent to the area of one big cell (N = 2 in this case). The problem is that around each cell there is a region which is not photosensitive, and this surrounding region does not scale up 4 times when the cell does. So with small cells you also lose photons in this way. The fair comparison is between the physical photosensitive areas of the cells (subtracting the surrounding dead area).
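
As a rough illustration, here is a minimal sketch with made-up numbers, assuming a fixed non-photosensitive border of the same width around every cell (as noted in the comments below, gapless micro-lens arrays reduce this loss on modern sensors):

```python
def sensitive_fraction(pitch_um: float, border_um: float) -> float:
    """Fraction of a cell's pitch area that is actually photosensitive,
    assuming a dead border of border_um on every side of the cell."""
    active = pitch_um - 2.0 * border_um
    return (active / pitch_um) ** 2

border = 0.5                   # hypothetical dead-border width, in microns
big_pitch = 10.0               # one big cell
small_pitch = big_pitch / 2.0  # four smaller cells covering the same area (N = 2)

print("big cell fill factor:   %.2f" % sensitive_fraction(big_pitch, border))
print("small cell fill factor: %.2f" % sensitive_fraction(small_pitch, border))
```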

Romeo Ninov
  • I don't think this is accurate with modern designs... due to gapless micro-lens arrays over BSI photosites, very little light is lost due to photosite spacing, regardless of how small they are. – Steven Kersting Apr 14 '22 at 16:24
  • @StevenKersting, I am not sure about the camera in question; the OP talks all the time about CCDs, where AFAIK there is no BSI. – Romeo Ninov Apr 14 '22 at 16:49
  • yeah, if this concerns cooled CCD sensors as used in astro telescope photography, then IDK either. – Steven Kersting Apr 14 '22 at 16:57
  • Astro cameras do not usually have a Bayer mask at all. They have filter wheels and take successive images with the full sensor filtered for various wavelengths/color bands. – Michael C Apr 14 '22 at 21:30
  • @MichaelC, the image is just an example :) – Romeo Ninov Apr 15 '22 at 04:26

With larger pixels, the hardware is locking you into a specific way of collecting the light value over a pixel. With smaller pixels you can make a choice in software - you can simply average the pixels to get the equivalent of a larger pixel, or you can apply more sophisticated filters to get a different result.
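
As an example of that software choice, here is a minimal numpy sketch that averages (bins) n x n blocks of small pixels to emulate one larger pixel; the frame contents and bin factor are placeholders, and the .mean could be swapped for a median or any weighted filter.

```python
import numpy as np

def bin_pixels(img: np.ndarray, n: int) -> np.ndarray:
    """Average n x n blocks of small pixels to emulate one larger pixel.
    Assumes both image dimensions are multiples of n."""
    h, w = img.shape
    return img.reshape(h // n, n, w // n, n).mean(axis=(1, 3))

rng = np.random.default_rng(2)
frame = rng.poisson(100, (512, 512)).astype(float)  # hypothetical small-pixel frame
binned = bin_pixels(frame, 4)                       # what 4x-larger pixels would have recorded
print(frame.shape, "->", binned.shape)              # (512, 512) -> (128, 128)
```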

Mark Ransom
  • Yeah, that is my question: whether averaging the pixels will lead to a more accurate result compared to one large pixel. – zephyr0110 Apr 15 '22 at 13:14
  • @zephyr0110 averaging in particular will give nearly identical results to using larger pixels, because that's what the larger pixel does in hardware. My point was that smaller pixels give you more choices than just simple averaging. – Mark Ransom Apr 15 '22 at 19:48
  • Why do you say it will give identical results? Any references? Any analysis? My claim in the question is that it gives worse results, and I showed that with some analysis. Did I do that correctly? That is my question. – zephyr0110 Apr 16 '22 at 03:45

The main thing I think you are not considering is that smaller photosites require a lower conversion gain (# of electrons collected) and are therefore more sensitive to lower light levels. And the exposure, or "resolution" as you called it(?), is the voltage (electrons) accumulated versus the maximum value possible; it is a ratiometric value.

Also, the noise isn't Gaussian, it is Poisson in character: its standard deviation goes as the square root of the photons received, and thus the square root of the photoelectrons gained. This would seem to be a disadvantage for the smaller photosites, but in practice with modern (CMOS) sensors the main variable is light per image area and not light per pixel/photosite (for any equivalent output). Although I do not know if the same is entirely true for modern CCD sensors.
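
A small sketch of the shot-noise point, assuming pure Poisson noise, perfect fill factor, and no read noise: the photons landing on one big photosite's worth of area give the same total SNR whether they are counted by one photosite or by N smaller ones summed afterwards, which is why light per image area is the controlling variable in this idealized case.

```python
import numpy as np

rng = np.random.default_rng(3)

photons = 10_000   # mean photons falling on one big photosite's worth of area (hypothetical)
N = 16             # smaller photosites covering the same area
trials = 100_000

big = rng.poisson(photons, trials)                             # one big photosite
small_sum = rng.poisson(photons / N, (trials, N)).sum(axis=1)  # N small ones, summed

print("big photosite SNR:      %.1f" % (big.mean() / big.std()))
print("summed small-sites SNR: %.1f" % (small_sum.mean() / small_sum.std()))
print("sqrt(photons):          %.1f" % np.sqrt(photons))
```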

Steven Kersting
  • Can you share any reference for reading more about it? – zephyr0110 Apr 15 '22 at 13:13
  • @zephyr0110, maybe these will help: http://spiff.rit.edu/classes/phys445/lectures/gain/gain.html https://learn.sparkfun.com/tutorials/analog-to-digital-conversion/all https://en.wikipedia.org/wiki/Shot_noise – Steven Kersting Apr 15 '22 at 19:27