I attended the Northeast Astronomy Forum this year, and one of the speakers mentioned that JunoCam would no longer be usable because of the spacecraft's orientation. He was giving a lecture on how amateurs could replicate its functionality.
During this lecture he mentioned that JunoCam uses a very simple banded layout on its camera sensor. Instead of going the route most DSLRs take, a Bayer mosaic that is later debayered, they bonded strip filters directly onto the sensor. I suppose this is analogous to having multiple monochrome cameras, each sensitive to a different band of light. The bands they chose were red, green, and blue, plus a narrowband methane filter in the near infrared. So, basically, each exposure captures (see the slicing sketch after this list):
- The top X rows of pixels through the red filter
- The next X rows through the blue filter
- The next X rows through the green filter
- The next X rows through the narrowband methane filter in the near infrared
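As I understand it, in the raw files published on the mission site these band strips also repeat down the image as the spacecraft spins, so the very first processing step is just slicing on fixed row boundaries and sorting the strips by filter. Here's a minimal Python sketch of that slicing; the framelet height and band order below are placeholders I made up for illustration, since the real values come from each image's metadata:

```python
import numpy as np

# Illustrative values only -- the real framelet height and band order come
# from the image metadata on missionjuno.swri.edu, not from these constants.
FRAMELET_HEIGHT = 128                     # assumed rows per filter strip
BAND_ORDER = ["red", "blue", "green"]     # assumed repeating order for an RGB capture

def split_framelets(raw: np.ndarray) -> dict[str, list[np.ndarray]]:
    """Slice a raw stacked image into per-band strips (framelets).

    `raw` is a 2-D grayscale array whose rows cycle through the filter
    strips: the first FRAMELET_HEIGHT rows were exposed through one filter,
    the next FRAMELET_HEIGHT rows through the next, and so on.
    """
    bands: dict[str, list[np.ndarray]] = {name: [] for name in BAND_ORDER}
    n_framelets = raw.shape[0] // FRAMELET_HEIGHT
    for i in range(n_framelets):
        strip = raw[i * FRAMELET_HEIGHT:(i + 1) * FRAMELET_HEIGHT, :]
        bands[BAND_ORDER[i % len(BAND_ORDER)]].append(strip)
    return bands
```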
The end result of capturing each of these bands in monochrome is an image like this:
So, essentially, to get a full-color image, you need to slice the raw image along those band boundaries and sort the data into the correct channels. Then you need to stitch each channel back together into a full frame and do additional processing to align the channels with one another. I'm not even getting into planetary derotation and orbital motion compensation, as this is already a decently complicated process.
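To make the "stitch it back together" part concrete, here is an equally naive continuation of the sketch above. It just concatenates each band's strips and stacks them as color channels; it deliberately does none of the reprojection, alignment, or derotation that real processing needs, which is exactly the complexity I'm asking about:

```python
import numpy as np

def naive_stack(bands: dict[str, list[np.ndarray]]) -> np.ndarray:
    """Naively rebuild an RGB cube from the per-band strips produced above.

    Each band's strips are concatenated vertically, trimmed to a common
    height, and stacked as color channels. The three strips actually see
    slightly different parts of Jupiter at slightly different times, so the
    channels still need to be aligned before they line up.
    """
    mosaics = [np.vstack(bands[name]) for name in ("red", "green", "blue")]
    height = min(m.shape[0] for m in mosaics)        # bands may differ in strip count
    return np.dstack([m[:height] for m in mosaics])  # H x W x 3, still misaligned
```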
My question is:
Why did they choose such a complicated image-capture method when they could have used something simpler? The whole banding of channels seems really bizarre for a camera; I've heard of nothing quite like it before (let me know of other instances where this has been done). I understand JunoCam was something of an afterthought on the mission, but if you know why this specific design choice was made, I'd love to hear more about it.
Here's the main site: https://www.missionjuno.swri.edu/junocam