My Nikon D5600's sensor can deliver 24MP, but I have configured the camera to use only 6MP. At which step in the processing pipeline does the camera scale the images down?
Maybe already at the sensor level, by calculating an average of the signals of four neighboring pixels? Or by using only every fourth pixel? (A toy sketch of both ideas follows below.)
Before or after noise reduction? Before or after sharpening?
Or does the camera simply scale the image down right before writing it to the SD card?
What does this do to image quality? Will I see less noise? Will higher ISO settings become usable because of the reduced noise?
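To make clear what I mean by "averaging" versus "skipping", here is a toy sketch in Python/NumPy on a fake noisy grayscale array. This is only an illustration of the two ideas, not a claim about how the D5600 actually works, and the array size and function names are just my own placeholders. Since 24MP to 6MP is a factor of 4 in pixel count (2x in each dimension), "averaging four neighbors" means binning each 2x2 block:

```python
import numpy as np

def downscale_by_averaging(img: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of neighboring pixels into one output pixel (binning)."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def downscale_by_skipping(img: np.ndarray) -> np.ndarray:
    """Keep one pixel out of every 2x2 block and discard the other three."""
    return img[::2, ::2]

# Fake noisy "sensor" data (small stand-in for the real 6000x4000 array).
rng = np.random.default_rng(0)
full = rng.normal(loc=128.0, scale=10.0, size=(600, 400))

binned = downscale_by_averaging(full)
skipped = downscale_by_skipping(full)

# Averaging four independent noisy samples roughly halves the noise std;
# skipping pixels leaves the noise level unchanged.
print(round(full.std(), 2), round(binned.std(), 2), round(skipped.std(), 2))
```

If my understanding is right, this is why I suspect the two approaches would give visibly different noise at the same ISO.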
PS: The Bayer filter was mentioned in the comments. When I think about it, that makes things even more complicated (or more interesting):
This means that, to some degree, the camera already has to scale up (interpolate) the image/color signal just to reach those 24MP?
That makes me even more curious: when shooting at 6MP, does the camera first scale the data up (by interpolating) to 24MP and later scale it down again, or does it use a different algorithm for 6MP (which I doubt) that works without interpolating? And would such an alternative 6MP algorithm produce crisper results?
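To make the PS concrete, here is a toy sketch of the "no interpolation" alternative I have in mind: collapsing each 2x2 RGGB Bayer quad directly into one RGB output pixel (sometimes called superpixel binning). This is purely hypothetical on my part; I have no idea whether Nikon's firmware does anything like it, and the RGGB layout is just an assumption.

```python
import numpy as np

def bayer_quads_to_rgb(mosaic: np.ndarray) -> np.ndarray:
    """Collapse an RGGB mosaic (R at (0,0), G at (0,1)/(1,0), B at (1,1)) into RGB,
    one output pixel per 2x2 quad, with no interpolation between neighboring quads."""
    r = mosaic[0::2, 0::2]
    g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2.0  # average the two green samples
    b = mosaic[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)

mosaic = np.arange(16, dtype=float).reshape(4, 4)  # tiny stand-in for raw Bayer data
print(bayer_quads_to_rgb(mosaic).shape)  # (2, 2, 3): a quarter of the pixels, full color each
```

That is what I imagine a "crisper" 6MP mode could look like, as opposed to demosaicing to 24MP first and then scaling down.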