
I was reading an article comparing APS-C and full-frame sensors. It mentioned that full-frame sensors can have larger individual "photosites", and can therefore capture more light.

Here is the quote...

It is not the number of pixels that really counts - APS-C models such as Canon's EOS 7D and EOS 550D have almost as many photosites, thanks to their 18-million pixel count - but the size of the photosites is crucial to image quality.

Bigger individual light sensors capture more light - and this means that less electronic noise is created. You notice this most as you increase the ISO setting - with this noise creating a coloured mosaic pattern that is particularly noticeable in shadow areas.

Is this true? If I control for variables like aperture relative to the sensor, would a full-frame sensor receive a higher exposure?

mattdm
Scorb
  • This isn't really the same question as the question it is supposed to be a duplicate of. – Michael C Dec 04 '20 at 07:54
  • The other question restricts the aperture of the FF lens on a FF camera to maintain the same DoF that a lens with the equivalent focal length gives on an APS-C camera. This question seems to assume the same f-number and focal length, allowing for the differences in DoF. – Michael C Mar 26 '24 at 12:54

4 Answers

11

The full-frame sensor will not produce a brighter image under the same exposure conditions (same light in the scene, same focal length and f-number, same exposure time, etc.). It will collect more total light, but it will also spread that light over a proportionally larger area. The brightness, defined as the amount of light energy per unit area, will be the same. The advantage of the larger pixels is not increased brightness but reduced noise (because the random nature of light, what we call shot noise, is averaged over a larger area) and, if the pixels on the FF sensor are larger, increased dynamic range (due to the higher full-well capacity for the same thickness of silicon wafer).
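As a rough illustration of the shot-noise point, here is a minimal Python sketch (the photon flux and pixel sizes are invented numbers, chosen only to show the √N scaling, not measurements from any real sensor):

```python
import math

# Photon arrival is Poisson-distributed, so for a mean of n photons
# the shot noise is sqrt(n) and the SNR is n / sqrt(n) = sqrt(n).
photon_flux = 1000   # photons per square micron (assumed)
pixels = {
    "small pixel (2 um pitch)": 4.0,  # area in square microns (assumed)
    "large pixel (3 um pitch)": 9.0,  # area in square microns (assumed)
}

for name, area_um2 in pixels.items():
    n = photon_flux * area_um2        # mean photons collected per pixel
    snr = math.sqrt(n)                # Poisson shot-noise SNR
    print(f"{name}: {n:.0f} photons, SNR = {snr:.1f}")
```

The larger photosite collects 2.25× the photons here, so its shot-noise SNR is √2.25 = 1.5× better: the averaging-over-a-larger-area effect described above.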

Michael C
  • 175,039
  • 10
  • 209
  • 561
3

Exposure is per unit area — see Why does illuminance stay the same for a given f-stop even when focal length changes?. That means that if you measure exposure for a given shutter-speed-and-aperture on one half of the frame, it'll be the same on the other. So, that's what matters for exposure settings.

But, full-frame sensors do have an inherent low-light advantage.

Here's a way to look at it: digital sensors are 36×24mm for full frame, or 24×16mm for APS-C. When you take a correctly-exposed image, each square mm on each sensor gets the same amount of brightness. If you want to print at, say, 12×18" (mixing imperial and metric), you need to enlarge 12.7× from the full-frame sensor — or 19.05× from the APS-C one. A square mm from the full-frame camera becomes 1.27×1.27 centimeters, or 1.61cm². A square mm from the smaller-sensor camera becomes 3.63cm²! That means that in your final print, the same amount of light is spread out over 2.25× the area.
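If it helps, the arithmetic above can be checked with a few lines of Python (the sensor and print dimensions are the same ones used in the example):

```python
# Dimensions from the example above.
print_long_edge_mm = 18 * 25.4              # 18-inch print edge = 457.2 mm
sensors = {"full frame": 36, "APS-C": 24}   # long edge in mm

for name, edge_mm in sensors.items():
    enlargement = print_long_edge_mm / edge_mm  # linear magnification
    area_cm2 = (enlargement / 10) ** 2          # 1 mm^2 of sensor -> cm^2 of print
    print(f"{name}: {enlargement:.2f}x, 1 mm^2 of sensor -> {area_cm2:.2f} cm^2 of print")

print((36 / 24) ** 2)   # 2.25: same light spread over 2.25x the print area
```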

Of course, we don't actually print the image from the smaller sensor more darkly. Instead, we effectively amplify the brightness as we enlarge. Stretching the same amount of light over a larger area inherently gives worse results. When there's plenty of signal (lots of light), this generally doesn't matter, but when it's dim and there's a lot of noise, it does.

This is generally the reason that cameras with full-frame sensors are regarded as having about a one-stop advantage in ISO noise over APS-C. ("One stop" is 2×, of course.) In a digital camera, ISO is amplification, and for a given print size from images at the same ISO, full-frame images are literally amplified only about half as much.
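To put a number on "about one stop": the sensor-area ratio is (36×24)/(24×16) = 864/384 = 2.25, and log₂(2.25) ≈ 1.17 stops, so the commonly quoted one-stop figure is a slight round-down.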

The issue with the size of photosites is a different, technical one, not actually related to sensor size, and it is largely obsolete with modern technology. See Do megapixels matter with modern sensor technology? for more. Even with infinitely good technology, you can't beat the physical reality described above: bigger is always more light. However, in most practical cases, you can get to the point where smaller is good enough. (Otherwise, we'd all be carrying around 8×10" large-format digital cameras...)

mattdm
  • I have a question regarding your answer, because naturally when I read it I think about it in terms of: the same amount of light falling onto the larger sensor would in theory be more dispersed compared to the smaller sensor, but then, as you mention, the smaller sensor's image would need to be enlarged more. I would then assume that ultimately the dispersion of the light would be the same, leading to no difference. However, I then thought more about it and realized that perhaps this perception was too one-dimensional, similar to paint covering an area of canvas. – vannira Nov 27 '23 at 10:31
  • So I then started to think about it in terms of water filling containers of different areas but similar depths. If the depth was the indicator of the exposure of the area, we would see that the larger container has a larger volume despite the same exposure, which would explain the increased SNR. – vannira Nov 27 '23 at 10:37
  • However, the issue I have with this analogy is that when you stretch the two containers to the same size, the smaller container would have a lower depth, so I would think the output exposure would be darker, which also doesn't quite make sense. But the analogy would also coincide with the fact that "the same f-stop means projecting the same amount per unit area onto the sensor". I wanted to know your opinion on this, because I'm trying to grasp why larger sensors have better SNR at the same DoF, AoV and exposure (made equal using NDs). – vannira Nov 27 '23 at 10:50
  • I think this: "the same amount of light falling onto the larger sensor would in theory be more dispersed compared to the smaller sensor" is where you're tripping up. For a given aperture and shutter speed, the amount of light is the same per area, not for the whole image circle. Your second analogy is basically right — the exposure setting determines the water depth without regard to overall size. – mattdm Nov 28 '23 at 16:07
  • Yes I see, thank you for the help. I think I can visually and technically grasp the concept better now. – vannira Nov 28 '23 at 21:49
2

If we consider film photography, to correctly expose any particular film, the same number of photons must be absorbed by the film per unit area for any particular scene. This means that a full-frame 35mm camera will need to receive twice as many photons as a half-frame camera to achieve the same results under identical conditions. Note that this does not mean that the scene you are photographing needs to be twice as bright.
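(For the arithmetic: a full 35mm frame is 36×24 mm = 864 mm², while a half-frame negative is 18×24 mm = 432 mm². At the same illuminance per mm², that is exactly twice the total photons.)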

The same must be true for digital sensors having the same sensitivity (which is a direct analogue of film speed). Of course, how many of the photons are actually "used" by the sensor is another matter. So, about 2.25 times as much light (i.e. photons) will be required for a full-frame sensor as for an APS-C sensor having the same sensitivity. However, the lighting conditions of the scene that you are photographing will be the same.

Mick
  • Do full frame sensors have better low light exposure than aps-c? – Scorb Mar 21 '17 at 11:43
  • @ScottF Better in what way? In your question, you effectively ask about sensitivity (exposure), and nothing else. Do you really want to know about noise? – Mick Mar 21 '17 at 11:45
  • In low light the exposure would be higher. – Scorb Mar 21 '17 at 11:46
  • If by exposure, you mean the combined effect f-number, exposure time, and sensitivity, then they would be the same. The number of photons falling on the sensor (or film) is irrelevant as far as correct exposure is concerned. – Mick Mar 21 '17 at 11:49
  • i mean the resulting image will be brighter. I have read on multiple sources that full frame sensors have better low light performance. you and i can both interpret that however we like. – Scorb Mar 21 '17 at 11:54
  • This question gives the answer that you are looking for, if at some length. The short answer is "yes". – Mick Mar 21 '17 at 11:57
  • @Mick – Enhanced quality is likely with larger photosites. Larger photosites have a higher probability of receiving more photon hits, thus the level of the charge is greater. Higher charges require less amplification, which translates to lowered noise. Today's smaller imaging chips deliver higher quality compared to yesterday's larger chips. The RGB filters block better than 2/3 of the light. Newer technology has some unfiltered sites; software chooses what color to record. Cameras shrank! When I was a "cub", 8×10 film was in, then 4×5, then medium format, then 35mm – time changes our tools. – Alan Marcus Mar 21 '17 at 17:31
  • @AlanMarcus The Bayer mask does not block better than 2/3 of the light. The idea that only green light gets through the green parts of the mask, only red light gets through the red parts, and so on is incorrect. Even if that were true, the fact that half the Bayer mask is filtered for green light, which sits in the fat middle of the visible spectrum, would mean that more than 1/3 of the light is allowed to pass. Do RAW files store three colors per pixel, or only one? – Michael C Mar 21 '17 at 18:11
  • When we put a red filter in front of a lens while shooting B&W film is only a narrow band of red light allowed to pass? No. Red is just allowed to pass at a higher rate than green or blue, but some of all three still makes it through. Ditto with a blue or green filter. Ditto with a Bayer mask. Ditto with the human retina. – Michael C Mar 21 '17 at 18:14
  • @Michael Clark – How much light do you think the Bayer mask blocks? I stand on the fact that it must block better than 2/3 at each filtered site. I will bet that the filters used are about the same as the 25 (strong red), 58 (strong green) and 47B (strong blue). How else can the three primary colors be recorded as separate channels? Look at the spectrophotometric graphs of these filters! – Alan Marcus Mar 21 '17 at 18:52
  • The patent does not reveal the names of the filters – only that they adjust the photosites to match the response curve of the human eye. I conclude they are sharp-cutting filters that provide red, green and blue separation. Likely they leak a small percentage of unwanted frequencies. They can't leak much, or there would be too much interaction, and that would be hard to deal with. – Alan Marcus Mar 21 '17 at 21:10
  • Since the quantum efficiency of the most efficient cameras has been measured at 58-60%, obviously more than 1/3 of the light is getting through. – Michael C Mar 21 '17 at 22:32
  • Primary colors are NOT being recorded as discrete channels. Look at the spectral response curves of specific sensors. The "green" pixels also allow some red and blue light through. The "red" pixels allow some green and blue light through. Same with blue. All of the photons that get through are counted by each pixel as "one photon", not "one red photon" or "one green photon." What we call "color" is not a sensitivity to different wavelengths of light. It is how our brain interprets the difference in spectral response to different wavelengths of light by the three kinds of cones in our retina. – Michael C Mar 21 '17 at 22:37
1

The job of the photosite is to collect photon hits during the exposure. Each photosite contains a photodiode and a storage area to hold the charge as it accumulates during the exposure; the more photon hits, the greater the charge. When the exposure is complete, the charge is moved into storage. The charges are then marched out, row by row, into a transfer register, where each charge is read and converted to a voltage. Because these voltages are weak, they are amplified to a useful level. Finally, the voltage is converted to a digital signal.

The amount of amplification that is applied is a key factor. Low charge levels require more amplification, which is a function of the light level during the exposure and is intertwined with the ISO setting. If high amplification is applied, unwanted static is induced along with the signal. This is tantamount to turning up the volume on a radio: static rides piggyback on the good signal. In digital imaging, that static manifests itself as noise, a granularity akin to grain in the conventional film imaging process.
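A toy numerical sketch of why amplification cannot rescue a weak signal (every number here is invented for illustration; this is not a model of any particular sensor):

```python
import math

READ_NOISE_E = 5.0   # electrons of read noise in the readout chain (assumed)

# Shot noise is Poisson: for a mean of n collected electrons it is sqrt(n).
for photons in (10_000, 100):            # bright scene vs. dim scene
    shot_noise = math.sqrt(photons)
    total_noise = math.hypot(shot_noise, READ_NOISE_E)  # add in quadrature
    snr = photons / total_noise
    print(f"{photons:>6} e- collected: SNR = {snr:6.1f}")

# ISO amplification scales the signal and the noise it carries by the same
# gain, so it cannot improve this ratio; it only makes the noise visible.
```

With 10,000 electrons the SNR is about 100; with 100 electrons it drops to about 9, and no amount of gain applied afterwards changes that ratio.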

You avoid elevated noise by keeping the ISO moderate and by getting the exposure correct. While it is true that smaller image sensors have smaller photosites, the imaging chip continues to evolve, and the difference between a full-size (FX) sensor and a compact (APS-C) sensor becomes less noticeable.

P.S. Larger chips can have larger photosites, which collect more photon hits during the exposure. The charge in each photosite is higher, so less amplification is needed, and thus there is less noise.

Alan Marcus