
This article at Luminous Landscape claims that Nikon, Canon, and Sony silently boost ISO when their cameras are used with very fast lenses (f/1.2 and f/1.4 principally), the implications being that (a) you may as well use a slower lens and increase ISO yourself, and (b) this practice is shady.

I'm skeptical, but I had a hard time parsing the article. Are the authors on to something? Is this an unfounded accusation? Or did I misread the article in some other way?

Reid
  • I don't think this can really be answered without insider knowledge from the manufacturers -- it is just as likely that the analysis is flawed as it is that the camera manufacturers are actually doing something. – Rowland Shaw Oct 31 '10 at 18:57
  • It is true. Look at the experiment I made with my Canon EF 50/1.4, where the camera really did boost the ISO: http://photo.stackexchange.com/questions/43666/why-do-two-lenses-with-the-same-f-number-give-different-amount-of-light#comment73535_43675 – Sunny Reborn Pony Jan 08 '15 at 12:06

10 Answers


I am also very skeptical about this article. If that were true, then opening the aperture past a certain point should make no difference to the defocusing ability of the lens.

I tried a small experiment: these are pictures of a couple of street lights close to my home. I set everything to manual and used the exact same settings for all the pictures: same ISO, shutter speed and defocusing. Only the aperture was different from shot to shot.

blur discs

As you can see, the blur discs increase in size all the way to 1.4. Additionally, the surface brightness is about constant, which would not be the case if the ISO was changing.

Update 1: To address che's point, I tried the same experiment, but this time with the blur circles near the corner of the picture, instead of at the center. This is intended to maximize the light ray's angle of incidence. Here is a composite at f/1.4:

Composite of blur circles

The angle of incidence is maximized in the far corner, because those light rays come from the top-right edge of the aperture and fall on the top-left corner of the sensor.

There seems to be a slightly lower brightness in the corner compared to the center, but it is hard to say whether this comes from the sensor or the lens (or the classical cos^4 illuminance law). Dubovoy's article sounded like the sensor would be completely blind past some angle. I cannot assert from my experiments that there is no angle-dependent sensitivity in the sensor, but if there is, then it is far from being as strong as the article suggests. At least the claim that “the marginal light rays just don’t hit the sensor” seems to be a gross overstatement.
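As a rough sanity check on how strong any angle-dependent loss could plausibly be, here is a back-of-the-envelope calculation (a sketch under idealized thin-lens assumptions, not a measurement): the marginal ray half-angle is roughly arctan(1/(2N)), and the classical cos^4 law gives the corresponding illuminance factor at that angle.

```python
import math

def marginal_ray_angle(f_number):
    """Half-angle (degrees) of the marginal ray cone for an ideal thin lens."""
    return math.degrees(math.atan(1.0 / (2.0 * f_number)))

def cos4_factor(angle_deg):
    """Classical cos^4 illuminance falloff for a ray at the given incidence angle."""
    return math.cos(math.radians(angle_deg)) ** 4

for n in (1.2, 1.4, 2.0, 2.8):
    a = marginal_ray_angle(n)
    print(f"f/{n}: marginal ray at {a:.1f} deg, cos^4 factor {cos4_factor(a):.2f}")
```

Even at f/1.4 the marginal rays arrive at under 20 degrees and the cos^4 factor stays near 0.8, i.e. a fraction of a stop for the most oblique rays only, which is consistent with the modest corner darkening seen in the composite above.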

Update 2: I had some correspondence with the author of the article, Mark Dubovoy (not Michael Reichmann, my mistake). After trying to dismiss my evidence with bad arguments (and after my lecturing him on geometrical optics, which got him upset), he now grudgingly acknowledges that “It may very well be that with your camera and with your lens the issue is negligible.” But he still stands by his position, believing this issue may still affect “a significant number of camera/lens combinations.”

For those of you who would like to know whether their camera and lens are among this “significant number”, here is how to do a quick test:

  • Look for a strong light source that is small and distant. A street light can do.
  • Defocus the lens all the way to the minimum focusing distance. The important point is that the blur disc must be a lot larger than the focused image of the source.
  • Take a series of pictures at different apertures, keeping the exact same settings of focus (most important!), shutter speed and ISO.

If the blur discs increase in size with increasing aperture, then you are fine. You should then notice that the discs have the shape of the aperture (you can count the number of blades). If the size of the blur discs stops increasing past a given aperture, then Mr. Dubovoy is right, at least for your camera and your lens.
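Under plain geometrical optics, the expected outcome of this test can be sketched numerically (the 50 mm focal length is just an illustrative assumption): with focus fixed, the blur disc of a distant point light scales with the entrance pupil diameter D = f/N, so each full stop wider multiplies the disc diameter by roughly sqrt(2).

```python
# Expected blur-disc scaling if nothing is being clipped:
# the disc diameter is proportional to the entrance pupil D = f/N.
focal_length_mm = 50.0  # hypothetical 50 mm lens

for f_number in (2.8, 2.0, 1.4):
    pupil_mm = focal_length_mm / f_number
    ratio = pupil_mm / (focal_length_mm / 2.8)  # disc size relative to f/2.8
    print(f"f/{f_number}: entrance pupil {pupil_mm:.1f} mm, "
          f"disc {ratio:.2f}x the f/2.8 disc")
```

If your camera and lens follow this doubling from f/2.8 to f/1.4, the full aperture is reaching the sensor.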

Edgar Bonet
  • The article isn't suggesting that the camera stops opening the aperture after a point and compensates by upping the ISO, but that light loss at wide apertures due to low angles of incidence is compensated for by increasing the ISO. – Matt Grum Oct 31 '10 at 21:30
  • @Matt: The article says “The DxO measurements to date prove that the marginal light rays just don’t hit the sensor.” This means that at some point, even if I keep opening the aperture, the extra light rays the lens is letting in do not hit the sensor. This implies that blur circles stop increasing in size. And if there were such a light loss, we should be able to see it: the edges of the blur circles (high angle of incidence) would be darker than the center (normal incidence). – Edgar Bonet Oct 31 '10 at 22:26
  • True, you'd expect to see some falloff across the CoC if marginal rays were being attenuated. TBH I'm not entirely sure what the article is claiming with regard to incidence angle etc., as it doesn't state what they're actually measuring. I'm going to look at it in more detail in the morning. – Matt Grum Nov 01 '10 at 00:11
  • How about turning the lens a bit loose, so that the electronic contact is broken and the camera can't know which lens it is mounted with? Then take a shot at max aperture and compare its luminosity with a shot taken at max aperture with the lens correctly mounted. See this question & answers. – Esa Paulasto Oct 25 '13 at 17:20

There is a well-known effect called vignetting. It depends on lens construction (faster lenses suffer more), and also on how well the sensor is able to capture off-axis light rays. You can see the measurements in almost all lens tests; for example, the EF 24-70 f/2.8 can lose as much as 2 EV on a full-frame camera.

Recent Canon DSLRs have a function called Peripheral Illumination Correction, which brightens the corners in post-processing. If you want, you can interpret it as "silently boosting ISO", and if you don't like it you can turn it off in the menu.
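For illustration, a correction of this kind can be modeled as a radial gain map multiplied into the image. This is a minimal numpy sketch of the idea only, assuming a cos^4-style falloff with a 30-degree corner incidence angle; it is not Canon's actual algorithm.

```python
import numpy as np

def vignetting_gain_map(height, width, corner_angle_deg=30.0):
    """Radial gain map that brightens the corners, similar in spirit to
    in-camera peripheral illumination correction (illustrative model only)."""
    y, x = np.mgrid[0:height, 0:width]
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    # Normalized radius: 0 at the center, 1 at the extreme corners.
    r = np.hypot((y - cy) / cy, (x - cx) / cx) / np.sqrt(2)
    # Assumed cos^4-style falloff, reaching corner_angle_deg at the corners.
    falloff = np.cos(np.arctan(r * np.tan(np.radians(corner_angle_deg)))) ** 4
    return 1.0 / falloff  # gain that undoes the modeled falloff

image = np.full((100, 150), 0.5)           # flat gray test frame
corrected = image * vignetting_gain_map(100, 150)
print(corrected[50, 75], corrected[0, 0])  # center stays ~0.5, corner is lifted
```

Whether such a correction is "cheating" or a service is exactly the debate in the comments below; the point is that it is an explicit, per-pixel gain, not a hidden global ISO change.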

che
  • +1 - It's worth noting that if you shoot in RAW then this post-processing is done in the RAW editor, and no information is lost. – Justin Nov 01 '10 at 05:41
  • Reichmann's article is not about vignetting. It's about an angular dependence of the sensor's sensitivity that could lead to some vignetting. However, the author focuses on a light loss that should affect the whole field with lenses faster than f/2. Vignetting, on the other hand, is a brightness variation across the field that depends more on the lens than on the sensor, and can be experienced even with lenses as slow as f/2.8. – Edgar Bonet Nov 01 '10 at 14:55
  • I wonder how angular dependence of sensor sensitivity could lead to light loss that is uniform across whole image. – che Nov 01 '10 at 15:56
  • I did not say the light loss would be uniform, I only said it would affect the whole image. Take a really fast lens and look at the light cone that hits the center of the sensor. The light ray passing through the center of the aperture (so called "principal" ray) will hit the sensor at normal incidence. Rays passing close to the edges of the aperture ("marginal" rays) will hit with an oblique angle, thus being less efficient. The effect can indeed be non-uniform except with telecentric lenses. Well, that's my reading of Reichmann's point, not that he convinced me... – Edgar Bonet Nov 01 '10 at 16:44
  • Yes. And what I'm saying is that the non-uniform part of the correction is already plainly visible in the camera menu, so it can hardly be called cheating. And if there's some uniform part, independent of which part of the image you are looking at, the question is why you would secretly raise ISO when you can simply account for that effect in the AE calculations. – che Nov 01 '10 at 19:34

First off, I am VERY skeptical of the results provided by DxOMark. I have never understood their numbers, and I don't really think their results reflect real-world performance or behavior. They are probably extremely accurate, purely scientific results within their own domain, but I don't think that is helpful to normal people doing normal photographic work. My own rather cheap Canon 450D, with its pretty basic, entry-level sensor, was rated as having 10.8 stops' worth of dynamic range and 21.6 bits of color information. I know that neither of those figures holds up in practice, as I most certainly do not get 21.6 bits of color information, and I have to work pretty hard to barely get 9 stops of dynamic range... I usually get 7-8 stops at best.

That said, I started getting skeptical with the article when I read the following:

When you look at the structure of CMOS sensors, each pixel as basically a tube with the sensing element at the bottom. If a light ray that is not parallel to the tube hits the photo site, chances are the light ray will not get to the bottom of the tube and will not hit the sensing element. Therefore, the light coming from that light ray will be lost. It appears from this graph that when using large aperture lenses on Canon cameras, there is a substantial amount of light loss at the sensor due to this effect. In other words, the "marginal" light rays coming in at a large angle from near the edges of the large aperture are completely lost.

[Emphasis added]

Outside of considerably older digital cameras, all digital sensors these days use microlenses above their pixels. These microlenses are designed to direct off-axis light down into the pixel well. The "marginal" light rays coming from large angles are not completely lost. Some are reflected, some are captured.

For all of DXO's talk about the accuracy of their tests, and their down-talk of camera manufacturers "cheating", they don't really tell their own customers how their own product really works. How exactly are they measuring this light loss? Is it truly accurate?

In my experience (admittedly I have only used Canon bodies, so I can't speak for others), if I set my ISO to automatic, I get some oddball ISO values in my pictures based on the EXIF data: ISO 160, 240, 320, 480, etc. If I set my ISO to a specific value, it is always that value in the EXIF data. Granted, it is certainly possible for a camera manufacturer to truly try and cheat - to tell you it is using ISO 100 when in actuality it is using ISO 200 - but it is a little hard to believe they would actually explicitly change the EXIF data to hide that fact from their customers.

It should also be pointed out that ISO "settings" and actual analog readout levels are never in sync in the first place. On a Canon body, ISO 100 is close to that, but I've seen various tests that indicate the analog readout is anywhere from 80 to 120 depending on the sensor. There have been similar tests for Nikon sensors as well (which probably apply to all Sony sensors, given that's what Nikon currently uses).

I don't think the story is as cut and dried as camera manufacturers gaming the system. There are physical difficulties in manufacturing sensors that prevent the analog readout from exactly matching the chosen digital ISO setting, fine microlens structures that mitigate a lot of this supposed light loss at the photosite, and fairly advanced algorithms that, to my knowledge, work to maintain the accuracy of the settings you have chosen, not the other way around.

[NOTE: I would like to provide a more accurate description of what DxOMark actually does; however, predictably, their site is not accessible at the moment. I'll have to do some research to see if they offer any detailed specifications or other information about exactly how their measurements work, to see if DxOMark are the ones trying to "game the system" as a marketing ploy.]

jrista
  • 21.6 color bits does seem plausible... that's 7.2 per channel, which certainly is in the realm of possibility. – Reid Oct 31 '10 at 19:36
  • My sensor is only a 12-bit sensor, though. Each digital sensor outputs data at a specific bit depth, and outside of maybe some of the Phase One medium format sensors which I believe are 24 bit, no sensor that I know of actually outputs more than 16 bits of color data in RAW. – jrista Oct 31 '10 at 20:04
  • First, the microlens might mitigate some light loss but it does not eliminate it. This is clearly shown by the test results. It is this residual light loss that Luminous Landscape are talking about. Second, compensating for the light loss I would not call 'gaming' the system but rather a sensible measure to ensure the photographer gets the exposure he expects. Third, I agree that there should be disclosure and explanation. That would avoid misunderstanding and suspicion. – labnut Oct 31 '10 at 20:38
  • @labnut: I never said microlenses would eliminate it, just that they prevent off-axis rays from being completely lost. "Some are reflected, some are captured." While I do believe that camera manufacturers do some basic things to make sure the settings you select are accurately applied, I don't think it goes far enough to be considered malicious or deserving of extensive explanation to the average consumer. I would bet that any such tactics ARE indeed specified in technical documents from each manufacturer, for those interested in digging and finding the info. – jrista Oct 31 '10 at 20:48
  • @jrista: I agree with you in that the article's comment "...camera manufacturers 'game the system'” seems to be unwarranted and over the top. I do however tend to trust the measurement results on the grounds that this is the best available evidence (until better evidence comes along) – labnut Oct 31 '10 at 21:31
  • @jrista, that's 12 bits per color, so if you were counting total bits per "pixel" (keeping in mind that each pixel is actually monochromatic), you could plausibly claim 36 bits of color. I haven't looked at the DxO specs so maybe they really are claiming 21.6 bits per channel, but I suspect not because it's (as you say) rather outlandish to do so. – Reid Oct 31 '10 at 21:55
  • @Reid: Ah, you might be onto something there. I was counting it as 21.6 bits per pixel. Even at 21.6 bits total, that's, as you say, only 7.2 bits per channel, which isn't even as good as a 24-bit color image. I know that I get 12 bits per pixel, or 36-bit images, when using RAW with my camera. It still indicates that DxOMark is off the mark, and their results are very suspect. – jrista Oct 31 '10 at 23:01
  • DXO are open about the methodologies, you can read about it in great detail if you like: http://www.dxomark.com/index.php/en/Our-publications/DxOMark-Insights/Detailed-computation-of-DxOMark-Sensor-normalization the data regarding SNR etc. appears off on first inspection until you realize it's normalised to 8MP to account for differing resolutions, so it's not an absolute measurement. Why do you say you only get 9 stops DR from the 450D, how did you reach this figure? – Matt Grum Nov 01 '10 at 00:06
  • @Matt: "To eliminate bias and rounding errors, DxO Labs accurately measures ISO sensitivity and uses it as the basis for plotting all other characteristics" - The question is, exactly how? "Resolution varies from camera to camera, but ultimately images will be compared on similarly-sized screens or prints." - Similarly-sized screens/prints can be very subjective in their own right. Still not enough information that truly tells us exactly HOW DxOMark actually measures things, or how objective those measurements truly are... how safe from mechanical, digital, or interpretation bias they are. – jrista Nov 01 '10 at 00:44
  • @Matt: Regarding my own measurements, I meter my scenes a LOT before taking a photograph. I meter the brightest parts, the darkest parts, and a variety of middle-toned parts using spot metering before taking a picture. I usually take a few sample shots before making my final settings and snapping the photograph I intend to keep. Between differences in metered exposure settings for the brightest and darkest parts of a scene, differences in histograms for different parts of a scene, etc., in my estimates I get on average 8 stops (max 9 stops) of USABLE DR with the "Neutral" 450D tone curve. – jrista Nov 01 '10 at 00:48
  • @jrista: "The question is, exactly how?" They measure sensitivity by finding [experimentally] at what exposure value the sensor becomes saturated, according to ISO standard 12232. "Similarly-sized screens/prints can be very subjective in their own right" what they mean by similarly sized prints is that someone with a 20mp camera doesn't view their photos on a screen four times larger than someone with a 5mp camera, if you read on they then normalize to exactly 8 megapixels which is not subjective at all! IMO you can trust DXO at lot more than figures from your camera manufacturer... – Matt Grum Nov 07 '10 at 12:42
  • @jrista "I know that I get 12 bits per pixel, or 36bit images, when using RAW with my camera. It still indicates that DXO-Mark is off the mark, and their results are very suspect" when they say 21.6 bit per pixel they mean 21 bits of information not 21 bits of data. Most of the 4294967296 RGB value combinations you get with 36bit colour are impossible, due to the fact that for example red pixels are also sensitive to green and blue light, and this overlap reduces the number of different colours you can expect to get from the sensor. – Matt Grum Nov 07 '10 at 12:52
  • @jrista "Regarding my own measurements..." you have estimated the dynamic range based on your own definition of what is "usable", so you can expect the results will differ. DXO measures DR by finding the point at which the SNR is equal to 1 (i.e. the last point at which the signal is greater than the noise). This is more scientific than asking people what they consider to be usable. However I would expect the DXO scores to correlate strongly to what you experience so you can still use them, but remember to subtract 3 stops to get to what you consider usable. – Matt Grum Nov 07 '10 at 12:59
  • "Most of the 4294967296 RGB value combinations you get with 36bit color are impossible". Unless I am mistaken, a 36bit image offers a total of 68,719,476,736 distinct RGB values. That is the number of distinct shades of each color (there are only about 2-3 colors detectable by the human eye, but the eye can detect far more shades of each color). I agree, you are not going to get perfect color with a Bayer array. DXO does not clearly state what that "21.6 bits of information" is. You usually don't rate an analog signal in bits, so if they are measuring an analog, they need a better unit. (dB?) – jrista Nov 07 '10 at 19:33
  • Regarding my own measurements, they seem to be in line with what most other photography review sites (such as dpreview.com) state. I get between 7-8 stops of dynamic range on the 450D. If what you say about subtracting three stops from DXO's measurements is true, then their numbers are also in line with everyone else (10.7 stops - 3 stops = 7.7 stops, or between 7-8 stops of DR.) I am not sure where DXO states that you need to subtract 3 stops to get a real-world value for DR...if that is the case, they need to make that more obvious, as just looking at their raw data is misleading. – jrista Nov 07 '10 at 19:37
  • Regarding image size...if they "normalize" all images to exactly 8mp, then the images they compare are not objective. By scaling down images, they are applying some kind of interpolation and filtration to the raw sensor output. If their measurements are based on post-scaled images, then I can't help but question their results. If they crop, that is a different story, as cropping would at least retain as much information from the sensor as possible. Even with cropping, the image contents between sensors will not be identical, so you have another problem. Again, exactly what DXO does here is... – jrista Nov 07 '10 at 19:40
  • ...rather vague. They state many times that they normalize all images to an 8mp image 'printed' on an 8x12 (20cm x 30cm) image at 300dpi. They also state that it is "exactly" 8mp, however an 8x12@300dpi print is 8.6mp (3600x2400). If DXO has a page somewhere where they clearly and precisely state exactly how they do things, and it all adds up, they can clear their name with me, and I'll accept their results. But after digging through their site for a while, I have not found such a page. (Please send a link if you know of such a page, I would be very interested in reading it.) – jrista Nov 07 '10 at 19:43
  • “For all of DXO's talk about the accuracy ... they don't really tell their own customers how their own product really works”. +1 – sastanin Nov 10 '10 at 11:40
  • @jetxee there's pages of info about what the measurements mean and how they are taken, it's accessible from the "learn more" link right on the main page. See http://www.dxomark.com/index.php/en/Learn-more/DxOMark-database/Measurements/Noise and http://www.dxomark.com/index.php/en/Learn-more/DxOMark-database/Measurements/Light-transmission and http://www.dxomark.com/index.php/en/Learn-more/DxOMark-database/Measurements/Resolution and http://www.dxomark.com/index.php/en/Learn-more/DxOMark-database/Measurements/ISO-sensitivity + http://www.dxomark.com/index.php/en/Learn-more/Glossary for starters – Matt Grum Jan 27 '11 at 18:47
  • @jrista "DXO does not clearly state what that "21.6 bits of information" is" they do actually: As with tonal range, color sensitivity is a number with no unit, so instead we consider Log2 CS, which represents the number of bits necessary to encode all distinguishable color values. They also define distinct colours as ones where the difference between the colours exceeds the colour noise. Here's the relevant page: http://www.dxomark.com/index.php/en/Learn-more/DxOMark-database/Measurements/Color-sensitivity – Matt Grum Jan 27 '11 at 18:56
  • @jrista I am not sure where DXO states that you need to subtract 3 stops to get a real-world value for DR they don't, I'm saying that your figures seem to differ by three stops. As everyone has their own definition of what "usable" DR is. But once you know how your own sense of DR differs you can use the DXO-mark figures to compare cameras and adjust your expectation of "real world" DR accordingly. – Matt Grum Jan 27 '11 at 19:04
  • @jrista "They state many times that they normalize all images to an 8mp image 'printed' on an 8x12 (20cm x 30cm) image at 300dpi. They also state that it is "exactly" 8mp, however an 8x12@300dpi print is 8.6mp (3600x2400)" you're misrepresenting what they say here, the exact quote from DXO is: "we chose a reference resolution equal to 8 Megapixels, which is a bit less than a 12" x 8" print with a 300dpi printer" you are correct that 12x8 300dpi is 8.6MP, but note the use of the words "a bit less"! – Matt Grum Jan 27 '11 at 19:10
  • ...cont Resolution affects perception of noise, and normalization of variables not under investigation is absolutely standard as part of any scientific process. I'm at a loss as to why resampling images would invalidate the results, as this is exactly what a printer would do when given two images of different resolutions. Finally there is no one page which explains everything but all the information is there if you look for it. – Matt Grum Jan 27 '11 at 19:14
  • @Matt: Well, this is a bit of an old thread, so I'll skip past most things. My earliest and biggest beef with DXO was the simple fact that they were/are using a PRINT to measure SENSOR DR. Print DR is FAR less than sensor DR, and several degrees removed, so I don't see how they could possibly generate any useful information from a print that remotely represents the sensor that took the picture. "Normalizing" the image before printing is just another factor in separating the value of a print evaluation as a useful measure of sensor DR. – jrista Jan 27 '11 at 21:31
  • @Matt: Hover over the "Print" button on this page: http://dxomark.com/index.php/en/Camera-Sensor/Compare-sensors/(appareil1)/483|0/(appareil2)/680|0/(onglet)/0/(brand)/Canon/(brand2)/Nikon. "This tab displays the print performance measurement values and graph derived from a RAW image after a normalization step that transforms all images, regardless of original resolution, to an 8Mpix image. The print size we have chosen is a standard 300dpi 8"x12" format, which corresponds to about the physical size of an 8Mpix image printed at 100% magnification." – jrista Jan 27 '11 at 21:32

If I understand Mr. Dubovoy correctly, he forwards the idea that by increasing the aperture size, the incident angle on the sensor increases (faster lens with the same focal length), and that with a larger incident angle the sensor detects less intensity. To suggest that the size of the aperture affects the incident angle at the sensor is technically incorrect - ridiculous, even. The incident angle at the sensor is determined by the geometric relationship between the focal length and the sensor size. The size of the front aperture has no effect on the incident angle (assuming equivalent focal length and sensor size). If he is suggesting something else, the article is so poorly written that I have no idea what he is trying to say.

Further he goes on to state that the increased angle causes ‘marginal’ rays to be lost off the sensor affecting the depth of field. He states the loss of this information does not produce the desired out of focus blur. Finally he says, considering all of this, one should just save the money and buy smaller lenses.

Boy did I waste big bucks for that big glass. All that increased bokeh that I thought I was seeing is just my failing eyesight. I’ll blame Adobe for that. Too much keyboard time and not enough time out in the uV rays. Them (sp) uV’s scatter at the retina and produce great focus somehow, I’m sure.

If any of this off-axis attenuation theory were true, it would be observed as increased vignetting with faster lenses, as others have suggested. Them (sp) sinister digital camera companies going around changing ISO without telling us. Sue them for hurting our feelings. Class action, that's the way. Lawyers get big bucks while us minions get $1.50 after filling out a form and using a 44 cent stamp. Oh, I forgot about the equivalent exposure tests I performed on film comparing my big glass against old small lenses. The ISO didn't change with the aperture size - or did it? The film must have molecules in it that determine the aperture size and compensate the ISO. The film companies are involved in the conspiracy too. Get them all - more money for the lawyers.

DxO Labs needs to be careful who they authorize to use their material. I don't understand their data and what it is supposed to prove. I would think they would fully explain the data on their web site and clarify this article. Until then I consider the third symbol in their name to be a zero. That would make their name D times 0, or in other words, Zero Labs.

user2125

There IS some effect there, and it's easy to see it for yourself if you own a fast lens.

Put your fast lens on the camera and put the camera on a tripod in a controlled lighting environment. Take a picture in manual mode using the maximum aperture of your lens. Now turn the lens in the mount (it doesn't have to be far, just enough to break communication with the camera) and take the exact same picture again.

The second picture will be less bright, because the camera doesn't know you're using a fast lens and hence doesn't apply the correction. The difference is easy to see if you expose for some blown highlights: the blown area will be bigger in the brighter of the two pictures. The difference gets bigger the faster your lens is. A 50mm f/1.8, for instance, does show the effect very clearly, but it is not that strong.

aphotog

I wonder why camera manufacturers would make things that complicated. If you're in Av mode with fixed ISO and fixed aperture, the camera can simply use a shutter speed that correctly exposes the photo (including compensation for lower light transmission). There's no need to secretly raise ISO.

che
  • I think you're missing the point. Which is... if you're precisely in Av mode, lens wide open at f/1.2, ISO100, whatever shutter speed... because the sensor design is not 100% efficient, you will get slight under exposure. Which you would notice. So they bump up the sensor gain ("ISO"). The reason people are a little unhappy about it, as explained in the article, is that you're not getting all your f/1.2, which is a shame if you pay for it. However the effect appears marginal, so marginal in fact that no one noticed it before. – philw Nov 02 '10 at 22:17
  • My point is that if manufacturers know about sensor inefficiencies, it makes more sense to bump shutter speed than ISO in Av mode. – che Nov 03 '10 at 09:45

I read that article, and I'm not sure I buy it. DxOMark provides some interesting numbers, but they don't mean a whole lot in the real world, I think, and without a lot more details on their testing process, we're pretty much taking their word for it. In any case, even if the camera makers are "cheating" a little, I'm not sure I care. ISO in digital is just a marker on the dial for gain on the sensor and is, in some ways, a holdover that allows us to compare to film equivalencies. It could just as easily be a knob that we turn until we're satisfied with the exposure value. I can see that effect when the camera selects the ISO anyway, as I too get some odd values.

I have to wonder had film never existed, and we were just at the dawn of photography with digital being the option, would ISO even exist?

Joanne C

I suspect we have a software developer trying to make some noise to attract attention to their software - which I have found to be less than useful for my professional work.


I suspect that the writer of that article does not take into account the fact that the irradiance on the sensor is really proportional to 1/(4Fnum^2+1) and not to 1/(4Fnum^2). This difference is negligible for Fnum >= 2.8; however, for smaller Fnum one must take it into account.

The ratio (4Fnum^2+1)/(4Fnum^2) explains at least some of the difference between what the author expected and what was measured.
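The size of this discrepancy is easy to quantify; here is a quick sketch comparing the two models in stops:

```python
import math

def stops_difference(f_number):
    """Exposure difference, in stops, between the naive 1/(4N^2) model
    and the 1/(4N^2 + 1) model for on-axis irradiance."""
    naive = 1.0 / (4.0 * f_number ** 2)
    corrected = 1.0 / (4.0 * f_number ** 2 + 1.0)
    return math.log2(naive / corrected)

for n in (1.2, 1.4, 2.0, 2.8):
    print(f"f/{n}: {stops_difference(n):.2f} stop(s) less light "
          f"than the naive model predicts")
```

The gap is about a quarter of a stop at f/1.2 but under a twentieth of a stop at f/2.8, which is why the effect only shows up with very fast lenses.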

Ofer

  • Where did you get the 1/(4Fnum^2+1) from? Looks like you are integrating the irradiance in image space using the paraxial approximation to derive angles. The paraxial approximation is not good for fast lenses. The Abbe sine condition is a more reasonable assumption and yields the usual 1/(4Fnum^2) factor. – Edgar Bonet Nov 16 '10 at 14:52

OK, do this simple test. Take a black frame in three configurations: with just the body cap on the camera, with an f/1.4 or faster lens mounted, and with a slow f/4 lens mounted. Measure the SNR of each black frame. You DO NOT get the same result in all three cases: the first and last tests match, but the middle test gives a different result, and the RAW file comes out different. Thus the manufacturers ARE applying secret gain boosts for fast glass. The amount applied varies from body to body.
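The comparison could be scripted along the following lines. This is only a sketch with simulated stand-in frames (in practice one would load actual raw files, e.g. with the rawpy library), so the noise levels here merely illustrate what a hidden gain boost would look like in the read-noise statistics, not a real measurement:

```python
import numpy as np

def black_frame_noise(raw_data):
    """Summarize a black frame: if the camera applied a hidden analog gain,
    the read-noise standard deviation should scale up with it."""
    data = raw_data.astype(np.float64)
    return data.mean(), data.std()

# Simulated stand-ins for the three black frames described above
# (in practice, load real raw files, e.g. rawpy.imread(path).raw_image):
rng = np.random.default_rng(0)
body_cap  = rng.normal(512, 8.0, (100, 100))   # baseline read noise
fast_lens = rng.normal(512, 11.3, (100, 100))  # ~1.4x noise if gain were boosted
slow_lens = rng.normal(512, 8.0, (100, 100))   # matches the body-cap frame

for name, frame in [("body cap", body_cap),
                    ("f/1.4 lens", fast_lens),
                    ("f/4 lens", slow_lens)]:
    mean, std = black_frame_noise(frame)
    print(f"{name}: mean {mean:.1f}, read-noise std {std:.2f}")
```

If the f/1.4 black frame really shows a higher noise standard deviation than the body-cap and f/4 frames at the same nominal ISO, that would be consistent with this answer's claim of an extra gain applied for fast glass.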