1

ISO has been around since analogue photo times. Back then it had to be uniform for an obvious reason: you could never know in advance what would be captured on each part of the film. So you went uniform (ISO 100, ISO 200), selected the appropriate stock for the given conditions, and did your best. Fast forward to today and we have the same approach, despite the fact that in digital photography "you know" — or rather, the image sensor knows — what is captured by each pixel or group of pixels. So here's the question: why couldn't the camera measure sensitivity per area, or even per pixel, and use non-uniform ISO settings across the sensor? What prohibits having the ISO level set automatically per pixel?

user56548
  • 19
  • 2

3 Answers

3

Stop and think about what you are saying. Assuming one were willing to spend the prohibitive cost to do what you are proposing, what would the result be? If every pixel were adjusted using ISO so that it is exposed "properly" you would wind up with an image where every pixel is the same brightness! There would be zero contrast. None. You would not be able to see details of anything that is a uniform color. And don't even think about going monochrome! You would wind up with an image that looks like an 18% gray card.
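The answer's thought experiment can be simulated in a few lines. This is a sketch of my own (Python/NumPy, not from the answer): if a per-pixel "auto ISO" chooses each pixel's gain so that its output hits one target level, every trace of scene contrast disappears.

```python
import numpy as np

# Simulated scene radiance: a gradient from dark to bright (arbitrary units).
scene = np.linspace(0.1, 1.0, 16).reshape(4, 4)

# "Per-pixel auto ISO": pick each pixel's gain so its output lands on
# one target level (middle gray, roughly the 18% card the answer mentions).
target = 0.18
gain = target / scene
image = scene * gain

# Every pixel ends up at the same value: a featureless gray card.
print(np.allclose(image, target))  # True
```

The peak-to-peak range of `image` is essentially zero, which is the "zero contrast" outcome the answer describes.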


Michael C
  • 175,039
  • 10
  • 209
  • 561
  • hm... maybe i've formulated incorrectly if that's the perception. I'd rather imagine the results along the HDR-like outcome where highlighted areas would have 100 ISO applied and dark areas would be treated with, maybe, ISO 800 while mid one would get made with ISO 200. – user56548 Sep 09 '16 at 08:59
  • in other words, as HDR is achieved by combining outcomes through altering exposures - in ISO based scenario it would be achieved through combining multi-ISO application. – user56548 Sep 09 '16 at 09:24
  • HDR, at least in a digital environment to which you seem to be referring, is much more complicated and involved than what you state. Ultimately the tone mapping that must be done is more a manipulation of local contrast versus overall contrast. – Michael C Sep 09 '16 at 09:31
  • There would be little to gain by shooting the bright areas at ISO 100 and the dark areas at ISO 800 - because the areas in which ISO 800 creates the most noise are the shadows! Brighter areas look perfectly fine at ISO 800. If you amplify one part of the picture more than another, then you still have the problem that some parts of the dark areas that are still darker than the darkest parts of the bright areas will look brighter than those parts of the bright areas. – Michael C Sep 09 '16 at 09:34
  • The purpose of HDR isn't ultimately to reduce noise. It is to take a scene with a wider dynamic range than can be captured by current cameras and/or displayed by current display mediums and squeeze that additional dynamic range into the space allowed by the display medium. Increasing ISO doesn't increase dynamic range. It decreases dynamic range because it takes half as many photons to reach the equivalent of full well capacity. – Michael C Sep 09 '16 at 09:37
  • agreed, it is much more complicated process that i could ever dare to represent. However, it is still exposure based for sourcing the information to work with. I'm simply saying that while i generally understand aperture and exposure as relevant and applicable in unified way to overall image capture i can't (yet) to understand reasons for ISO being applied the same. In film - it is clear for obvious reasons cos otherwise you'd need to create dedicate film for each capture. But for digital - there's still something that goes against the logic in my mind. – user56548 Sep 09 '16 at 09:38
  • To take advantage of various ISO sensitivities on discrete parts of the sensor you'd have to find a way to alter the exposure time on a pixel by pixel basis. So now you are requiring sensors that can provide global electronic shutter. Even when the entire sensor collects photons for a universal time period they are very expensive to produce. The computing power to do what you propose would raise the cost of such a camera astronomically. Creating a sensor chip that could follow the instructions resulting from such computations would probably exceed the budget of most third world countries. – Michael C Sep 09 '16 at 09:48
  • Nah... i'd not touch exposure. Simply put i'd like to get auto iso measure not for whole image but parts of it or even, if possible, pixels and set those individual setting set. So we'll have area with 50, most of it say 100 and few parts at 200. That's the best i could describe i guess :) – user56548 Sep 09 '16 at 14:03
  • You'd have to adjust exposure time or you will have different exposure levels for the different parts of the photo... – Michael C Sep 09 '16 at 18:01
  • Right, i will have different exposure levels due to different ISO values. As i could make a photo with same A, same S but different ISO levels - that what i would envision as outcome. Imagining a simplified matrix 9x9 i'd have first top line at ISO 100, then 400 then 800. Reason being parts of the image located in shade or less lit areas and hence being uplifted through ISO. – user56548 Sep 13 '16 at 12:08
0

The ISO of a sensor is tied to the analogue gain (voltage) it is run at. To modify this for just an area or a single pixel would require a lot of complex wiring (how can the manufacturer guess that you want to increase the ISO of pixel 432,288?) plus a lot of added interfacing and programming. In simple terms, such a capability would be a very costly feature that would rarely be used.
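What the question asks for can, to a first approximation, already be done in software after capture by applying a gain map to linear raw data. A sketch (my own illustration, assuming a single-channel linear raw array; the thresholds and gain factors are arbitrary):

```python
import numpy as np

# Hypothetical linear raw data (one channel), normalized to [0, 1].
rng = np.random.default_rng(0)
raw = rng.uniform(0.0, 1.0, size=(6, 6))

# A gain map standing in for "per-region ISO": shadows boosted 8x
# (roughly ISO 800 vs 100), midtones 2x, highlights left at base gain.
gain = np.where(raw < 0.25, 8.0, np.where(raw < 0.5, 2.0, 1.0))
boosted = np.clip(raw * gain, 0.0, 1.0)

# The catch: the gain amplifies shadow noise by the same factor, and the
# tonal relationships between differently-amplified regions are broken.
print(boosted.shape)
```

This is essentially what shadow-lifting in a raw converter does, which is one reason dedicated per-pixel analogue circuitry buys little over post-processing.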

Skaperen
  • 772
  • 4
  • 9
  • Ok. Couldn't there be set a reference level set for pixel or area and once measured - voltage is appropriately adjusted. It just seems that uniform gauge across whole image is so... simplistic and old school taken blindly from film days. – user56548 Sep 09 '16 at 08:08
  • And, btw, it is already done today by auto iso mode. Question is - why applied to all pixels or whole frame instead of areas or even pixels. – user56548 Sep 09 '16 at 08:12
  • Auto ISO only takes the total light metered in the scene and sets the ISO (e.g. sensor amplification) following a set of rules to insure the Tv and Av used stay within preset ranges. – Michael C Sep 09 '16 at 09:51
  • great, that's auto iso. what i'm saying - it still could be (logically) making sense to do it per pixel or, at least, pixel groups. – user56548 Sep 09 '16 at 11:09
  • @user56548 "Couldn't there be set a reference level set for pixel or area and once measured" ... and how is that measurement taken? What is it taken relative to? That measurement is just taking a picture of the scene, where all pixel amplifier gains are set equivalent (i.e., identical ISO). If you use that reference image to take another image with higher or lower gain to tone-map the dark/light regions, then voilà, you have a form of HDR. – scottbb Sep 09 '16 at 11:53
  • @scottbb it might be simplified. But, for example, same measurement that happens when AUTO ISO is getting defined. Only at more granular level (pixel/pixel group). Would it work? – user56548 Sep 09 '16 at 12:32
  • @user56548 No, not really. The sensor isn't involved in determining autoexposure values (including ISO). So how can you determine the locality of the reference levels/values when the locality isn't known? – scottbb Sep 09 '16 at 12:38
  • @scottbb Probably, and that's speculation maybe, by having a layer on/in sensor that does it? In smaller regions and providing coords along? Sounds like reasonable challenge for engineering/R&D. I don't know solution unfortunately. If i would - i'd answer with how-to's. I'm challenging logically current setup as it is follows same design as film (meaning uniform ISO) which, IMHO, might not be relevant or should not be as it is not a constant element anymore. – user56548 Sep 09 '16 at 13:18
  • @user56548 Nothing you're asking about is unachievable from a technical standpoint, but it's not done because it doesn't really make sense. You can't cheat the physics of the system. the system (the photosites of the sensor; the A/D converter; the voltage gain amplifier) all have resolution limits, sensitivity limits, etc. Combined, those describe the dynamic range of the imaging system. You can't create more dynamic range or information than is there or that the system can capture (although, there are tricks, such as HDR, to "fake" the desired effect). – scottbb Sep 09 '16 at 13:31
  • exactly... that's the same feeling i'm having. Doesn't make sense. That receiving side of today's photography (digital sensor) uses same "physics limits" as invention/creation from 1880s with uniform ISO settings. I'll take 3 photos with same AS and different ISOs and i'm seeing areas on each 3 where iq is higher than others in same parts. And nothing can be done. – user56548 Sep 09 '16 at 13:50
  • 1
    @user56548 You don't seem to understand - the quantum physics of light hasn't changed since the 1880s (at least not very much). – Michael C Sep 09 '16 at 18:05
  • it's the "uniformity" part that i can't find logical explanation, not quantum physics. I'm kind of beating a dead body here but it makes sense (uniform ISO) when you don't know upfront what's going to be captured - today it's not the case anymore. You actually "see" how light is going to end on sensor/photo. – user56548 Sep 13 '16 at 12:11
0

I think you may be asking about uniformity of response, in which case most if not all cameras have built-in correction tables to remove dark offsets for each pixel and gain differences across pixels. This guarantees that the signal output from every pixel for N input photons is the same.
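The correction tables this answer describes amount to flat-field calibration. A minimal sketch of the idea (my own simulation, not any specific camera's pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated per-pixel non-uniformities a manufacturer would calibrate out:
dark = rng.uniform(0.00, 0.02, size=(4, 4))   # dark offset per pixel
prnu = rng.uniform(0.95, 1.05, size=(4, 4))   # per-pixel gain variation

# A perfectly uniform scene (same photon count at every pixel),
# as actually recorded by this imperfect sensor:
scene = 0.5
raw = scene * prnu + dark

# Calibration: subtract the dark offsets, divide by the gain map.
corrected = (raw - dark) / prnu

# After correction, every pixel reports the same signal for N input photons.
print(np.allclose(corrected, scene))  # True
```

The real tables are measured on the production line, but the arithmetic is the same subtract-and-divide shown here.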

If you're asking about adjusting ISO so dark parts of the scene get more gain, then you'd be asking for dynamic gain adjustment on a per-pixel basis. Even then, as Michael Clark pointed out, you'd end up with at best a very washed-out image.

If you have a scene with huge contrast (dark to brightest), there is another option. Expensive, but available: sensors with logarithmic amps. This gives the pixels a huge effective dynamic range, and you can then post-process the log scale output to do whatever you want.
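To illustrate why a logarithmic response helps here (a sketch under the simplifying assumption of an ideal log amplifier): each decade of scene intensity maps to the same output step, so an enormous linear range fits into a modest output swing, and post-processing can invert the law.

```python
import numpy as np

# Scene intensities spanning six decades (120 dB) of dynamic range.
intensity = np.logspace(-3, 3, 7)

# An ideal logarithmic amplifier: every decade of input becomes the
# same-sized output step, so 6 decades occupy only 6 output units.
out = np.log10(intensity)

# Post-processing inverts the log law to recover the linear values.
recovered = 10.0 ** out
print(np.allclose(recovered, intensity))  # True
```

A real log sensor's transfer curve is messier than `log10`, but this is the principle behind the post-processing freedom the answer mentions.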

Carl Witthoft
  • 1,887
  • 12
  • 11
  • yeap, it's more about dynamic gain i guess. As for "washed-out" i'm not sure about that cos' those areas in shade or less lit will be more pronounced and clear. And today's cameras do a great job on noise control at high ISO. So far seems it boils down to "how to localize ISO measurement" and "how to implement pixel-based or pixelgroup-based amplification". Both seem to be tangible engineering tasks IMO. – user56548 Sep 13 '16 at 12:17