Comparing the Canon Powershots G3X vs. G5X, both can do RAW, but only the G5X can do HDR. My understanding is that I should be able to get all the effects of HDR with proper postprocessing of RAW photos. Is my understanding accurate? What do I get with a camera that can do HDR vs. one that cannot?
5 Answers
Yes and No.
Taking a single RAW, you have more dynamic range than a single JPG, so you have a limited 'high dynamic range', depending on the camera's capabilities.
For a less limited HDR, you need bracketing: you shoot a series of identical compositions while varying the exposure, for example -4, -2, 0, +2, +4 EV. This allows you to combine those shots in post-processing and thereby cover a much larger dynamic range.
Those bracketed series could be JPG too; generally, using RAW is better, but JPG doesn't rule out HDR.
Many high-end cameras support this bracketing automatically.
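The bracket-and-merge idea can be sketched in a few lines of Python with NumPy. The scene values, the ×100 sensor scale, and the 12-bit clipping limit are all invented for illustration; real HDR software does something considerably more sophisticated:

```python
import numpy as np

# Hypothetical scene radiance spanning many stops (linear units).
scene = np.array([0.001, 0.01, 0.1, 1.0, 10.0])

def expose(radiance, ev, bits=12):
    """Simulate a single capture: scale by 2**ev, then clip to sensor range."""
    full_scale = 2.0 ** bits - 1
    return np.clip(radiance * (2.0 ** ev) * 100, 0, full_scale)

# Bracketed series at -4, -2, 0, +2, +4 EV.
evs = [-4, -2, 0, 2, 4]
shots = [expose(scene, ev) for ev in evs]

# Merge: divide each shot back by its exposure factor and average the
# unclipped samples -- a crude version of what HDR merging does.
est = np.zeros_like(scene)
weight = np.zeros_like(scene)
for shot, ev in zip(shots, evs):
    mask = (shot > 0) & (shot < 2.0 ** 12 - 1)   # ignore clipped pixels
    est += np.where(mask, shot / (2.0 ** ev), 0)
    weight += mask
merged = est / np.maximum(weight, 1) / 100
```

Samples clipped to white in the longest exposures are recovered from the shorter ones, which is why the combined result covers more range than any single shot.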
- What might be a better approach to 'bracketing' is using the camera's burst mode, so the shots are taken in rapid succession with no need to adjust settings in between. Then you can just add multiple images for higher exposure. – R.. GitHub STOP HELPING ICE Oct 02 '17 at 03:07
- "Many high-end cameras support this bracketing automatically." I've even seen mid-range smartphones with HDR functionality, so it's getting quite popular. Shooting RAWs though is quite new in them. – Mast Oct 02 '17 at 04:42
- @Mast bracketing means capturing multiple images with different exposure, not doing HDR – phuclv Oct 02 '17 at 04:56
- Since the final HDR is typically (8-bit) JPEG anyway, doesn't the automatic conversion from RAW to JPEG scale that 12 bits of dynamic range into 8 bits already? – Michael Oct 02 '17 at 05:36
- @R.. When we say that high-end cameras "support bracketing", that's exactly what it means - you set a bracketing mode (i.e. # of shots and bracket interval) and when you shoot you hold the shutter and it will burst-fire all shots, automatically changing the exposure compensation between shots. Having to manually change exposure between shots is only necessary on cameras that do not support bracketing. – J... Oct 02 '17 at 13:47
- @wizzwizz4 That's just wrong. 12 bits vs 8 bits precisely means a higher dynamic range. The problem with in-camera automatic JPG conversion is that it is not terribly intelligent and does not generally optimize the dynamic range compression in the way that a human would (or could) when manually processing a RAW image. In-camera JPG conversion typically wastes a lot of the dynamic range that you could recover with manual RAW processing. – J... Oct 02 '17 at 13:50
- @J... Isn't range the difference between one end and the other (of a spectrum)? If so, wouldn't (shouldn't?) JPEG conversion simply reduce dynamic precision and not range? – wizzwizz4 Oct 02 '17 at 16:19
- @wizzwizz4 No. You're thinking that JPG conversion is simply squashing the entire dynamic range of the 12-bit image into an 8-bit format. That's not how the camera does it. To preserve contrast detail and exposure in the image there is usually a good amount of truncation that happens - shadow and highlight detail are simply chopped away rather than crushed into the 8-bit image. If you took the full dynamic range of the RAW image and squished it into an 8-bit format the picture would not look very good at all. This is why RAW processing is so important - you have to do it in just the right way. – J... Oct 02 '17 at 16:23
- @J... Thanks for the correction. Now I have a new informed reason to convince amateur photographers to use RAW. – wizzwizz4 Oct 02 '17 at 16:25
- @wizzwizz4 Indeed. In any case, bit depth always represents exactly the dynamic range of a signal (be it audio, video, or whatever). More bits means that you can precisely represent signals with larger intensity variations. 24-bit audio, for example, can accurately represent the sound (and loudness) of a mosquito and a jet engine on the same track. To fit that into a 16-bit audio file you either have to make the mosquito louder or the jet engine quieter; either way you lose the actual dynamic range (the real difference in loudness between them).... – J... Oct 02 '17 at 16:30
- @wizzwizz4 ...That or you simply lose fidelity (i.e. you clip the jet engine to static or lose the mosquito in the noise). This is the same in an image as clipping highlights (blowing out your image) or losing shadow detail to a big black blob. The JPEG processor will often do the latter, while with RAW processing you have the option of "quieting" the highlights in some areas of the image while boosting the shadows in the dark areas. You compress the dynamic range locally, but you preserve, at least, the fidelity of the image. – J... Oct 02 '17 at 16:31
An in-camera HDR operation can automate the process of creating images that express more dynamic range than a camera is capable of capturing with a single exposure.
Cameras
The dynamic range a camera can capture with a single exposure is often expressed in stops. Because a change of one stop increases or decreases exposure by a factor of two, each stop of dynamic range equates (more or less) to one bit. So a rough rule of thumb is: a camera with 12 bits of information in its RAW format has a dynamic range of 12 stops (in a perfect world, many other things being equal). [1]
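The stop-to-bit correspondence is just powers of two, which a quick back-of-envelope check makes concrete (the 12-bit and 8-bit figures are the nominal ones used above, not measured values for any camera):

```python
import math

# One stop doubles the light, so N bits can nominally encode a ratio
# of 2**N between the brightest and darkest representable values.
raw_bits, jpg_bits = 12, 8
raw_range = 2 ** raw_bits      # 4096:1 intensity ratio
jpg_range = 2 ** jpg_bits      # 256:1 intensity ratio

# How many extra stops the RAW nominally holds over the JPG.
extra_stops = math.log2(raw_range / jpg_range)
```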
Images
On the other hand, the dynamic range of the JPG image format is (more or less) 8 bits, or 8 stops. [1] This means that shooting RAW captures more dynamic range than shooting JPG, and when post-processing the RAW, the photographer can pick and choose how the additional dynamic range maps into JPG or other 8-bit output.
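That "pick and choose" step can be sketched as choosing black and white points before quantizing to 8 bits. This is a hypothetical linear mapping for illustration; real RAW converters also apply tone curves and gamma:

```python
import numpy as np

def raw_to_8bit(raw, black, white):
    """Map a chosen slice [black, white] of 12-bit RAW values to 0..255.
    Values outside the slice are clipped -- that detail is discarded."""
    scaled = (raw.astype(float) - black) / (white - black)
    return np.clip(scaled * 255, 0, 255).astype(np.uint8)

raw = np.array([0, 256, 1024, 2048, 4095])   # 12-bit samples

# Protect highlights: white point at the sensor maximum, shadows crushed.
hi = raw_to_8bit(raw, black=1024, white=4095)

# Protect shadows: white point much lower, highlights blown to 255.
lo = raw_to_8bit(raw, black=0, white=1024)
```

The same RAW samples produce very different 8-bit images depending on which slice of the range the photographer keeps.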
Eyes
The human eye has a static dynamic range of about 6.5 stops, so the roughly 6 usable stops of JPG can often represent the dynamic range of a scene as a human experiences it. However, the eye has a total dynamic range of over 40 stops, so the same scene can be experienced many different ways depending on which 6.5 stops the viewer's eyes are currently adapted to (e.g. moving from darkness into daylight or vice versa). Likewise, the 12 stops of a RAW file are static, and by adjusting ISO the camera can capture images across a wider range of lighting conditions (albeit only 12 stops at a time). JPG and other image formats can be thought of similarly. This is what makes specialties like astrophotography practical.
High Dynamic Range
HDR is a technique for simultaneously representing different portions of a scene as humans might experience them with their vision adapted to different static dynamic ranges. For example, the sky portion of a scene might be rendered as an eye adapted to prolonged bright light would see it, while other portions might be rendered as an eye adapted to darkness would see them.
HDR uses multiple images exposed for different static portions of a camera's total dynamic range. For example, one image may be exposed to capture detail in the shadows and another to capture detail in the highlights. The photographer can then choose how to combine the differently exposed images to express their photographic intent within the 8 stops of JPG's dynamic range.
In-camera HDR simply automates creating multiple images at different exposures and combining them based on assumptions about photographic intent. Those assumptions may or may not match a specific photographer's goals.
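The shadow/highlight combination described above can be sketched as a simple mask-based blend. The pixel values and the clipping threshold of 250 are illustrative only, not any camera's actual algorithm:

```python
import numpy as np

# Two hypothetical 8-bit exposures of the same scene.
shadow_exposure = np.array([40, 120, 200, 255, 255])     # highlights clipped
highlight_exposure = np.array([0, 10, 60, 140, 220])     # shadows crushed

# Weight each pixel toward the frame where it is well exposed:
# keep the shadow-friendly frame except where it has clipped to white.
w = np.where(shadow_exposure >= 250, 0.0, 1.0)
combined = w * shadow_exposure + (1 - w) * highlight_exposure
```

An in-camera HDR mode bakes a choice like `w` into firmware; post-processing lets the photographer pick the weighting themselves.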
Remarks
Like everything in photography, in-camera HDR is a tradeoff that may or may not be useful for a particular photographer. My guess is that in-camera HDR is a deciding factor for only a small fraction of photographers, and among those with a keen interest, the tradeoffs of in-camera HDR versus creating HDR images in post-processing are a factor.
My intuition is that in many photographic situations a tripod is probably more useful for creating HDR images than in-camera processing, because HDR uses multiple images. [2]
[1]: Because the extreme values map to pure black and pure white, and the lowest bits are largely noise, it might be useful to think of 12 bits as equating to 10 stops, 8 bits as equating to 6 stops, etc.
[2]: Though multiple cameras shooting simultaneously at different exposures is another way of automating HDR where a tripod may be of less benefit.
If your camera is capable of saving raw image data files to be used by a post processing editing application, then anything that can be done using an in-camera HDR setting can also be done using the same source files off camera.
When compared to a recent version of pretty much any HDR application the in-camera version will almost certainly be more limited with regard to the latitude of choices the user can make regarding how the raw data is interpreted in the final image.
In-camera HDR is a convenience feature. It takes the raw data from however many exposures it uses and converts it in a way that puts more of the dynamic range from the raw image data into the resulting JPEG than is normally the case when a single raw file is used to produce a conventional JPEG. It makes choices already programmed into the camera to interpret the data in the raw image file(s) a certain way.
Which gets better results, in-camera or a post-processing application, is a lot like the same comparison between in-camera JPEGs and raw conversion in post-processing: it depends primarily on the skill of the user and the capabilities of the application. For someone who doesn't have a lot of skill and experience, the camera will probably produce a better image. In the hands of a skilled photographer using a capable application, the post-processing option will usually produce better results. This is particularly true the more difficult and problematic the scene is.
No, that's not true. HDR means High Dynamic Range; it affects how large a difference in light levels you can capture without clipping. RAW just means that you can set the sensor amplification afterwards, so you can adjust the brightness of the picture after shooting it, but only to a certain degree (same as changing the ISO on your camera) without losing detail, in contrast to a JPEG, where you lose a lot of detail whenever you change the brightness. You can therefore be a bit more tolerant with your exposure settings when shooting RAW, because you can correct them afterwards.

With an HDR image, you have less clipping in the dark and bright areas, but this does NOT affect how bright the image is in general. If you want to take images under extremely contrasty conditions, you need HDR; RAW won't help you.
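The point about clipping can be illustrated with a toy example: a scene spanning 20 stops and a sensor capturing 12 (both numbers invented for illustration). No single exposure setting captures both ends, regardless of whether you record RAW or JPEG:

```python
import numpy as np

scene_stops = np.array([0, 5, 10, 15, 20])   # scene brightness in stops
sensor_stops = 12                            # single-exposure capability

def capture(scene, exposure_shift):
    """Shift the scene into the sensor window; clip what falls outside."""
    shifted = scene + exposure_shift
    return np.clip(shifted, 0, sensor_stops)

dark_biased = capture(scene_stops, 0)     # top of the scene clips at 12
bright_biased = capture(scene_stops, -8)  # bottom of the scene clips at 0
```

Either exposure loses one end of the scene; only combining differently shifted captures (i.e. HDR) recovers both.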
- The G5X has a feature which automatically brackets three exposures and combines them in camera. That's what's being referred to here. – mattdm Oct 01 '17 at 14:57
- @DLCom Your definition of HDR appears to be extremely limited. What's the difference between "Fake HDR" and real, bracketed exposure HDR? – Michael C Oct 02 '17 at 02:44
- @DLCom A single 12-bit raw file can contain as much dynamic range as a -3, 0, +3 jpeg series can. When converting raw to a raster format image, one can choose to set the white and black points at any point in the full DR of the raw file. One is not constrained to select only an eight-stop range from one end, the middle, or the other end. One can squeeze the entire 12-bit data into an 8-bit space by reducing the difference between the darkest and brightest points. Since this looks a little flat whether the source is a single raw file or the 3 jpegs, most HDR apps apply tone mapping. – Michael C Oct 02 '17 at 02:49
- @MichaelClark No, it can't. Sensor scale is linear, while human eye (and jpg) scale is logarithmic. This means that the purpose of that 12 bits is to discard lots of useless data (e.g. an absurdly detailed dark end, while the bright end has its range already maxed out) to get the 8 bits you actually want. Sure, once you apply enough postprocessing power you can reimagine the details lost (like sharpening, which every cellphone does on the fly), but that doesn't mean that the information is retained. A jpeg from RAW exposed at +3 has more detail than one from RAW pulled up 3 stops via postprocessing. – Agent_L Oct 02 '17 at 11:25
- @Agent_L 12-bit and 14-bit sensor readings are monochromatic luminance values. Everything in a color jpeg derived from a Bayer masked sensor is, as you put it, reimagined. Setting the black point and white point to be wider or narrower does allow for details from a wider or narrower DR scene to be expressed in the narrower DR of the display medium. It's not much different than the way Ansel Adams modified exposure and development times to fit 11-12 stop dynamic ranges in his negatives into the 6 stop range of his printing papers. – Michael C Oct 02 '17 at 20:44
Some Nikon cameras (entry-level DSLRs) have an option named D-Lighting that adjusts the lighting in a single shot. The same result can be achieved manually by post-processing a raw image. Some mobile phone cameras have the same feature, but call it HDR! I don't think that is the case with this compact camera: HDR mode in the G5X takes 3 consecutive images and then combines them into a single image. If you want the same effect in post-processing, you have to use bracketing to take multiple images with different exposures.