26

I am confused as to why this picture is not sharp. This is a portrait shot at F/29 ISO100 1/250 with a 17-85mm lens focused at 38mm. The subject distance was 1.2M. My DOF app tells me that my DOF should have been ~82.98cm or approximately 1M.

I am confused as to why this picture is not as sharp as it should be. For this screenshot, I have zoomed in to 200%. Is this much blur at 200% normal?

EDIT: Some people have questions about the DOF in this shot, so here is some info that can be gleaned from any online DOF calculator. My ~82.98cm estimate was from an app and 1.2M was from the EXIF info. Using this online tool, the DOF at F/28 would be:

 Subject distance   1.2 m

 **Depth of field** 
 Near limit     0.84 m
 Far limit      2.13 m
 Total          1.29 m

 In front of subject        0.36 m  (28%)
 Behind subject             0.93 m  (72%)

 Hyperfocal distance         2.7 m
 Circle of confusion       0.019 mm

This also matches, because that is how far I actually was from the camera: 1.2M. So for this shot to be out of focus, I would have had to be only about half a meter from the camera, which is not what I remember. Possibly the blurring observed in this photo is not because the subject is out of focus.
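
For reference, the same numbers can be approximately reproduced from the standard thin-lens DOF formulas; here is a quick Python sketch (the small differences from the table above presumably come from rounding of the marked f-number and the exact circle of confusion used):

    # Reproduce the calculator's output for f = 38 mm, F/29, CoC = 0.019 mm,
    # subject distance 1.2 m (all values in mm).
    f = 38.0      # focal length
    N = 29.0      # f-number
    c = 0.019     # circle of confusion
    s = 1200.0    # subject distance

    H = f**2 / (N * c) + f                     # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)       # near limit of the DOF
    far = s * (H - f) / (H - s)                # far limit (valid since s < H)

    print(f"hyperfocal: {H / 1000:.2f} m")     # ~2.66 m
    print(f"near limit: {near / 1000:.2f} m")  # ~0.83 m
    print(f"far limit:  {far / 1000:.2f} m")   # ~2.16 m
    print(f"total DOF:  {(far - near) / 1000:.2f} m")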

portrait F/29 ISO100 1/250

  • 7
    diffraction plays a role at this aperture – null Jul 21 '15 at 13:12
  • 8
    Definitely related if not quite a duplicate: What is a “diffraction limit”?. – Philip Kendall Jul 21 '15 at 13:16
  • 1
    Was the lens focused at 38mm or 38cm? mm seems unlikely. – mattdm Jul 21 '15 at 13:26
  • As I mentioned the focal length was 38mm. The subject distance was 1.2M – Corrupted MyStack Jul 21 '15 at 13:30
  • 11
    Nowhere in your description of your settings have you said to what distance your lens was focused. Note that this has got nothing to do with the focal length your zoom lens was set to. You may have had the lens focused at infinity for all we know. OK, maybe not infinity, but simply missing focus is a likely candidate. – osullic Jul 21 '15 at 13:40
  • 2
    "Focused at" and "focal length" are two different concepts. More here. – mattdm Jul 21 '15 at 17:16
  • 1
    It is also important to note the size of the media (or sensor) that you are using. It's one thing to be at f/29 on an APS-C sized sensor, another to be on a full-frame sensor, and quite another to be using a large format camera (as unlikely as that is for this question). –  Jul 21 '15 at 21:44
  • 1
    Please don't edit answers into the question itself. (See http://meta.photo.stackexchange.com/questions/1601/should-edits-which-add-the-accepted-answer-to-the-question-be-reverted). Selecting an answer with the checkmark is sufficient. If no answer covers what you've learned entirely, feel free to add your own and then accept that. – mattdm Jul 22 '15 at 04:26
  • if you are focused spot-on then aperture does not matter for that spot. but where was the focus in this? MF or AF? a focus stack would show this better. – Skaperen Jul 22 '15 at 08:55
  • 1
    I suggest you run a couple of experiments changing ONLY the aperture; so duplicate this shot then change the aperture to f/16. Use manual focus for both so the focus point definitely doesn't change. That'll show you how dramatic the diffraction effect is for that lens / sensor. Also, unless I'm missing something, zooming to 200% will mean that your image is being interpolated by software, which will always reduce sharpness. Never pixel-peep beyond 100% :) – Whelkaholism Jul 22 '15 at 14:20
  • "~82.98cm, or about 1m". No, 82.98cm is about .8298m. You're missing about 17% of that distance. Maybe that's within the tolerance of the camera, but personally I wouldn't consider the two values close. – corsiKa Jul 23 '15 at 16:07
  • @corsiKa No, distance from subject 1.2 m ;

    Depth of field : Near limit : 0.84 m ; Far limit : 2.13 m ; Total : 1.29 m

    In front of subject : 0.36 m (28%) Behind subject : 0.93 m (72%)

    Hyperfocal distance : 2.7 m Circle of confusion : 0.019 mm

    – Corrupted MyStack Jul 24 '15 at 04:11

3 Answers

52

You've run over the diffraction limit. Light rays passing through a small hole will diverge and interfere with each other and a pattern emerges--a sort of banding where different frequencies/placement can cause separate rays to add up or negate each other. The smaller the opening gets, the larger this divergence/interference becomes. This pattern is called an Airy disk. When the diameter of the Airy disk's peak gets larger than the size of a pixel and the Airy disks for each pixel begin to merge, you get softness--so the higher the pixel count of your sensor and the smaller your sensor, the sooner you'll see it as you stop down.
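
To put rough numbers on that (a back-of-the-envelope sketch, assuming a Canon APS-C pixel pitch of roughly 4.3 µm, which is in the right ballpark but not taken from your EXIF), the Airy disk diameter is about 2.44·λ·N:

    # Rough Airy disk diameter (first minimum to first minimum) versus an
    # assumed ~4.3 µm APS-C pixel pitch.
    wavelength = 550e-9   # green light, in metres
    pixel_pitch = 4.3e-6  # assumed pixel size, in metres

    for n in (5.6, 8, 11, 29):
        d = 2.44 * wavelength * n
        print(f"f/{n}: Airy disk ~ {d * 1e6:.1f} um ({d / pixel_pitch:.1f} pixels)")

At f/29 that comes out to roughly 39 µm, i.e. the disk is smeared over something like 9 pixels, which is why the fine detail is gone.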

You're generally sharper at a "happy medium". Where your gear's "happy medium" can be will change depending on the pixel density/size and lens design. With APS-C-sized sensors, which I cleverly detect you are using from the 17-85 lens reference, you probably don't want to be going over f/11 without a really good reason and a willingness to give up some sharpness. Theoretical diffraction limits will probably be in the f/5.6-f/8 range. You'll also want to find out where your lens's "sweet spot" is--many lenses perform better stopped down 1-2 stops from wide open. For example, the EF 50mm f/1.8 II's "sweet spot" is in the f/4-5.6 range.

I would also say, judging sharpness at 200% magnification is just asking for all your lenses to look like dogmeat. That's not a realistic magnification in terms of viewing distances you'd have from a typical print, and it's a rare lens that's going to stand up to that kind of test, even when used at its sweet spot. Stop pixel-peeping. Start looking at the print.

See also: http://www.cambridgeincolour.com/tutorials/diffraction-photography.htm

inkista
  • Or, if you are judging what the print might look like, then: zoom the image such that it appears on-screen about the same size as it would on the print, and look at the screen from about the same distance as you would the print. That will give you a much better idea of what the final result will look like, even though that isn't a perfect method. @CorruptedMyStack – user Jul 23 '15 at 18:27
50

As mentioned in the other answers, diffraction has led to unsharpness. To put this to the test, one can attempt to sharpen the image by deconvolution, using the point spread function that corresponds to F/29. For diffraction, we have (up to an overall normalization)

P(s) = { J1[ πrs/(λF) ] / [ πrs/(λF) ] }²

where J1 is the Bessel function of the first kind of order 1,
s is the distance in the image measured in pixels,
r is the size of one pixel (typically about 4.2*10^(-6) meters for crop sensors),
λ is the wavelength of light, and
F the F-number, in this case 29.

This is true for monochromatic light; to approximate the point spread function for the color channels, we can average over some appropriate range of wavelengths. Also, one should integrate P(s) over the area of the pixel specified by s.
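
For readers without Mathematica, here is a rough Python sketch of the same idea. It only samples the PSF at pixel centres (rather than integrating over the pixel area), and the per-channel wavelength sets are illustrative guesses:

    import numpy as np
    from scipy.special import j1   # Bessel function of the first kind, order 1

    def diffraction_psf(f_number, wavelengths, pixel_pitch=4.2e-6, radius=7):
        """Airy PSF on a (2*radius+1)^2 pixel grid, averaged over wavelengths (metres)."""
        coords = np.arange(-radius, radius + 1)
        xx, yy = np.meshgrid(coords, coords)
        s = np.hypot(xx, yy)                  # distance from the centre, in pixels
        psf = np.zeros_like(s, dtype=float)
        for lam in wavelengths:
            x = np.pi * pixel_pitch * s / (lam * f_number)
            x = np.where(x == 0, 1e-12, x)    # avoid 0/0 at the central pixel
            psf += (j1(x) / x) ** 2           # P(s) as above, up to normalization
        return psf / psf.sum()                # normalize the kernel to sum to 1

    # One PSF per color channel, averaged over a plausible band of wavelengths
    psf_r = diffraction_psf(29, [620e-9, 640e-9, 660e-9])
    psf_g = diffraction_psf(29, [520e-9, 540e-9, 560e-9])
    psf_b = diffraction_psf(29, [440e-9, 460e-9, 480e-9])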

If we compile 3 point spread functions for the 3 color channels in this way, we can sharpen the image by transforming it to linear color space, applying a deconvolution algorithm, and then transforming back to sRGB. I got the following result:

Sharpened picture

So the face has been sharpened significantly using only the data about the F-number and the assumption about the size of the pixel. Banding artifacts are visible in the dark part of the image; this is due to posterization after transforming back to sRGB.

As requested, I'll add some more details on the programs used. I used ImageJ and ImageMagick; I also used Mathematica to calculate the point spread function, but that can also be done within ImageJ. I'll start by explaining how I do deconvolution with ImageJ when I already have the point spread function. To do deconvolution, you need to install a plugin for ImageJ. I used this plugin for this case, but there are also other plugins available, e.g. the DeconvolutionLab plugin.

First, you need to convert to linear colorspace. I used ImageMagick to convert the unsharp image (input.jpg) to linear colorspace using the command:

convert input.jpg -colorspace RGB output.tif

Then, with ImageJ, you open the file output.tif. From the menu options, you select "image", then "color", and then "Split Channels". Then from the menu select "plugins", then "parallel iterative deconvolution", and then "2d iterative deconvolution".

You then get the deconvolution window. There you select the image, and under "PSF" (the point spread function) you select the image file that contains the point spread function. For the method, I chose "WPL", which is based on the Wiener filter and usually works reasonably well for low-noise images. In the options for WPL, check "normalize PSF", and for the low pass filter change the value to 0.2 (by default it is 1, but a lower value is better for low-noise images; if you choose it larger, you'll get an image that is less sharp). For the other options, the boundary can be chosen to be reflexive, resizing can be set to "next power of 2", output can be set to 32 bit, and precision can be set to double. I chose the maximum number of iterations to be 15. The number of threads is set automatically; check whether your computer indeed has the indicated number of threads (in my case it is 8: a quad-core processor with 2 threads per core).

You then run the program by clicking on "deconvolve", and you get a 32 bit image file as output. Usually the pixel values are quite similar to what they were in the original picture, but some pixels can exceed the maximum for the original image format. So, in this case we started out with 8 bit images, but in the deconvolved image you can have gray values that exceed 255, which then causes the entire image to become too dark. This must be fixed by clipping these pixels to 255, which you can do by selecting in the menu "process", then "Math", and then "Max". The maximum value you enter will then be used to clip the gray values that exceed it. Note that this is applied to the image you last clicked on. You can see which file is the current one by selecting "window" in the menu; you then see a list of all open images, and one of them has a check mark in front of it.

Then once you have deconvolved the 3 color components, you can combine them by selecting in the menu "image", then "color" and then "Merge Channels". You then get a composite image that you can convert to 8 bit RGB using the "Stack to RGB" command you find there.
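
If you prefer scripting over clicking through menus, roughly the same split/deconvolve/clip/merge workflow can be sketched in Python with scikit-image. This uses Richardson–Lucy instead of the WPL plugin, so the result will not be identical, and it assumes the per-channel PSF kernels from the earlier sketch:

    import numpy as np
    from skimage import io, img_as_float, restoration

    img = img_as_float(io.imread('output.tif'))   # the linear-RGB file from the convert step
    psfs = [psf_r, psf_g, psf_b]                  # per-channel kernels (see the earlier sketch)

    channels = []
    for i in range(3):
        dec = restoration.richardson_lucy(img[..., i], psfs[i], 15)  # 15 iterations
        channels.append(np.clip(dec, 0.0, 1.0))   # clip, like the "Math" > "Max" step above

    out = np.stack(channels, axis=-1)
    io.imsave('im.tif', (out * 255).astype(np.uint8))   # then convert back to sRGB as below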

You then save that image; let's call it im.tif. Finally, you must convert it back to sRGB, which you can do with ImageMagick using the command:

convert im.tif -set colorspace RGB -colorspace sRGB output.tif

The remaining question is then how to obtain the point spread function. In practice, if you had taken a picture like the one under discussion here, you could simply have taken a picture of a point source, e.g. a star, at F/29 and used that as your point spread function. Alternatively, you can look at high-contrast boundaries and extract the point spread function from the way the gray values change from one value to another across the boundary. But then you are trying to sharpen the image as well as possible, correcting for all sources of blur rather than testing the effect of diffraction alone.
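
As a purely hypothetical illustration of the star method (the file name, star position and crop size below are made up), the measurement itself amounts to a crop, a background subtraction and a normalization:

    import numpy as np
    from skimage import io, img_as_float

    star = img_as_float(io.imread('star_f29.tif'))[..., 1]   # e.g. the green channel
    cy, cx = 812, 1430                                        # pixel position of the star (example)
    crop = star[cy - 8:cy + 9, cx - 8:cx + 9].copy()

    crop -= np.median(star)             # subtract the sky background level
    crop = np.clip(crop, 0, None)
    psf_measured = crop / crop.sum()    # normalize so the kernel sums to 1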

In this case, the objective was to compile the point spread functions for the color channels based on what you would expect them to be at F/29, deconvolve the image with those, and see whether the result looks visibly improved. I used Mathematica to do the calculations; with such a computer algebra program it's quite easy to do all sorts of manipulations, including averaging over a wavelength interval and integrating over pixel areas to make the PSF more realistic.

But ImageJ also allows you to create a new image that you can use as the point spread function. If you click on "File" and then "New" you can create a 32 bit image of size, say, 64 by 64, filled with black. You can then program a formula for the gray values by selecting "process", then "Math", and then "Macro". To get the point spread function for this case, which involves the Bessel function, you can use the fact that it is well described by the first few terms of its series expansion. The MathWorld page I linked to gives you this series expansion, so J1(x)²/x² with x = πrs/(λF) can be replaced by a function of the form A + B s² + C s⁴. This approximation becomes invalid if s is too large. Now, we know that the PSF tends to zero; e.g. at a distance of about 5 or 6 pixels it can be set to zero. Assuming that the polynomial is still small at these values, you can write the macro as:

if (d < 7) v = A + B * pow(d, 2) + C * pow(d, 4)

You then create 3 images for, say, lambda = 650 nm, 500 nm and 400 nm, or whatever other values you think are appropriate to represent diffraction in the 3 color channels. You can then average over a few different wavelengths by creating pictures for different lambda and then selecting "process", then "image calculator", and there you select "add". You can then add another image to this result, and when you're done adding, you can divide to get the average by selecting "process", then "math", and then "divide".
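
If you want concrete values of A, B and C for a given wavelength, one way (outside ImageJ, and only a sketch) is to least-squares fit the exact expression over the range of distances where the macro is applied:

    import numpy as np
    from scipy.special import j1

    F, r, lam = 29.0, 4.2e-6, 550e-9    # f-number, pixel size (m), wavelength (m)
    s = np.linspace(0.01, 6.0, 200)     # distances in pixels where the PSF is non-negligible
    x = np.pi * r * s / (lam * F)
    exact = (j1(x) / x) ** 2

    # Fit exact ~ A + B*s^2 + C*s^4, i.e. a quadratic in s^2
    A, B, C = np.polynomial.polynomial.polyfit(s**2, exact, 2)
    print(A, B, C)   # plug these into the macro: v = A + B*pow(d,2) + C*pow(d,4)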

Count Iblis
  • That's kinda amazing. Wonder how long it'll be until cameras have this built-in to their automatic correction arsenal and this sort of thing is applied as a matter of course. – mattdm Jul 23 '15 at 03:23
  • 3
    +10 (can't really do +10 though) for adding the math. It is great that you added it. Can you give a citation for this? I want to try this myself. The more detailed math the merrier! – Corrupted MyStack Jul 23 '15 at 03:33
  • That's really quite astonishing. What software did you do this in? This seems like it could be incredibly useful for macro photography. – Whelkaholism Jul 23 '15 at 10:24
  • @mattdm Deconvolution requires quite some computational power, but some algorithms like Richardson–Lucy deconvolution can be more easily implemented in hardware than others. Also, the camera maker's software that is used to process raw files on a computer could include deconvolution that is specifically optimized to deal with the actual blur you get due to defocus and diffraction. – Count Iblis Jul 23 '15 at 16:49
  • 1
    @CorruptedMyStack I'll add some more details in the answer. You should be able to do a much better job if you have the raw file. You can look up the sensor size and from that calculate the pixel size. But you can also skip the computations and directly measure the point spread function by taking pictures of some point like object and then just extract the point spread function from that. In case of diffraction, the deconvolution is best done with the raw files, unlike in other cases you now have color fringing that should not be corrected for before deconvolution is carried out. – Count Iblis Jul 23 '15 at 17:05
  • @Whelkaholism I used the ImageJ program and a plugin for this program to do the deconvolution. I calculated the point spread function using Mathematica, but it can also be done using free of charge software. I'll try to explain in more detail in the answer. – Count Iblis Jul 23 '15 at 17:08
  • @CountIblis That looks like some quite impressive software. Can you add to your answer the specific steps you took to get from the photo in the question to that in your answer, including which exact plugin you used? – user Jul 23 '15 at 18:31
  • @CountIblis or Can somebody else format the equation in latex by editing the answer? – Corrupted MyStack Jul 24 '15 at 04:40
  • *eyes glaze over* Thanks, but my maths is nowhere near good enough to follow that :) If I had more time I'd be tempted to try to write something that you just plug the variables and a JPEG into, but sadly that's not going to happen for a while. – Whelkaholism Jul 24 '15 at 12:59
16

Because of diffraction. f/29 is way too much for you to expect a sharp image. Try shooting the same thing at f/8 and you'll see the difference.

K. Minkov