2

I have a Nikon D3500 with the 18-55 VR kit lens and a few other lenses, the AF-P 10-20 among them.

Having renewed my interest in photography some time ago, I now have some images under LR scrutiny that were taken more than six months ago, when I was still refreshing my knowledge and making many mistakes.

Yet I cannot work out how I managed to make some images that are not quite sharp when viewed at 100%.

I mean, an image at 18mm and f/14 has a hyperfocal distance of slightly over 1 m, and with my habit of focusing at least somewhat beyond the hyperfocal distance, how could I miss it?
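As a quick sanity check of that number, here is the standard hyperfocal formula with an assumed APS-C circle of confusion of 0.02 mm (the CoC value is an assumption, not a camera spec):

```python
# Hyperfocal distance: H = f^2 / (N * c) + f, all lengths in mm.
f_mm = 18.0   # focal length from the question
N = 14.0      # aperture from the question
c_mm = 0.02   # commonly assumed circle of confusion for APS-C -- an assumption

H_mm = f_mm ** 2 / (N * c_mm) + f_mm
print(f"Hyperfocal distance ~ {H_mm / 1000:.2f} m")  # ~1.18 m, i.e. slightly over 1 m
```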

I know depth of field is not a measure of ideal sharpness, yet I don't like how many of the details look at 100%.

Am I too demanding, or could there be some problem with the camera?

I'm simply wondering how I managed not to take a sharp enough photo with these settings. And the shutter speed was 1/160 s, which shouldn't create blur even with slightly shaky hands.

Am I missing something?!

(https://flic.kr/p/2igQbst)

I have added one of the typical pictures where the buildings in the background lose definition.

DrazenC
  • 89
  • 8
  • Related, similar question: How to take sharper photos – scottbb Jan 20 '20 at 02:44
  • "Am I too demanding or there could be some problem with camera?" - Yes. – xiota Jan 20 '20 at 04:42
  • Are you suggesting that 1/160 s can still allow motion blur? I don't have enough "working hours" to verify such possibilities in practice, but what I do know is that these two lenses have vibration reduction, while the 35mm f/1.8 that I also use does not. Through practice I have learned that 1/60 s may or may not produce blur, while 1/80 s mostly will not. Of course, I can imagine circumstances where a higher shutter speed still allows blur. – DrazenC Jan 20 '20 at 11:41

4 Answers

2

f/14 can get you in trouble with the diffraction police on a 24MP APS-C sensor.

And hyperfocal distance is a compromise that gets a specified range of distances sharp enough, not perfect.
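As a rough illustration of why f/14 is already questionable on this sensor, here is a back-of-envelope comparison of the Airy disk with the pixel pitch (the sensor width and wavelength below are assumed typical values, not manufacturer specifications):

```python
# Airy disk (first minimum) vs. pixel pitch on a 24 MP APS-C sensor.
sensor_width_mm = 23.5                                    # assumed APS-C width
pixels_across = 6000
pixel_pitch_um = sensor_width_mm * 1000 / pixels_across   # ~3.9 um

wavelength_um = 0.55    # green light, roughly mid-spectrum
f_number = 14
airy_diameter_um = 2.44 * wavelength_um * f_number        # ~18.8 um

print(f"Pixel pitch ~ {pixel_pitch_um:.1f} um")
print(f"Airy disk at f/{f_number} ~ {airy_diameter_um:.1f} um, "
      f"about {airy_diameter_um / pixel_pitch_um:.1f} pixels wide")
```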

rackandboneman
  • 7,212
  • 19
  • 32
  • Could it be diffraction at f/14 already? In the meantime I have learned that f/8 or f/11 is better for such a short focal length, as the DOF is large anyhow. My problem seems to appear in photos where I'm trying to create as large a DOF as possible: landscapes, cityscapes. When I focus on a single distinct subject the results are much better. – DrazenC Jan 20 '20 at 11:47
  • 1
    Also, hyperfocal distance tables assume that focusing scales on lenses are accurate. Often with SLR lenses, they ARE NOT. – rackandboneman Jan 20 '20 at 11:50
  • My lenses don't have scales; the only thing I can do is estimate distances myself, which for some distances is not a problem. I recently came across advice that it's better to focus at double the distance of the closest object you want sharp than at the hyperfocal distance. – DrazenC Jan 20 '20 at 14:22
  • Diffraction sets in at different points on different lenses. Try f/5.6. Focus on something midframe. With 18mm, you should be able to easily capture infinity. – xiota Jan 20 '20 at 14:29
  • 1
    Might be me messing around with adapted lenses too much... I think distance scales on anything but rangefinder lenses are pretty much ornamental skeuomorphs :) – rackandboneman Jan 20 '20 at 15:28
  • @rackandboneman +1 for use of "skeuomorphs". I so rarely get to use that word. – scottbb Jan 20 '20 at 19:24
  • @DrazenC In most cases at non-macro distances, double the distance of the closest desired objects to be rendered acceptably sharp (not perfectly sharp) is the hyperfocal distance. But keep in mind that since DoF changes as enlargement changes, so does the hyperfocal distance. The correct hyperfocal distance when assuming an 8x10 or 8x12 inch display size is not the correct hyperfocal distance for a 24x36 inch print! Most DoF calculators assume standard viewing conditions: 8x10 viewed from 10 inches by a person with 20/20 vision. A 24MP image at 100% on a 24" monitor is more like 60x40! – Michael C Jan 21 '20 at 03:10
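Following up on the last comment above, a minimal sketch of how the hyperfocal distance moves as the assumed circle of confusion is tightened for larger output or closer inspection (the CoC values below are illustrative, not standardized ones):

```python
# Hyperfocal distance at 18 mm, f/14, for progressively stricter CoC values.
def hyperfocal_m(f_mm, n, coc_mm):
    return (f_mm ** 2 / (n * coc_mm) + f_mm) / 1000

for label, coc in [("8x12 print, standard viewing", 0.020),
                   ("24x36 print, same viewing distance", 0.007),
                   ("100% on a 24-inch monitor", 0.004)]:
    print(f"{label:35s} CoC {coc:.3f} mm -> H ~ {hyperfocal_m(18, 14, coc):.1f} m")
```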
2

At least some of the visible light spectrum is diffraction limited on a 24MP APS-C sensor at f/5.6, and all of the visible spectrum is diffraction limited beyond f/8. Depending on the lens, your results could be slightly worse than that.
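A rough way to see the wavelength dependence is to compare the Airy disk diameter with an assumed ~3.9 µm pixel pitch, taking "wider than about two pixels" as the threshold for being diffraction limited; both the pitch and the threshold are assumptions for illustration, but the result roughly matches the claim above:

```python
# Airy disk diameter vs. wavelength and aperture, compared with the pixel pitch.
pixel_pitch_um = 3.9   # assumed pitch of a 24 MP APS-C sensor

for f_number in (5.6, 8, 14):
    for name, wavelength_um in (("blue", 0.45), ("green", 0.55), ("red", 0.65)):
        airy_um = 2.44 * wavelength_um * f_number
        print(f"f/{f_number:<4} {name:5s}: Airy ~ {airy_um:4.1f} um "
              f"= {airy_um / pixel_pitch_um:.1f} px")
```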

However, looking at the image you linked to on Flickr, I would say that your main issue is atmospheric effects over distance: things like haze, moisture in the air, and heat turbulence. That's what is causing the mountains to visibly lose color/contrast/clarity as they recede into the distance, and it's affecting everything else as well.

Steven Kersting
  • 17,087
  • 1
  • 11
  • 33
  • Thanks for looking at it. I actually don't mind haze in the background, as that is the real atmosphere of these places. Sometimes I do dehaze if the entire image is hazy and looks blurry, because no one would understand that things really are blurry here on days with a wet southern wind. What I cannot resolve easily, and what happens on all distant buildings, is a sort of glow when strong, hard sun hits white or very bright facades. The facades pick up a soft glow that sometimes looks like blurriness. Sometimes I apply a local clarity increase, but I haven't found an ideal solution for this. – DrazenC Jan 20 '20 at 20:18
  • Have you ever looked at a street light in the fog? Notice the way it glows, seems larger, and more out of focus? That's what's happening to your white buildings reflecting strong light toward you through a light haze/moisture... it's just much less diffused than a dense fog would cause. – Steven Kersting Jan 20 '20 at 21:29
1

I think the picture looks fine.

It is rarely necessary to get pictures sharper than this. Viewing at 100% is equivalent to printing at a size more than a meter long. Hardly anyone does that anymore, and if someone does, we don't press our noses against it and complain that one of the leaves could be sharper.
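A quick check of that size claim, assuming a typical ~96 ppi monitor and a 6000-pixel-wide image (both assumed values):

```python
# How wide is a 6000-px image shown pixel-for-pixel on a ~96 ppi monitor?
width_px, ppi = 6000, 96
width_m = width_px / ppi * 0.0254
print(f"~{width_m:.1f} m wide at 100%")   # ~1.6 m
```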

Expecting perfectly sharp results at 100% will always leave you disappointed. It is simply not possible, because 75% of what you are seeing is made up and does not really exist. Any camera that can take color pictures has something like a Bayer filter. This filter filters out everything except one primary color per pixel (this statement is somewhat simplified, but the filter does block most of the light). Your camera has algorithms that try to recreate this information, but there is a lot of guessing involved. There will always be imperfections, called artifacts. Especially around sharp edges, this leaves unsharpness.
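To make the interpolation point concrete, here is a deliberately naive bilinear demosaic sketch. It is my own illustration and not the algorithm the D3500 or any raw converter actually runs, but it shows that two of the three colour values at every output pixel come from neighbours rather than from measurement:

```python
import numpy as np
from scipy.ndimage import convolve  # assumes SciPy is available

def bilinear_demosaic(mosaic):
    """Naive bilinear demosaic of an RGGB Bayer mosaic (2D float array).
    Each missing colour value is the average of the nearest sensels of that
    colour; two of the three values per output pixel are therefore guessed."""
    h, w = mosaic.shape
    # Boolean masks marking which sensels carry which colour (RGGB layout).
    r = np.zeros((h, w), bool); r[0::2, 0::2] = True
    g = np.zeros((h, w), bool); g[0::2, 1::2] = True; g[1::2, 0::2] = True
    b = np.zeros((h, w), bool); b[1::2, 1::2] = True

    kernel = np.ones((3, 3))
    rgb = np.zeros((h, w, 3))
    for channel, mask in enumerate((r, g, b)):
        known = mosaic * mask
        # Average of the known same-colour sensels in each 3x3 neighbourhood.
        total = convolve(known, kernel, mode="mirror")
        count = convolve(mask.astype(float), kernel, mode="mirror")
        rgb[..., channel] = total / np.maximum(count, 1e-9)
    return rgb
```

Real demosaicing is edge-aware and much smarter than this, but the guessing is still there, which is why edges at 100% never look perfectly crisp.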

It is often better to focus on other areas than optimising sharpness, but if you want to improve the sharpness, you could try the following.

1) Shoot at your lens's optimum aperture; this is usually around f/4.5, and you can often find it on the internet. At f/14 diffraction starts to play a significant role.

2) Check your processing sharpness setting. Pictures are softened in most profiles to make them look smoother.

3) Use a faster shutter speed. The 1/f rule comes from the old days, when people said that a 600x400-pixel television was very sharp. If you want maximum sharpness at 100% on a modern camera, you need to go much faster (a rough sketch follows after this list).

4) Buy better equipment. If you look at 100%, everything matters. I would not recommend this, though. My photos started improving many times faster when I realized that learning about the artistic side of photography was a much better way to spend my time than reading reviews and thinking about new stuff to buy. You will always find limitations in your equipment if you look for them, but they are rarely a big issue in normal use.
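Regarding point 3, a sketch of the usual handholding rule adjusted for crop factor, with an arbitrary extra safety factor for pixel-level viewing; both the rule and the factor are rough heuristics, not measured figures:

```python
# "1 / focal length" rule, scaled by crop factor and a pixel-peeping safety factor.
def suggested_shutter_s(focal_mm, crop=1.5, safety=2.0):
    return 1.0 / (focal_mm * crop * safety)

for focal in (18, 35, 55):
    t = suggested_shutter_s(focal)
    print(f"{focal:>2} mm: about 1/{round(1 / t)} s or faster, before relying on VR")
```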

Orbit
  • 1,530
  • 11
  • 24
  • Thanks, these comments are very useful, as I'm trying to establish a feeling for what is generally acceptable rather than going into so many details that I'm always unsatisfied (I have already checked a number of images taken with fancy equipment that also have some issues, so it is pretty clear to me that an ideal solution doesn't exist and shouldn't be sought out of mere pedantry over things no one will ever notice). The reasons to look at 100% at all are to check whether the focus is right and to handle noise, and I also need to learn how much in the way of very mild traces of noise is acceptable at 100%. – DrazenC Jan 20 '20 at 20:11
  • What you say about colour profile sharpening is something I don't fully understand. I thought that in-profile sharpening does the same thing as LR sharpening. Am I wrong? I'm used to taking pictures in the Flat or Neutral profile, and I start with Flat during LR post-processing. This gives the most options for selecting alternatives while processing; however, I do know that the Flat profile has zero sharpening. Can I restore the missing in-profile sharpening via LR? I've noticed that my images started from Flat don't show halos even at 80% sharpening, but they do pick up artifacts if the edges being sharpened are basically a bit blurry. – DrazenC Jan 20 '20 at 20:12
  • "... except one color per pixel..." is massively incorrect. – Michael C Jan 21 '20 at 03:02
  • @MichaelC: Do you mean I should have said primary color instead of color? – Orbit Jan 21 '20 at 09:07
  • @DrazenC: It is not really only the color profile, it is what Nikon calls the Picture Control. You can find some information here: https://imaging.nikon.com/lineup/microsite/picturecontrol/tips/basic_guide.htm and Here: https://onlinemanual.nikonimglib.com/d3500/en/10_psam_modes_05.html – Orbit Jan 21 '20 at 11:15
  • I am pretty aware of what Picture Control is; what I don't know is how sharpening within Picture Control is applied: is it cumulative with Lightroom sharpening, and how does it affect it? As mentioned, I either shoot in the Flat profile or start post-processing with the Flat profile, so could it be that my pictures are always less sharp at the beginning, given that sharpening in the Flat profile is set by Nikon to zero? – DrazenC Jan 22 '20 at 17:02
  • @Orbit No, I mean much more than a single color passes through each of the filters in a Bayer mask, in the same way that when we use a red filter with B&W film, non-red objects do not appear totally black, they just appear darker than similarly bright red objects. Not to mention that the "red" filters in Bayer masks are actually yellow at about 590 nm, rather than "red" at about 640 nm, and that there is a LOT of overlap between what passes through the "green" and "red" filters. – Michael C Jan 23 '20 at 07:53
  • @DrazenC: sharpening tries to regain some of the sharpness lost because of the filters, so it can just as well be done in post, having it at 0 seems fine to me. – Orbit Jan 25 '20 at 12:54
  • @MichaelC: I did not include that because I don't think the exact response spectrum of the filter is relevant when making the point that most of the color information is lost on a pixel level. – Orbit Jan 25 '20 at 12:57
  • @Orbit There's only a single brightness level at the "pixel" (more properly called sensel or photosite - pixels are output units) level. No color information at all. But the light energy collected by each sensel is from a broad range of wavelengths, just like the light energy that passes through a color filter is from a broad range of colors, not a narrow one. The idea that each color filter rejects 2/3 of the light falling on it is nowhere near the case. – Michael C Jan 25 '20 at 18:09
  • @MichaelC: It is true that a broad range of wavelengths can pass the filter, but at most of those wavelengths only a small percentage gets through. So on average, 2/3 being rejected seems about right, or even conservative. You have posted several response curves for sensors at other questions. They show that at their optimum wavelength only about 60% of the photons get registered, and at other wavelengths it drops to well below 10%. The loss may be due to sensor efficiency or due to the filters, which cannot be known exactly, but the graphs give a pretty good idea. – Orbit Jan 26 '20 at 11:45
  • @Orbit the offending part of your answer that started this discussion is "... everything except one color per pixel." Any such sensor with such filters that only allowed three discrete single wavelengths to pass would be far less efficient than 33%. It would be more like 3.3% or even 0.33%. – Michael C Jan 26 '20 at 20:50
  • And again, there are many examples of recent color imaging sensors that are greater than 50% efficient with their Bayer masks in place. That's significantly more efficient than a filter which allows "... on average about 2/3 seems about right." And even monochrome sensors that are most efficient in the middle of the visible light spectrum have significant falloff at the extremes of the visible spectrum, so that is already there before one puts a Bayer mask on a sensor. – Michael C Jan 26 '20 at 21:32
  • @MichaelC: Please show me some examples of sensors with efficiencies near or above 50% with Bayer filters in place, because I really don't see how that could be physically possible. – Orbit Jan 28 '20 at 21:13
  • @Orbit try on two pairs of sunglasses with the same tint, except one is twice as dense as the other. Do you see how one lets more light through than the other? CFAs are no different. Some are denser, some are less dense. As available processing power in battery power limited portable devices such as cameras has improved, camera makers are using less dense CFAs than in the past because increased processing power can still render accurate color with less differentiation between the results from each color filter. – Michael C Jan 29 '20 at 00:34
  • @Orbit If CFAs had hard lines in which any wavelength allowed through by one color was totally rejected by the other two filters, then you would be correct. But that's not the way color filters work. It's not the way our retinal cones work, either. If each color filter only allowed totally discrete wavelengths, compared to the other two, you would not be able to reproduce color in the way that our eye/brain system creates perception of color that doesn't actually exist in the properties of electromagnetic radiation. – Michael C Jan 29 '20 at 00:37
  • @Orbit https://www.flir.com/globalassets/iis/guidebooks/2019-machine-vision-emva1288-color-sensor-review-en.pdf.pdf – Michael C Jan 29 '20 at 00:45
  • @MichaelC: The efficiencies in that folder are for one channel only; you can find the full specifications here: https://flir.app.boxcn.net/s/7e96a2qyc97xqkzud17o6g1eux7hznj3?page=1 – Orbit Jan 29 '20 at 22:28
  • @Orbit And why do you think there's no line that shows a combined QE % for the sensor with the CFA in place? With all of the detailed information that is offered, one would think that if it were that important it would be included? Maybe it is because sensors are selected for a specific task based on what kind of light they are expected to measure? And that sensors for photographic cameras intended for artistic and documentary photography that render scenes similar to how they look to the human eye/brain will be designed to work most efficiently with the same type of light illuminating nature? – Michael C Jan 29 '20 at 23:45
  • ... which is not constant intensity across all wavelengths of the visible spectrum, but is stronger at certain peaks in the middle part of the spectrum? – Michael C Jan 29 '20 at 23:49
  • @MichaelC: What point are you trying to make here? You started about the system QE. You keep saying that it is well above 50%, and now that it turns out you were wrong and it is only around 25%, you start saying that it is not important. – Orbit Jan 30 '20 at 20:33
  • No, I'm saying that to answer the question "How much light is lost to the CFA" one must take into account the nature of the light falling on the CFA. If the goal is to produce photographs of scenes that are an accurate representation of how human eyes see the world they inhabit, then assuming that the light falling on such a sensor is every wavelength of visible light at equal brightness and distribution is woefully misinformed. If the light one is concerned with is strongest in the 500-570nm wavelengths, then the percentage of photons that produce a charge will be higher. – Michael C Jan 31 '20 at 03:42
  • @MichaelC: If you're only interested in light between 500 and 570 nm, you can only make pictures of green things. Any normal person will look for a camera that can represent all 7 colors properly. To do that it needs to be able to detect almost every wavelength in the visible spectrum. Only the far ends are less important because our eyes are not very sensitive there anyway. Excluding these from the efficiency calculations hardly influences the result. Sensors perform quite poorly in the blue region, the result is that blue gives much more high ISO noise. – Orbit Feb 01 '20 at 16:04
  • There's a substantial difference between light "... with is strongest in the 500-570nm wavelengths" and "only interested in light between 500 and 570 nm." Are you familiar with the light used on optical test benches? There's also a difference between "... needs to be able to detect almost every wavelength in the visible spectrum" and "needs to respond equally to every wavelength in the visible spectrum." Our eyes do not perform very well in the blue region, and thus we do not need our cameras to, either. – Michael C Feb 02 '20 at 01:07
0

The pictures you posted look like you're getting motion blur or bad processing. If you look at things zoomed in you can see the bad edges. The little halo of blur around the edges is what happens around moving objects in low res video. It definitely shouldn't be there. The bush zoomed in looks bad. It looks super blocky and low res. These pictures look like they have a bunch of bad processing and are way too compressed. Zoomed in:

[two zoomed-in crops of the linked image]

moot
  • 629
  • 4
  • 7
  • I appreciate all feedback, though for it to be most useful we need to understand each other. The other people gave pretty good feedback on overall sharpness. Have you checked the image on a large screen or on a smartphone? This image was uploaded to Flickr in full resolution, so there is no compression other than the conversion to JPEG, and I left the quality slider at 100%. At 100% I can see some very small traces of halo, which are present from 35% sharpening up to the 78% that I set. Noise reduction is at 23%. – DrazenC Jan 22 '20 at 14:20
  • Normally I do not aim to have bushes too sharp, as they tend to look unnatural, they are rarely completely still, and this image was taken at 1/60 s with VR on. When checking sharpness I mostly focus on the buildings. As others have said, a light glow is visible on bright objects. My main concern is whether I can make it any sharper than this, bearing in mind that my equipment is not top-class, or whether I'm doing something wrong. – DrazenC Jan 22 '20 at 14:20
  • Oh, I've now noticed that you used Imgur as a host. The artifacts around the edges of the bushes exist neither in the original file nor on Flickr zoomed to maximum (which goes to one-to-one on Flickr); they might be caused by Imgur compression or by some interpolation applied when the image is zoomed with the PC or smartphone system zoom. – DrazenC Jan 22 '20 at 17:37
  • @DrazenC Yeah, I downloaded the image from your link, which is twice the dimensions and has those bad edges. When I work off a screenshot of your link, the double-edge issues aren't there. I just posted through here. The post here has been touched, but it looks the same at 74% JPG quality; pixels move a little. – moot Jan 22 '20 at 18:40
  • @DrazenC You're talking about using sharpening and noise reduction. These are interfering with your sharpness issues. You shouldn't use sharpening. It's not doing what people think. Sharpening distorts everything. Focus gets distorted: the natural blurring and sharpening that occurs with distance is distorted. I know supposed experts use it, but they don't understand what's happening. Also, don't use noise reduction. It's like sharpening: it distorts things and creates false graphics. There are better ways to get that fix without using computer-generated imagery. – moot Jan 22 '20 at 18:52
  • The image itself is computer-generated, as are the noise and other issues, but I really have no incentive to go into such a broad discussion. I'm interested in experienced feedback on a typical, specific problem, and I fail to understand your feedback: the artifacts don't exist either in the original image or on Flickr. I've checked a lot of highly liked images on Flickr, taken with powerful gear, yet I see no substantial difference in relation to the provided example. Can you post some image you deem perfect, at least as a reference? – DrazenC Jan 23 '20 at 22:52
  • Just zoom in to see the pictures. I don't know Flickr and didn't know the zoomed-in pic wasn't yours. I see you turned off downloads or something? I process pictures professionally. Here it is as simply as I can put it: all the sharpening you're doing in Lightroom, or wherever, is wrong. I'm not sure if you're really still asking or have the forum spins, but if you really want to know, post your original pictures with zero alterations. – moot Jan 24 '20 at 03:53
  • No offense, but I'm interested in experienced feedback, not in learning a unique post-processing method known only to you. If you have some specific system that is much better than others, you could make a business of teaching courses; what I need is simple feedback. – DrazenC Jan 26 '20 at 14:20
  • @DrazenC I have over 15 years experience and I'm telling you to use the simplest, quickest way to solve your issue that's commonly known. Stop trolling here – moot Jan 26 '20 at 16:13
  • "Here" is the thread I opened to seek for well-intended help, so it is just what I wanted to urge you in polite manner, to stop trolling and go elsewhere, to leave this thread for a more constructive discussion. – DrazenC Jan 27 '20 at 19:22
  • @DrazenC The problem for you is that everything is here: me giving a good answer, you trying to tell me why my answer is wrong and about all the corrections you do, me trying to be nice and answer, you trying to explain why I'm wrong again and I'm inexperienced. It's all there. You trying to say I'm the one trolling and telling me to leave is the best. – moot Jan 27 '20 at 20:12