
For digital sensors, in terms of the imaging medium, is the minimum CoC equal to the size of one sensor pixel or two? And why? For example, using a 35mm full-frame digital sensor where the pixel size is 0.00639mm (and rounding up), is the minimum CoC 0.007mm or 0.014mm?

In "What exactly determines depth of field?", jrista says:

Digital sensors do have a fixed minimum size for CoC, as the size of a single sensel is as small as any single point of light can get (in a Bayer sensor, the size of a quartet of sensels is actually the smallest resolution.)

However, later on jrista says:

In the average case, one can assume that CoC is always the minimum achievable with a digital sensor, which these days rolls in at an average of 0.021mm, although a realistic range covering APS-C, APS-H, and Full Frame sensors covers anywhere from 0.015mm - 0.029mm

Using 0.015mm as the minimum CoC for digital full-frame sensors works out to about two sensor pixels in size instead of one. Does this not match up with what was said originally? Or does it, by implying (but not explicitly stating) the use of a Bayer sensor, which is said above to have a minimum CoC equal to a quartet of sensels, and that would be two pixels wide and ~0.013mm?

And in "Why do some people say to use 0.007 mm (approximate pixel size) for the CoC on a Canon 5DM2?", Michael Clark says:

With digital sensors, the size of the pixel determines the size at which the circle of confusion (CoC) becomes significant when viewing at 100% crops. Any blur circle smaller than the pixel pitch will be recorded as a single pixel. Only when the blur circle becomes larger than an individual pixel will it be recorded by two adjacent pixels.

However, this very interesting webpage says:

The smallest size of the image CoC may be limited by other factors. For digital sensors, the CoC cannot be smaller than the physical size of two pixels (image elements). Obviously nothing smaller can be resolved. Typical pixel sizes for high resolution digital cameras are in the range of .006 to .012mm. These sizes yield resolution numbers of 83 lp/mm and 43 lp/mm respectively. These equate to CoC values of .012 and .023mm. A similar effect is unavoidable with film emulsions since the grain size determines the size of an individual image element. The typical “graininess” of film varies from .004 to .018mm.

The idea of using the size of one sensor pixel, as explained by Michael Clark, makes good sense to me, so I'm confused by the other website's claim above that it's two pixels. Everything else in that article seems accurate and well said, so it's hard to believe the author is incorrect about the minimum CoC size.

j-g-faustus, quoted below, seems to imply that diffraction/Airy disk size is related. Is he correct in saying "the point where you can no longer tell two airy disks apart doesn't happen until the airy disk diameter reaches two pixels"? I thought the Airy disk became a problem when it was larger than one pixel, or larger than the image's CoC?

@DavyCrockett I think using two pixels makes sense, by analogy with the diffraction limit - the point where you can no longer tell two airy disks apart doesn't happen until the airy disk diameter reaches two pixels. Similarly, a CoC of more than 1 pixel will bleed over and reduce the contrast between a pixel and its neighbour, but actual Confusion, the point where you can't tell two pixels apart, doesn't happen until CoC reaches two pixels. That would be my best guess, anyway

DavyCrockett
  • On airy disks and diffraction, see diagrams and explanation at Cambridge in Color: "As a result of the sensor's anti-aliasing filter (and the Rayleigh criterion above), an airy disk can have a diameter of about 2-3 pixels before diffraction limits resolution. However, diffraction will likely have a visual impact prior to reaching this diameter." The basic idea is that a pixel is a solid block of color, there's no resolving taking place before you have at least two pixels with different values. – j-g-faustus Jun 02 '13 at 18:16
  • As for whether it makes sense to treat CoC the same way - I think so, but I'm curious too :) – j-g-faustus Jun 02 '13 at 18:17

2 Answers


Imagine a blur circle (or Airy disc) striking a sensor with the middle of the circle centered on the middle of a pixel.

  • If the circle is one pixel or less in width, all of it will only strike one pixel.
  • If the circle is from two to three pixels in width it will strike all of one pixel and parts of the eight pixels that surround that one pixel for a total of 9 pixels in a 3X3 square.
  • If the circle is five pixels wide it will strike all or part of 25 pixels (5X5 square).
  • At seven pixels wide, the blur circle will cover parts of 45 pixels (a 7X7 square minus the four corner pixels, because the circle will not quite reach those corner pixels). 21 of the pixels will be completely covered and the other 24 will be partially covered. The luminance values (derived from this single blur circle, not taking into account all of the other point sources of light and their resulting blur circles falling on the sensor at the same time) of the 21 fully covered pixels would be higher than the varying luminance values of the other 24. Among the pixels fully covered by the blur circle, those nearer the center would have higher luminance values than those near the edge.

Now imagine a blur circle centered on the intersection of a 2X2 square of four pixels.

  • If the circle is one pixel or less in width, it will strike parts of the four pixels that form the 2X2 square.
  • Even if the circle is up to two pixels in width, it will only strike parts of the same four pixels.
  • If the circle is over two and up to four pixels wide, it will strike all or part of sixteen pixels (a 4X4 square). Four will be fully covered, eight will be mostly covered, and the other 4 will be less than one half covered.
  • At five pixels wide, the blur circle is striking all or parts of 32 pixels (a 6X6 square minus the four corner pixels).
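These counts can be sanity-checked with a quick brute-force sketch (my own illustration, not part of the original answer: pixels are unit squares on a grid, a pixel counts as struck whenever the circle overlaps any part of it, and the two centering cases are just different center coordinates):

```python
import math

def coverage(diameter, center):
    """Count pixels fully and partially struck by a blur circle.

    Pixels are unit squares on an integer grid. `center` is the circle's
    center: (0.5, 0.5) puts it on a pixel's middle, (0.0, 0.0) on the
    corner where a 2X2 square of pixels meets.
    """
    r = diameter / 2.0
    cx, cy = center
    full = partial = 0
    n = int(math.ceil(r)) + 1  # search window comfortably larger than the circle
    for ix in range(-n, n + 1):
        for iy in range(-n, n + 1):
            # Nearest point of the pixel square [ix, ix+1] x [iy, iy+1]
            # to the circle's center, and the pixel's farthest corner.
            nx = min(max(cx, ix), ix + 1)
            ny = min(max(cy, iy), iy + 1)
            nearest = math.hypot(nx - cx, ny - cy)
            farthest = max(math.hypot(ix + dx - cx, iy + dy - cy)
                           for dx in (0, 1) for dy in (0, 1))
            if farthest <= r:
                full += 1
            elif nearest < r:
                partial += 1
    return full, partial

# Centered on a pixel: 1, 9, 25, then 45 pixels (7X7 minus the corners).
print(sum(coverage(1, (0.5, 0.5))))  # 1
print(sum(coverage(3, (0.5, 0.5))))  # 9
print(sum(coverage(5, (0.5, 0.5))))  # 25
print(sum(coverage(7, (0.5, 0.5))))  # 45
# Centered on a corner: 4 pixels up to two pixels wide, then 16, then 32.
print(sum(coverage(2, (0.0, 0.0))))  # 4
print(sum(coverage(5, (0.0, 0.0))))  # 32
```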

I think a lot of the confusion comes from a misunderstanding of how demosaicing algorithms use interpolation to produce an R, G, and B value for each pixel of a Bayer-type sensor. If you make the incorrect assumption that the lowest unit of color resolution on a Bayer-type sensor is a 2X2 pixel square, then the CoC works out to two pixels in width, but only if you also make the incorrect assumption that every blur circle is ideally centered to cover the minimum number of pixels for its size. It also depends on whether you define resolution as the smallest unit a point light source can be represented as (one pixel) or the smallest unit that can produce contrast (two pixels). When resolution is defined in terms of line pairs, as on the webpage you cited, the second definition, the one regarding contrast, applies.

Michael C
  • That visualization of the blur circles really helped me see things more clearly, but the last paragraph goes over my head and I'm not exactly solid on a few things. It leaves me thinking - Why is it an incorrect assumption that 'the lowest unit of color resolution using a Bayer type sensor is a 2X2 pixel square'? What should one assume in regards to the centering/lack thereof of blur circles on pixels? How does defining resolution as either a point light source (1 pixel) or line pairs (2 pixels) affect the end result of the photo and therefore the print? Should these be separate discussions? – DavyCrockett Jun 02 '13 at 21:13
  • The reason it is incorrect to assume the lowest unit of color resolution using a Bayer type sensor is a 2X2 pixel square is because that is not how cameras with Bayer sensors work. If it were, it would take a 6000X4000 (24MP) sensor to produce a 3000X2000 (6MP) image. Instead, the demosaicing algorithms (the software that interprets the RAW file) use some very complex mathematical equations to interpolate a Red, Green, and Blue value for each pixel. – Michael C Jun 03 '13 at 02:55
  • The centering, or lack thereof, of the blur circles just points out that the center of a blur circle may be centered anywhere from the middle of a pixel to the corner of a pixel which is also the intersection of that pixel with adjacent pixels. It also demonstrates that a blur circle's size is not the only criteria that determines how many pixels will register some of its light. Where it is centered with regard to the sensor grid positioning also comes into play. – Michael C Jun 03 '13 at 03:12
  • How does defining resolution as either a point light source (1 pixel) or line pairs (2 pixels) affect the end result of the photo and therefore the print? None whatsoever. Regardless of how you define resolution, the same image taken in the same conditions with the same hardware and processed with the same settings in the same software will be the same. When you want to figure DoF you don't start with sensor/pixel dimensions - you start with intended viewing size/distance and visual acuity of the viewer, then factor in the magnification of the recording medium needed to get that viewing size. – Michael C Jun 03 '13 at 03:18
  • The reason you use .03 (assuming the 8X10 viewing size at 10 inches by a person with 20/20 vision) as the CoC for FF cameras and .02 for APS-C cameras has nothing to do with the pixel pitch of the respective sensors. It is determined by the magnification factor needed to produce an 8X10 image from a specific sized sensor. If you are going to view the image at 100% on your monitor, then the viewing size is determined by the pitch of your monitor. The acceptable CoC is then determined by that combined with your viewing distance, and your visual acuity. – Michael C Jun 03 '13 at 03:24
  • Okay, thank you very much for that. It'd be great to finish it off with a practical example; Assuming 8x10 viewing size at 10 inch by a person with 20/20 vision. If you wanted to max out the print size to where you still maintain the subject in the same sharpness (5.7LP?) at the same viewing distance of 10 inches (using FF sensor) as you did in the 8X10, what CoC would you use and what print size do you end up with? I understand a maxed out sized print won't have the same DOF as the print at 8x10 at the same viewing distance – DavyCrockett Jun 03 '13 at 03:53
  • An increase in magnification will reduce the sharpness, so you could not "max out" the print size and still maintain the subject in the same sharpness at any size larger than 8X10 unless you shoot a different image with a narrower aperture to allow the same DoF with the difference in display sizes taken into account. At the print size it is all linear: a 16X20 print would need twice the DoF as the 8X10 print. That requires a CoC one half the size of that used for the 8X10. So to display a 16X20 at 10 inches viewed by a person with 20/20 vision, you would need to use .015 as your CoC to get the same DoF. – Michael C Jun 03 '13 at 04:06
  • Please note: Doubling the DoF will do nothing to increase the absolute resolution of items at the point-of-focus distance from the camera. So the sharpness of the items in sharpest focus would still be halved by doubling the print size (because you are magnifying each pixel twice as much). If increasing the DoF requires an aperture greater than the DLA for the camera used, it could reduce absolute resolution even more. The perceived sharpness, however, could very well still be acceptable. – Michael C Jun 03 '13 at 04:10
  • Awesome, and yes I did mean use a different aperture and take a different photo, with the emphasis on the level of perceived sharpness in the sharpest areas of the two photos, an 8x10 and the theoretical max print size to do so. – DavyCrockett Jun 03 '13 at 05:00
  • "So the sharpness of the items in sharpest focus would still be halved by doubling the print size (because you are magnifying each pixel twice as much)." I was thinking perceived sharpness was based on how many line pairs the eye is capable of resolving at a viewing distance, so say for 5.7LP at 10 inches (20/20 vision), then even if you were to double the magnification of the pixels the sharpest part of the image still contained 5.7 LP resolution, so the perceived sharpness of the sharpest area would be the same? – DavyCrockett Jun 03 '13 at 05:03
  • It's Line Pairs per inch. If you double the size of a print of a test chart, you have half as many line pairs per inch at the point where it goes from line pairs to grey. – Michael C Jun 03 '13 at 06:14
  • Hmm okay I think I'm starting to get this now, I must have been really confused. For instance, you take a photo of a 12" ruler and focus on the 6" tick mark using 0.03mm as CoC with 'F' focal length at 'S' subject distance; let's also assume the tick mark measures 0.007mm on the image sensor. If you were to take another photo, keeping 'F' and 'S' constant, with a larger F-stop, using CoC 0.007mm, then the tick mark in the two images is in exactly the same sharpness, correct? – DavyCrockett Jun 03 '13 at 17:31
  • (cont'd) Then if you print the first image at 8x10 and the second image at 16x20; The tick in the first print will be twice as sharp (LP per inch) as the tick in the second print? And finally to fully understand; since the tick mark was captured with 0.007mm blur circles, in the second print the tick will be just as sharp (LP per inch) as certain other elements in the first print (not defined in the example but assuming they exist) that were captured with blur circles of 0.015mm? – DavyCrockett Jun 03 '13 at 17:45
  • If you are focused on the tick mark it really doesn't matter what you use for the CoC. CoC is used to determine the depth of field for a given aperture at a given focal length. How sharp the tick mark you have focused on is determined by the resolution limit of the lens and image sensor combination. Do you realize that if the tick mark measures .007 on the FF image sensor it will only measure .056mm on the print? That is barely perceptible as a dot at 10 inches by a person with 20/20 vision. – Michael C Jun 03 '13 at 22:17

You're thinking of CoC all wrong. It has nothing to do with the size of a pixel - it has to do with the features you can see on a print. Realize that the standards for DOF were set long before there were pixels. The question you're trying to answer is, for a reasonably sized print viewed at a reasonable distance by a person with good vision, what will look sharp? The CoC is an attempt to distill that down to a single number.

A picture viewed at 10x on your monitor will look blurry no matter how sharp it is, as will a print held up to your nose.

What matters is not so much the size of the pixels, but the size of the sensor. If a sensor is half the size then the CoC must be half the size so that the details will remain the same proportion of the final image.
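That proportionality is often captured by taking the CoC as the sensor diagonal divided by roughly 1500 (a sketch of that common rule of thumb, not something from this answer; the divisor varies by source, with 1442 and 1730 also in use):

```python
import math

def coc_mm(sensor_w_mm, sensor_h_mm, divisor=1500):
    """CoC as sensor diagonal / divisor (the common 'd/1500' convention)."""
    return math.hypot(sensor_w_mm, sensor_h_mm) / divisor

print(round(coc_mm(36.0, 24.0), 3))   # full frame: ~0.029, usually rounded to 0.03
print(round(coc_mm(22.2, 14.8), 3))   # Canon APS-C: ~0.018, usually rounded to 0.02
```

Note how halving the sensor dimensions halves the diagonal and therefore halves the CoC, exactly as described above.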

If your sensor does not have the resolution to cover the CoC, then you must accept that nothing in your picture will be sharp, not even when it's in perfect focus.
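To see how the chosen CoC then feeds into depth of field, here is a sketch of the standard thin-lens DoF formulas (the numbers are illustrative assumptions, not from this answer; real lenses deviate somewhat, especially at close focus):

```python
def dof_limits(focal_mm, f_number, subject_mm, coc_mm):
    """Near and far limits of acceptable sharpness, thin-lens approximation."""
    # Hyperfocal distance: focus here and everything from half this
    # distance to infinity is acceptably sharp.
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    if subject_mm >= hyperfocal:
        return near, float("inf")
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far

# A 50mm lens at f/8, focused at 3 m, on full frame:
n1, f1 = dof_limits(50, 8, 3000, 0.03)    # CoC for an 8x10 print at 10 inches
n2, f2 = dof_limits(50, 8, 3000, 0.015)   # halved CoC, as for a 16x20 print
print(round(f1 - n1))  # total DoF in mm with CoC 0.03
print(round(f2 - n2))  # roughly half as much DoF with CoC 0.015
```

Halving the CoC (bigger print, same viewing distance) roughly halves the depth of field, which matches the 8X10 versus 16X20 discussion in the comments above.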

Mark Ransom