12

It seems like the Foveon sensor should be able to produce better images, because it's not dependent on the separate red, green, and blue pixels found on most digital cameras. However, cameras equipped with Foveon sensors are pretty much nonexistent. Why?

(Side note: This question was inspired by a Bayer Filter answer where the Bayer filter potentially caused problems...)

Billy ONeal
  • 3,350
  • 5
  • 29
  • 47
  • Some technical shortcomings of the previous generation of the Sigma Foveon sensor: http://www.pentaxforums.com/forums/pentax-news-rumors/106349-foveon-x3-sensor-4.html#post1479064 – eruditass May 23 '11 at 20:59

7 Answers

10

What happened is that Sigma bought Foveon and put a lot of pressure on them to produce a sensor that is actually capable of competing with standard DSLR sensors. Now that Sigma is building the whole camera and sensor, there is a lot more focus on producing a compelling end-product.

Last year Sigma announced the SD1, which uses an APS-C (1.5X crop) sensor with 15 million photosites. The way they count, Sigma calls it a 46-megapixel sensor. They have not released many details to members of the press (me at least), but it is expected to be available by this summer.

There are still several Sigma cameras (DP1x, DP2s, SD15) in production which use the 1.7X Foveon sensor with 4.5 million photosites (aka 14 megapixels).

Itai
  • 102,570
  • 12
  • 191
  • 423
  • 6
    It should be noted that the use of megapixels here cannot be used in direct comparison to megapixels of bayer-type sensors. While there may be 46 million distinct photo-sensitive elements in the sensor, the image produced is a 15 megapixel image. The benefits of Foveon are lower color moire and better color definition at each image pixel. – jrista Apr 09 '11 at 19:28
  • 5
    It should be noted that bayer-type sensors also have no real relation from the MP ratings they use to the final output image, because MP gives you a photosite count, three of which are required for any one output pixel. In addition any bayer sensor may have a different strength of AA filter which further impairs image clarity, while still producing the same pixel count in output. Foveon sensors do not use AA filters. – Kendall Helmstetter Gelner Apr 10 '11 at 06:58
  • @Kendall: Bayer sensors would be most accurately described as having XYmp pixel "intersections". Bayer sensors and their image processors produce images by interpolating all the neighboring sensor photosites at each intersection to produce an RGB image pixel. That means four (not three) bayer photosites are interpolated to produce a single RGB pixel. In a 15mp bayer sensor, there are indeed 15mp "RGB pixel intersections", due to the way interpolation is performed. Just multiply the width and height of bayer image sizes to see how real bayer MP ratings are. – jrista Apr 11 '11 at 05:59
  • 1
    As for AA filters, it depends on the filter whether it impairs image clarity or not. The purpose of the filter (which I believe are better described as low-pass filters) is to filter out spatial frequencies below the spatial resolution of the sensor. When a sensor does try to resolve spatial frequencies below its "nyquist limit", the resulting artifacts have a far greater detrimental effect on the image than anything else. The low-pass filter, when designed properly, will only filter out frequencies that can't be resolved to start with...thus, they don't "further" impair anything. – jrista Apr 11 '11 at 06:03
  • Some DSLR's have low pass filters that are too strong. In the general case, however (Canon and Nikon), they seem to be just right (which one would expect, after more than a decade of manufacturing and using bayer sensors.) The current generation of CMOS bayer sensors seem to properly resolve or out-resolve all but the absolute best lenses, so any complaints about low-pass filters only apply to fringe cases (or in the case where the filter is improperly designed and too strong.) – jrista Apr 11 '11 at 06:08
  • @jrista, Kendall: Foveon and Sigma point out that a direct MP comparison is pointless. Of course any serious photographer would know that any MP number that high is useless anyway in most everyday use, whether it's 15 or 45 :) – jwenting Apr 11 '11 at 06:32
  • @jwenting: I agree, direct mp comparisons are useless. However, whether more MP is worthless in general is a matter for another debate. If you like to print large, more MP is important. – jrista Apr 11 '11 at 19:55
  • I also said direct comparison was not right (because detail captured by a bayer sensor varies by color in scene and Foveon data captured is constant regardless of color), and gave a rough estimate for a 46MP Foveon sensor having the same detail as a 30MP bayer sensor. That is true of current cameras, the 15MP Foveon sensor holds the same level of detail as a 10-12MP bayer imager. Thus you can print a Foveon 15MP image just as large as a 12MP bayer image and detail will appear identical. In fact someone did that with a 14MP bayer camera: http://www.whisperingcat.co.uk/scans/sd14vs14nx.htm – Kendall Helmstetter Gelner Apr 11 '11 at 20:02
  • Pricing and availability have been announced – Evan Krall May 23 '11 at 08:33
7

It comes down to this: at least for most people, spatial resolution (especially in the green range of colors) is much more important than color resolution, especially in the reds and blues. The color response curve I included in a previous answer gives at least some notion of the reason for this.

This is particularly relevant when the vast majority of pictures stored/displayed electronically are in JPEG or MPEG formats. These formats support down-sampling the chroma channels to half resolution anyway -- and (especially in the case of MPEG) that's how most pictures are stored. As such, converting data from a Foveon sensor to JPEG or MPEG format typically throws away quite a bit of the extra information you collected.
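As a rough sketch of what that chroma down-sampling means (a toy example, not any particular camera's pipeline): in 4:2:0 subsampling, each 2x2 block of chroma samples is collapsed to a single value while luma keeps full resolution.

```python
# Toy 4x4 chroma plane (the values are arbitrary illustrations).
cb = [[10, 20, 30, 40],
      [12, 22, 32, 42],
      [50, 60, 70, 80],
      [52, 62, 72, 82]]

# 4:2:0 chroma subsampling: keep one sample (the average) per 2x2 block.
cb_sub = [
    [(cb[r][c] + cb[r][c + 1] + cb[r + 1][c] + cb[r + 1][c + 1]) // 4
     for c in range(0, 4, 2)]
    for r in range(0, 4, 2)
]

print(cb_sub)  # [[16, 36], [56, 76]] - 16 chroma samples reduced to 4
```

So a capture exported as 4:2:0 JPEG keeps only a quarter of the chroma samples, which is one reason much of the Foveon sensor's extra per-pixel color data is discarded on output.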

Though the benefit isn't necessarily huge, some Bayer-sensor cameras (e.g., the high-end Leaf/Phase One's) support sensor-shifting to take a series of four pictures (of a fixed subject) with the sensor shifted to different positions, so each pixel in the final picture has full color information (and still has twice as many bits for green as for red or blue, so it still fits reasonably well with normal vision).
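The four-shot sensor-shift idea can be sketched like this (a toy model assuming an RGGB layout; real implementations differ in the details):

```python
def filter_color(r, c):
    """Which filter covers photosite (r, c) in an assumed RGGB mosaic."""
    if r % 2 == 0:
        return 'R' if c % 2 == 0 else 'G'
    return 'G' if c % 2 == 0 else 'B'

# The four one-photosite offsets used across the pixel-shift sequence.
SHIFTS = [(0, 0), (0, 1), (1, 0), (1, 1)]

# Over the four exposures, scene location (0, 0) is sampled through
# every filter in the 2x2 pattern:
colors = sorted(filter_color(dr, dc) for dr, dc in SHIFTS)
print(colors)  # ['B', 'G', 'G', 'R'] - full color, with green sampled twice
```

Note that green lands twice per location, matching the "twice as many bits for green" point above.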

Jerry Coffin
  • 19,310
  • 1
  • 55
  • 87
  • Early Sigma cameras used JPEG compression settings (subsampling) that didn't show their sensor to the best advantage, but they fixed this. I wish I could remember where I had seen a quite graphic demonstration of the problem. – Mark Ransom Apr 09 '11 at 23:45
  • Note that the phase-shifting approach is really only practical for still subjects. There is a lot of value in gathering all data at once. – Kendall Helmstetter Gelner Apr 10 '11 at 06:29
  • @Kendall: yes, the sensor shifting approach has only limited application. It's worth noting, however, that their current top of the line has an 80 megapixel sensor, so resolution is pretty decent even without sensor shifting. – Jerry Coffin Apr 10 '11 at 06:58
  • 1
    I don't really think it's relevant to compare a medium format body in any with a 35mm body, they would be used in wholly different ways anyway... I just wanted to note that while the sensor shifting is one way to potentially address the issue even for smaller cameras, that it has real drawbacks. – Kendall Helmstetter Gelner Apr 10 '11 at 07:43
  • 1
    Also of note is that relying heavily on the observed theory that green spatial resolution is more important than blue/red resolution, leads to the generation of images that appear sharper but are less accurate. There is a tradeoff in any kind of compression of data, and throwing away 2/3 of the visible wavelengths for any given spatial location in an output image is most definitely a form of pre-image-compression not even the use of RAW formats can work around. – Kendall Helmstetter Gelner Apr 10 '11 at 07:47
  • 4
    @Kendall: but calling it "2/3rds" is a little deceptive. Clearly, we're not recording all of the electromagnetic spectrum no matter what. So, focusing on percentage of the human vision color space covered seems much more realistic. – mattdm Apr 10 '11 at 11:59
  • I did focus on the human vision color space, I said "2/3 of the visible wavelengths" are discarded. The simple fact is that for any one output pixel from a bayer image only one of red, green, or blue spectral data was recorded, the remaining wavelengths were all blocked by the bayer filter and the pixel has to get the rest of the color data from surrounding pixels. – Kendall Helmstetter Gelner Apr 10 '11 at 23:37
3

Foveon sensors are great in theory, but in practice they aren't a compelling choice. They're generally much lower resolution and can only compete by counting the 3 sensors at each pixel position as individual pixels.
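To make the pixel-counting debate concrete, here is a toy bilinear demosaic (the simplest textbook scheme, not any manufacturer's actual algorithm) showing how many values per channel a Bayer sensor measures versus interpolates:

```python
# A toy 4x4 Bayer mosaic (RGGB pattern), one sample per photosite.
# Values are arbitrary; real demosaicing is far more sophisticated.
BAYER = [
    [110, 200, 120, 210],   # R G R G
    [205, 130, 215, 140],   # G B G B
    [115, 220, 125, 230],   # R G R G
    [225, 150, 235, 160],   # G B G B
]

def color_at(r, c):
    """Which filter color sits over photosite (r, c) in an RGGB mosaic."""
    if r % 2 == 0:
        return 'R' if c % 2 == 0 else 'G'
    return 'G' if c % 2 == 0 else 'B'

def demosaic_channel(color):
    """Fill every pixel of one channel, averaging the nearest photosites
    that actually carry that color (simple bilinear interpolation)."""
    n = len(BAYER)
    out = [[0] * n for _ in range(n)]
    for r in range(n):
        for c in range(n):
            if color_at(r, c) == color:
                out[r][c] = BAYER[r][c]  # measured directly
            else:
                # estimated from neighbors: never measured at this site
                vals = [BAYER[rr][cc]
                        for rr in range(max(r - 1, 0), min(r + 2, n))
                        for cc in range(max(c - 1, 0), min(c + 2, n))
                        if color_at(rr, cc) == color]
                out[r][c] = sum(vals) // len(vals)
    return out

red = demosaic_channel('R')
measured = sum(color_at(r, c) == 'R' for r in range(4) for c in range(4))
print(measured)       # 4 of 16 red values were actually measured
print(16 - measured)  # 12 of 16 were interpolated from neighbors
```

In this toy frame only a quarter of the red channel is measured; the rest is estimated, which is exactly the point both sides of the comments below are arguing about.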

Sigma still produces cameras with Foveon sensors: http://blog.sigmaphoto.com/2011/faqs-the-sigma-camera-and-its-foveon-x3-direct-image-sensor/

Mark Ransom
  • 1,495
  • 10
  • 12
  • +1 -- Does that loss of resolution affect the image quality? Sure, you've got fewer pixels, but you're getting all 24 bits per pixel, rather than 8. (No, I don't work for foveon, I'm just trying to understand ;) ) – Billy ONeal Apr 09 '11 at 19:05
True. It turns out that most people live better with the color accuracy they get out of a ~14MPix bayer-interpolated sensor than true 24-bit colors out of a Foveon DSLR that has only 5Mpix resolution. – che Apr 09 '11 at 19:06
  • Also, demosaicing algorithms have gotten a lot better since the early days, which mitigates Foveon's advantage. – coneslayer Apr 09 '11 at 20:01
  • Foveon processing has also gotten better, which mitigates some bayer advantages. And Bayer systems are pretty well understood at this point where Foveon algorithms are not. – Kendall Helmstetter Gelner Apr 10 '11 at 06:28
  • 2
    Come to think of it, your statement about counting pixels seems kind of backwards. A 15 MP bayer camera has exactly one photosite (either red, green, or blue) at any location, yet counts a total of three of them at each location (the combination of red, green, blue) to give you that 15MP output number. You seem to be saying Foveon is misleading you while not acknowledging Bayer is doing the same thing from the other end, pretending they have 15MP of data when they really have less. How much resolution has a 15MP bayer camera got when you put on a red filter? 3.75MP of data recorded. – Kendall Helmstetter Gelner Apr 10 '11 at 06:56
  • 1
    @Kendall: Technically speaking, a 15mp bayer sensor counts INTERSECTIONS between quads of pixels, in terms of the image produced. Bayer doesn't have less than 15mp, it simply interprets the information at each point that represents an image pixel in a certain way. All things being equal, the human eye works more like a bayer array than a Foveon, and our visual acuity/color perception is superb. I think you put too much negative weight on bayer sampling than it deserves, and too much bonus on foveon sampling. Both technologies have their pros and cons, foveons are just different than bayers. – jrista Apr 11 '11 at 05:17
  • Bayer does have less than 15MP in a bayer image - the process you described is upsampling data to arrive at the output image. In a 15MP bayer imager you have 3.75 million sensors gathering red channel data. In the output image you have 15 million red data points. Somewhere data was extrapolated from a much smaller set. – Kendall Helmstetter Gelner Apr 11 '11 at 16:39
  • 1
    @Kendall, although each pixel of a Bayer array has a filter in front of it, they are still individual pixels with their own spatial characteristics. Sophisticated interpolation allows the red channel to incorporate information from the green and blue channels as well. – Mark Ransom Apr 11 '11 at 16:43
  • If I have a 15mp sensor such as in the Canon 500D, the final output image is 4725x3168 pixels. Multiplying those two numbers together, I have 14,968,800 pixels, or 14.9mp. The bayer sensor has 15.1mp, which includes additional border pixels to account for the way bayer interpolation works, as well as provide for calibration and design mechanics. Regardless, my final image is 14.9mp, or 15mp when nicely rounded. That is REAL resolution...not fake resolution. Its not 800 lines of resolution, its an actual, physical 3168 lines, and the photos look great. Lets not mince words here. – jrista Apr 11 '11 at 19:40
  • In your Canon 500D, the truth of the matter is that your input red sensor count is 3.75 million, your input blue sensor count is 3.75 million, your input green sensor count is 7.5 million. When you add those together you get 15 million. But in your output IMAGE you have an image with 15 million red values, 15 million green values, 15 million blue values. That's a total of 45 million data points, created out of 15 million. The 30 million values not originally recorded are data wholly created by estimating, it might be very good but it's still not "real" data. – Kendall Helmstetter Gelner Apr 11 '11 at 20:07
  • You will see the truth of the matter when you compare a 15 million output pixel image from a 45MP SD-1, compared to a 15MP bayer camera image. Then you will understand directly that in fact all along the bayer images you have been looking at have all been upsampled. – Kendall Helmstetter Gelner Apr 11 '11 at 20:08
  • @mark: A pixel in an output image is composed of red, green, and blue values. The "pixel" you are referring to for a bayer sensor has only a single color channel value, the rest have to be extrapolated from pixels nearby. If you have a red leaf in an image you are capturing that ends up being a single pixel in the output, that leaf will be lost unless is happens to hit one of the 1/4 of the sensors on the bayer imager that is red. We know the problem is real because we see it occur with color moire, where the colors in the scene do not hit the right sensors and color data is confused. – Kendall Helmstetter Gelner Apr 11 '11 at 20:12
  • In the end the Bayer system is a clever form of compressing images without losing much image data. But it still loses some, and more importantly loses detail varying on scene color, which makes for inconsistent sharpness across an image that should ordinarily vary by depth of field, not color. – Kendall Helmstetter Gelner Apr 11 '11 at 20:14
  • @Kendall, just to be clear I'm using the term "pixel" here to represent the smallest addressable area on the image plane. I believe this is the definition from the dawn of digital imaging. The definition is independent of the color information available at each address. I hadn't realized the SD-1 was competitive in pixel addressing, I'd love to see the output. – Mark Ransom Apr 11 '11 at 22:04
  • The SD-1 has three layers of 15 million sensors, and according to the patent the blue layer looks to be subdivided really into a number of different photosensors too. I also would like to see the output from the SD-1, and hope it is not much delayed from the earthquake... – Kendall Helmstetter Gelner Apr 12 '11 at 18:36
3

What happened to the Foveon sensor is that Sigma adopted the technology early on, but other camera companies were reluctant to do so.

That state continues to this day. Sigma continues to evolve its cameras, currently offering the SD-15 DSLR and the fixed-focal-length, large-sensor compact cameras DP-1 and DP-2.

However, recently Foveon technology seems to be on the upswing. As another post mentioned, Sigma seems close to releasing a greatly improved Foveon sensor in the SD-1, with even better noise handling and resolution that exceeds pretty much any consumer DSLR today (though not medium format systems). The new sensor is known to be roughly 46MP, which in Bayer-equivalent terms means around 30MP of roughly equal detail - that is to say, if you took the 15-million-pixel output image from a RAW converted from an SD-1 and upsampled it to 30MP, it would look identical to a 30MP Bayer image, except that it would lack the color pattern issues a Bayer sensor might have and would have better falloff in detail. Foveon sensors have traditionally offered large dynamic range and very low noise at lower ISOs, but since the new sensor seems so different we need to wait to see what its characteristics are like going forward.

So what has changed for the better to allow such advances? It's partly that we are seeing the result of steady R&D work at Foveon, but also that Sigma bought Foveon and now has them focused wholly on producing better large camera sensors. Before, Foveon was trying to see what segment of the photographic market might make a good customer for the technology, and as a result was a lot more scattered in its goals.

Not only are the results of this focus seen in really significant resolution increases over previous generations of the sensor, but also in the technology being selected to go to Mars by the ESA:

http://translate.google.com/translate?hl=da&sl=ko&tl=en&u=http%3A%2F%2Fwww.styledb.com%2Fbbs%2Fboard.php%3Fbo_table%3DB08_news%26wr_id%3D102

Sorry for the rough translation, I cannot find a single other source for that news.

So basically what's happening with Foveon technology is that it's still evolving, just at what seemed to be a slower pace than other sensor technologies but what may end up being a leap ahead of them. We need to see what the new sensor can do to see where the state of Foveon technology really sits these days, so this is probably a great question to revisit in three months' time.

If you really want more information on just how a 15-million-pixel Foveon output image can contain as much or more detail than a 30MP bayer output image, read this article comparing a 4.7MP Foveon sensor to a 12MP Bayer one (the Canon 5D):

http://www.ddisoftware.com/sd14-5d/

Especially note color chart resolution and ponder this interesting question - a 15MP bayer camera has only 3.75 million photosites detecting red. So if you put a traditional red filter like B&W photographers like to use, all the other sensors are blacked out and you are now shooting with a 3.75MP camera. Meanwhile a 46MP Foveon sensor with three layers of 15 million photosites detecting red/green/blue (roughly) does not care what filter you put in front of it, every pixel of output will hold data from 15 million different red sensors.
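The photosite arithmetic behind that thought experiment can be laid out explicitly, using the nominal counts from this answer:

```python
# Nominal counts for a 15MP RGGB Bayer sensor vs. a three-layer Foveon.
bayer_total = 15_000_000
bayer_red   = bayer_total // 4   # 1/4 of photosites sit under red filters
bayer_green = bayer_total // 2   # 1/2 under green
bayer_blue  = bayer_total // 4   # 1/4 under blue

foveon_per_layer = 15_000_000    # every layer spans the full frame

# Behind a deep-red filter, only red-sensitive photosites record anything:
print(bayer_red)         # 3750000 photosites still active on the Bayer sensor
print(foveon_per_layer)  # 15000000 locations still recording red on the Foveon
```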

That might seem an arbitrary case, but what about tone shifts in something like a red car - or a blue sky?

For those REALLY wondering where Foveon is going at a technical level, read the latest patent from Foveon basically covering the fundamentals of what is probably the SD-1 sensor:

http://www.freepatentsonline.com/y2010/0155576.html

One last thing of note is that some form of the Foveon technology, even if not exactly the Foveon design, does seem to be the future of imaging - patents have started to arrive from Sony and other companies also looking at ways to layer sensors.

  • See comments on my answer. The linked-to patent covers a scheme for linking multiple "pixel sensors" so they can be read in groups, reducing the need for wiring. The need for more wiring in a smaller space is a natural problem when you stack the sensors on top of each other, so this is a solution for that. It does not, unfortunately, provide a further description of the fundamentals of the SD-1 sensor. – mattdm Apr 11 '11 at 03:45
  • @Kendall: I think you seriously need to reconsider the statement "a resolution that exceeds pretty much any consumer DSLR today". The 46mp spec of the SD1 is NOT the same in terms of image RESOLUTION as many DSLR's on the market today. Resolution refers to detail resolvability, and Sigma's misleading use of MP in their sensor leads people to make the very grave mistake you just have. The SD1 resolves 3200 lines, while the Canon 5D II resolves 3744 and the Sony A900 resolves 4032. – jrista Apr 11 '11 at 05:08
  • 1
    Resolution and MP need to be treated distinctly when talking about the SD-1 since Sigma counts all three LAYERS of sensels at each photosite to arrive at the number 46mp. Your upsampling comment is also very subjective, and not based on all the facts. The 15mp image produced by a Foveon sensor will exhibit lower moire, particularly color moire, but it most certainly has not RESOLVED greater detail. Simply put, 3200 lines of resolution is 3200 lines of resolution, and 4032 lines of resolution is 4032 lines of resolution...the latter has more detail. Upsampling never improves resolvability. – jrista Apr 11 '11 at 05:12
  • It should also be noted that human perception is most sensitive to green, less sensitive to red, and least sensitive to blue. The fact that there are half as many red/blue sensing pixels in a bayer design needs to be weighted with the simple facts of human perception. It also needs to be noted that the deficiencies of bayer interpolation used to create images is only really a problem when photographing objects of high spatial frequency that exhibit moire, and at all other times, the resulting image is plenty sufficient for the vast majority of photographs. – jrista Apr 11 '11 at 05:42
  • 2
    Finally, it should also be noted that with modern Canon cameras, the use of sRAW and mRAW can produce lower resolution images that make full use of all four bayer pixels for each image pixel. No interpolation occurs when using sRAW/mRAW, however image resolution is lower (closer to Foveon image sizes). Bayer interpolation is only used when using full RAW. I think this is a great testament to bayer's versatility, and a good indication of why Canon has not yet moved to Foveon. – jrista Apr 11 '11 at 05:45
The lower resolution versions are still inferior because they have discarded color data at every single spatial location. In a sharp transition from red to blue, for example, the colors could be off along the edge simply because some of the THREE bayer pixels were not capturing data. Furthermore, no matter how you size it, a Bayer imager has 1/4 the number of red and blue input pixels as the MP rating, so in such a reduced image you would wind up with somewhat lower resolution than a Foveon imager capturing the same amount of detail. – Kendall Helmstetter Gelner Apr 11 '11 at 16:26
  • @jrista - the whole notion that humans are "more sensitive to green" is only about detail, and ignores the fact that color photos contain, well, color. Optimizing for green means that details will appear sharper, but you have lost color accuracy across the image as a result. In real photography the accuracy of color is just as important as giving user lines they can see clearly. Also the SD-1 imager does not produce "lower color moire", it produces zero color moire since that is wholly an artifact of bayer sensor design. – Kendall Helmstetter Gelner Apr 11 '11 at 16:29
  • @jrista: 3200 lines of resolution assumes all sensors have data. In the case of using a red filter with a modern bayer camera, you do not have 3200 lines of resolution, you have 800. That is an inescapable fact of physics when you place filters over you sensor that across the whole image plane discard 2/3 of the color data presented to the imager. In real world images that means some areas of an image with primary colors do not get as much detail recorded as other areas of the scene. Otherwise you would see constant detail from shooting any color resolution chart, which you do not. – Kendall Helmstetter Gelner Apr 11 '11 at 16:32
  • @jrista: In an upsampling process, you take data from multiple surrounding pixels to create a new pixel. Sound familiar? Because when processing bayer data, you take data from multiple surrounding image sensors to create a "full" pixel. A 15MP bayer imager has only 3.75 million red sensors. The output image at 15 million pixels has 15 million pixels with red channel color data. Wave your hands all you like, in the end up have upsampled data in every color channel of the output image. That is simply how the bayer process works. – Kendall Helmstetter Gelner Apr 11 '11 at 16:35
@Kendall: I'd be happy to continue the debate, but you need to get some of your facts straight first. Please read up on standard bayer interpolation. You keep mentioning that three bayer pixels are used when interpolating, however outside of custom algorithms used by fringe applications like DeepSkyStacker, each RGB output pixel is produced by interpolating four pixels, an RGBG quad. Please see: http://photo.stackexchange.com/questions/9738/why-are-effective-pixels-greater-than-the-actual-resolution/9745#9745 – jrista Apr 11 '11 at 19:44
Regarding sRAW and mRAW, I am not sure if that is what you are referring to...however they most certainly do not discard any information. The input for each pixel is a 2x2 RGBG bayer quad. Luminance information (Y) is extracted from all four pixels, each one with specific weighting depending on the sensel color. Color information for a SINGLE output CbCr pair is produced from all four sensels, so 2 green and 1 red/1 blue. Each quad is used to produce a single output pixel, rather than up to four like normal RAW processing...so no interpolation of any kind is occurring. – jrista Apr 11 '11 at 19:48
  • The sRAW and mRAW formats output a 14-bit YCC format, or YCbCr, which is a luminance/chrominance format. It is a high precision, full accuracy, but lower resolution image format that contains just as much color information per pixel as a Foveon. If one wanted to be precise, it contains twice as much green color and extra luminance information per output pixel as a Foveon. – jrista Apr 11 '11 at 19:50
  • The three I referred to is the minimum amount of data required to form one output pixel - I know some simpler algorithms use four, some use more. In the end it simply reinforces my point that output data is upsampled. – Kendall Helmstetter Gelner Apr 11 '11 at 20:16
  • @jrista: You cannot have as much color information from 3.75 million points of data as from 4.75 million (in both red and blue channels this would be the case). It simply is not physically possible. I agree you have more luminance data, but then past that point the sensor is like a two-year old that cannot quite color in the lines right. As stated when you optimize for luminance you then end up sacrificing color accuracy. You can get very close thanks to some clever algorithms but when you are discarding 2/3 of the scene wavelengths across the whole image, eventually you get color wrong. – Kendall Helmstetter Gelner Apr 11 '11 at 20:18
  • @Kendall: I think the simple fact that there is a TREMENDOUS volume of billions of truly fantastic, brilliantly colored photos, produced with Bayer sensors, quite simply refutes that statement. You can't deny the quality of photographs that come out of cameras with bayer sensors. I've done a lot of comparisons between Foveon and Bayer images, since Foveon technology is very intriguing and does have its merits. While they do seem to have a more brilliant blue, the differences beyond that are minimal at best, and your statements certainly don't seem to be backed up by visual fact. – jrista Apr 11 '11 at 20:49
And to head off the inevitable retort that people who use bayer sensors have to saturate to get full color detail, I send you to one of my own photos which had a single activity performed in post processing: slight black level clipping. The colors in this photo are REAL, RAW straight out of my bayer camera: http://jon-rista.deviantart.com/art/Fiery-February-201693211. I think the blues and reds speak for themselves. – jrista Apr 11 '11 at 20:55
  • Bayer converters often saturate somewhat heavily as part of conversion - in fact I've never seen anyone claim bayer sensors desaturate, my complaint is that lots of bayer cameras end up with colors I consider overly saturated. That image does look fairly realistic. My argument is all about color accuracy and especially color in fine detail, not saturation or color in broad strokes. The same argument (huge volume of photos with excellent color) also applies to the Foveon sensor, only it also lacks issues with color in fine details. – Kendall Helmstetter Gelner Apr 12 '11 at 18:33
  • As a pixel peeper and print fanatic, I regularly examine my printed photos under a loupe. There are never any problems with fine color grades, color definition, proper saturation (over or under), etc. The issues with bayer interpolation are at a sub-pixel level, and their visual impact is approximately 1.5 pixels in size when viewing a RAW image at 100%. Even then, bayer interpolation algorithms these days are very sophisticated, and are able to dynamically weight red, green, and blue for each pixel generated from a bayer quad. – jrista Apr 13 '11 at 02:55
  • It may be time for someone to do a more apples to apples, pixel by pixel comparison of images produced by both Bayer (RAW & s/mRAW) and Foveon cameras, and provide an objective comparison. Despite my arguing here, I don't have any particular affiliation with Bayer, and I love that Foveon exists, is pushing the envelope, and providing some competition to the big sensor makers. If I could get my hands on a Sigma with a Foveon, I would gladly do a raw comparison between it and some bayer cameras. I'd freely make all the materials available to everyone, for their own edification and verification. – jrista Apr 13 '11 at 02:58
  • Certain things that bug me about existing Foveon-vs-Bayer reviews is that visual comparisons are largely subjective, and less objective. Comparisons are made without strict guidelines and rules about how the comparison should be made, images of different resolutions are compared, original source files are rarely available (or maybe never...can't think of any instance were original RAW's were made available), etc. I think its time for that to change. There are certainly areas where Foveon will excel (moire), and areas where Bayer will excel (resolution), but it would be nice to know the truth. – jrista Apr 13 '11 at 03:01
  • Actually with the SD-1 the Foveon chip will excel in both resolution and moire handling. Even the highest resolution DSLR's will only about equal the SD-1 in terms of real resolution captured. – Kendall Helmstetter Gelner Apr 13 '11 at 05:11
It's that kind of statement that needs some solid, unbiased information to either back it up or refute it. ;P – jrista Apr 13 '11 at 06:33
We already know from experience with current imagers that a 4.7MP Foveon imager has the same level of detail in an image as a 10-14 MP bayer imager (the link I posted comparing image output from a Canon 5D and print output from a Kodak 14n to an SD-14). We also know the SD-1 sensor has 15 million photosites in three layers, so a lower-end estimate is that it has the same level of detail as a 30MP bayer imager. So then to judge the statement "it will excel in resolution" you have only to look at how many bayer sensor DSLR cameras there are with more than 30MP of resolution. – Kendall Helmstetter Gelner Apr 13 '11 at 15:30
3

There are two issues which have been problematic for Foveon sensors other than the problem of spatial resolution. These are both inherent to Foveon's key concept: using the spectral absorption of different depths of silicon to separate colors.

With a Bayer array, the different filters are created with dyes carefully selected to match the chosen red, green, and blue primaries. With Foveon, the distinction is entirely based on the physics of silicon, which isn't as neat a match as the marketing materials typically show. This results in the two problems.

First, the three primary colors recorded by Foveon sensors are further from the primary wavelengths that the human eye's cone cells respond to, and in fact the shape of the wavelength curve to which each depth responds is very different from that of our vision. That means the native color space of the device is a different, shifted shape from sRGB and other typical output color spaces — or from human vision. The sensor records "imaginary colors" — ones we can't really see — in some part of its color range, and other parts of the color range aren't covered perfectly. This doesn't show up as missing colors, but as a sort of color-blindness (the analogy there is actually quite good, since it's effectively the same problem), where colors which should be distinct are represented similarly.
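One way to see why a "shifted" native color space hurts: raw channel responses must be mapped into an output space like sRGB through a color-correction matrix, and the more the native channels overlap, the larger the off-diagonal terms and the more the correction amplifies noise. A toy sketch with made-up matrices (the numbers are illustrative, not measured Foveon or Bayer data):

```python
import math

def noise_gain(row):
    """Std-dev amplification of independent unit-variance channel noise
    for one output channel (root sum of squared matrix coefficients)."""
    return math.sqrt(sum(c * c for c in row))

# Nearly separated native channels: matrix close to identity.
matrix_clean = [[1.0, 0.05, 0.0],
                [0.05, 1.0, 0.05],
                [0.0, 0.05, 1.0]]

# Heavily overlapping native channels: big off-diagonal correction terms.
matrix_overlap = [[ 1.8, -0.9,  0.1],
                  [-0.7,  2.0, -0.3],
                  [ 0.2, -1.0,  1.8]]

for m in (matrix_clean, matrix_overlap):
    print([round(noise_gain(row), 2) for row in m])
# The overlapping case roughly doubles the noise in every output channel.
```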

Second, lower-frequency red light is absorbed at the deepest level, which unavoidably results in some attenuation — which means more noise in the red channel. As I understand it, noise reduction in Sigma cameras deals with this by blurring the red channel more strongly. I know that my Bayer-sensor camera exhibits, by a wide margin, more noise in the blue channel. I'm not sure if that's an inherent problem with Bayer or CMOS sensors, or if it's a doubled problem on Foveon. (I made that its own question.)
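A toy signal-to-noise calculation shows why absorption losses at depth translate directly into a noisier channel. The attenuation fraction and read-noise figure here are made-up numbers for illustration, not measured Foveon characteristics:

```python
import numpy as np

signal = 100.0      # photons that "should" reach the layer (assumed)
read_noise = 2.0    # electrons of read noise, same for every layer (assumed)

def snr(fraction):
    """Signal-to-noise ratio when only `fraction` of the light is captured."""
    captured = signal * fraction
    shot_noise = np.sqrt(captured)              # photon shot noise
    total_noise = np.hypot(shot_noise, read_noise)  # add noise in quadrature
    return captured / total_noise

print(snr(1.0))   # unattenuated channel: ~9.8
print(snr(0.25))  # channel capturing only a quarter of the light: ~4.6
```

Amplifying the attenuated channel back to the same brightness amplifies its noise along with it, which is why that channel then needs heavier noise reduction (blurring).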

None of this is to say that the widespread Bayer technology is perfect, or even absolutely better than Foveon. It's just that everything has its compromises, and Foveon actually turns out to have some tough ones. The big issues with Bayer (aliasing, color resolution) can be solved by throwing more pixels at the problem, given corresponding increases in noise handling. This has worked out very successfully so far, and of course it's no accident that it corresponds well to megapixel-based marketing.

Update (May 2011): Sigma has just announced the new "SD1" model, priced at around $9,700 — comparable in cost to something like the Pentax 645D medium-format camera, but with an APS-C sized sensor. It'll be interesting to see if they have, indeed, been able to address some of these issues. My speculation is that they probably have, but at the sort of cost that led them to change the target market. But even then, I'm not so sure — the maximum ISO is still 6400, which is two stops behind the current crop of Bayer sensors. (Remains to be seen, of course, if they simply decided on a more conservative limit. Without staring too much harder at the crystal ball, there's no way to tell; I'll update this again when the reviews are in, and if I'm very lucky after I get a chance to play with the camera — unfortunately unlikely at that price!)

Disclaimer: I don't have a Foveon-sensor camera (although I've used one, and it was cool!). I don't follow the technology very closely. Sigma is putting a lot of research into working around or solving these problems.

mattdm
  • 143,140
  • 52
  • 417
  • 741
  • All of what you say seems to be addressed in the latest sensor design if you look at the patent. In real world shooting I have found the color data to be more accurate, sometimes a lot more accurate, on average than other people I have shot the same subjects with in a group. As for resolution, bayer has been able to keep ahead with higher resolution counts but with the SD-1 sensor the bayer sensors are not at all ahead in resolution anymore. – Kendall Helmstetter Gelner Apr 10 '11 at 23:39
  • Can you summarize the improvements? Are they basically work-arounds or is it something more clever than that? – mattdm Apr 10 '11 at 23:42
  • If you read through the patent link I posted in my response it may help. But one of them seems to be slightly different pairs (perhaps more than pairs) of blue sensors per underlying red/green photosite, that does a better job of separating out the wavelengths and possibly moving the range covered to better match the visible spectrum. Also the design supposedly reduces read noise considerably, and we have read in interviews from Sigma that the "native" ISO is now 200, where it used to be 100. – Kendall Helmstetter Gelner Apr 10 '11 at 23:54
  • Hmmm. Patents are mind-numbing to read, since they're legal documents, but on quick skim, the one you link to seems to be concerned with a more efficient means of wiring the sensor to reduce read noise, not the issues I describe. – mattdm Apr 11 '11 at 00:03
  • The extra blue sensors totally change everything you were talking about. Remember that today the Foveon sensors as they are already do an excellent job rendering colors in real-world use. – Kendall Helmstetter Gelner Apr 11 '11 at 03:08
  • I'm not getting that from reading the patent. It's simply talking about how multiple blue sensors are wired together for reads. At the beginning, the Brief Description says "The present invention provides two different ways to reduce the number of required wires through the pixel array." And as one reads the whole thing, it's about wires and couplings. This is good, in that reducing noise is good, but it really doesn't appear to relate to any of the above. – mattdm Apr 11 '11 at 03:16
  • And that said, I don't doubt that the technology is improving. I do think, though, that these issues are key to the trouble the technology has had getting traction in the marketplace. – mattdm Apr 11 '11 at 03:20
  • One hint of it is here "Each pixel sensor includes six photodiodes identified in FIG. 1 at reference numerals 12-1 through 12-6, 14-1 through 14-6, 16-1 through 16-6, and 18-1 through 18-6." Altering the graph of color ranges captured changes the issue you are discussing. Also I find the use of the term "imaginary colors" in your original response suspect. – Kendall Helmstetter Gelner Apr 11 '11 at 03:25
  • Let's move this to http://chat.stackexchange.com/rooms/367/photography-tech-chat, yeah? And then come back with a summary-of-findings. – mattdm Apr 11 '11 at 03:30
  • "imaginary colors", by the way, is a technical term, not meant to be derogatory. – mattdm Apr 11 '11 at 03:32
  • So, it seems that Kendall is, unfortunately, reading more into this particular patent than is there. There's nothing about additional blue sensors, or additional types of sensors. That doesn't mean that Sigma isn't working to improve performance surrounding these issues as well, though. I'm sure they are. – mattdm Apr 11 '11 at 12:45
  • It seems that other people are unfortunately not thinking through the implications of multiple blue sensors which have no other point than to add different wavelength detections, since it relates not at all to reducing wiring. And again, the problem mentioned with wavelengths being captured not matching human vision relate to earlier sensors, we have no such chart for the new sensor. And lastly, AGAIN I point out that in real life there were no ACTUAL problems that arose from the theoretical problem mentioned; in the real world colors came out more consistently accurate than other cameras. – Kendall Helmstetter Gelner Apr 11 '11 at 16:23
  • @Kendall, the patent just doesn't describe multiple new sensors, blue or otherwise. The invention is ways of coupling multiple sensors together, reducing wiring. So there's no implications to think through. – mattdm Apr 11 '11 at 16:31
  • As for the real world — I've definitely seen beautiful images from Foveon. It's cool and interesting technology. There's also been problems with noise, and with color shifts (particularly at higher ISO). These are related to the issues I describe, and I think they're important in the "what happened to Foveon" question. As the technology improves, I'm sure there will be more and better workarounds, reducing any real-world impact further; but, just as the Bayer array has intrinsic problems to solve, so does Foveon. (There is never a silver bullet.) – mattdm Apr 11 '11 at 16:38
  • A final challenge. Since you claim a theoretical disadvantage to the Foveon sensor, there should be real-world implications. What is the real-world problem that arises with existing Foveon cameras due to the frequency response you mentioned? Can you provide any example image, or even scenario that would show a problem in the resulting image? – Kendall Helmstetter Gelner Apr 11 '11 at 16:42
  • @mattdm: I have not seen color shifts in higher ISO images in the cameras I use, I have seen some reduction in saturation. Noise is a wholly different issue unrelated to frequency response, it's true the Foveon sensors have not handled noise as well as other cameras in the past, though the latest round of sensors can shoot in color to ISO 800 pretty well, and in B&W up to 3200. – Kendall Helmstetter Gelner Apr 11 '11 at 16:45
  • @matt: "Each pixel sensor includes six photodiodes identified in FIG. 1 at reference numerals 12-1 through 12-6, 14-1 through 14-6, 16-1 through 16-6, and 18-1 through 18-6" describes multiple blue sensors per underlying red/green layer. The old system has only one blue photodiode, just like the red/green layers still do. – Kendall Helmstetter Gelner Apr 11 '11 at 19:57
  • Okay, I'm sorry; reading a different section than the one you're pointing to (Fig 3, rather than Fig 1) does indeed seem to imply four blue photodiodes for every one red and green. There's a lot of discussion of this here http://forums.dpreview.com/forums/read.asp?forum=1027&message=37967881, and speculation as to why. It remains to be seen, I guess. However, I still stick to my answer, because even if the blue pixels differ in color (uncertain!), that doesn't address the issue of the red depth or of the primaries for green and red. – mattdm Apr 11 '11 at 20:59
  • These two comments from that long thread seem particularly relevant: http://forums.dpreview.com/forums/read.asp?forum=1027&message=38065327 and http://forums.dpreview.com/forums/read.asp?forum=1027&message=38059468 (thanks @jrista) – mattdm Apr 12 '11 at 13:26
1

The biggest reason "nobody" uses Foveon, I think, has little to do with Foveon and a lot to do with Sigma. Had Canon or Sony bought up the tech instead of Sigma, it would be mainstream by now; the basic idea is a good one. Sigma is a bit-player in this field, too small to do it all by themselves, and Sigma cameras are something of an acquired taste.

Staale S
  • 7,514
  • 24
  • 32
  • 1
    Okay then; why didn't Canon or Nikon jump on it then? I'm sure it was pitched to them; they must have had some problem with it in order to reject it... – Billy ONeal Apr 10 '11 at 16:36
  • This is very true but one part of the core question is why a larger camera manufacturer is not and has not tried using the Foveon technology in a camera. – Kendall Helmstetter Gelner Apr 10 '11 at 23:42
  • I would say the reason is a base of investment. Other sensor manufacturers have an extensive existing base of design, infrastructure, manufacturing, and support for bayer-type sensors. It can cost hundreds of millions to even billions to invest in new CMOS design and manufacturing. Despite Kendall's admirable dedication to Foveon, the differences between the two technologies are not nearly as large as they are often made out to be. Canon and Sony (as Nikon currently uses Sony sensors) have little reason to change yet. – jrista Apr 11 '11 at 05:38
0

The sensor is fine ... or at least it was up to the 46MP Merrill version. With the later Quattro version Sigma has abandoned the "pure" approach of capturing three colours at each location for a compromise, with fewer sensors in the lower layers.

But the sensor is not the problem. Anyone using it knows that it excels at low ISO, but is inferior to Bayer sensors with comparable REAL resolution at high ISO.

The real problem is that Sigma cameras are frustratingly slow and inconvenient to use, especially because of the absurdly slow write times. In the early days of affordable digital cameras we'd have been delighted with the SD1, but once you have got used to the speed of a good DSLR from Nikon or Canon it is hard to go back to waiting two minutes for a burst of 7 shots to write to the card; until that completes you cannot check your exposures or make full use of the camera's controls.

What is more, the camera makers continue to wring more and more performance out of the Bayer technology. It reminds me of the Porsche 911. The engine is in the wrong place, but with enough clever engineering the car can be made to handle as well as many better balanced front or mid-engined machines.

KTR
  • 1
  • 1