11

While feeding my hunger for camera knowledge, I came across the Sigma website and found this 3-layer sensor technology.

Can anyone explain this based on their experience or research?

Does anybody have hands-on experience with the Sigma SD15 or Sigma SD1 DSLR? So far I have only been directed toward, and influenced by, the big brands in this industry.

Imre
Nazrul Muhaimin

3 Answers

11

The Bayer sensor used by the vast majority of cameras is basically a repeating two-by-two grid of photosites with 1 blue, 1 red, and 2 green, covered by what is known as a Bayer filter, named after the Kodak Labs scientist who came up with it. The data from such a sensor must then go through a demosaicing process that combines those single-color data points into pixels, each the result of a 3-color merge. The reason for 2 green sites is that the human eye is reported to be more sensitive to green, so that color is emphasized in the design.
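
To make that concrete, here is a minimal sketch (Python with numpy, using a made-up 4x4 array as a stand-in for real raw data) of an RGGB mosaic split into per-color images and filled in with a deliberately naive neighbor-average demosaic; real cameras use far more sophisticated algorithms:

```python
import numpy as np

# Made-up 4x4 raw data: one reading per photosite, laid out in the repeating
# RGGB (Bayer) pattern:
#   R G R G
#   G B G B
raw = np.arange(16, dtype=float).reshape(4, 4)

# Split the mosaic into per-color images, with gaps (NaN) where that color
# was not sampled.
r = np.full_like(raw, np.nan)
g = np.full_like(raw, np.nan)
b = np.full_like(raw, np.nan)
r[0::2, 0::2] = raw[0::2, 0::2]   # red photosites: even rows, even columns
g[0::2, 1::2] = raw[0::2, 1::2]   # green photosites on the red rows
g[1::2, 0::2] = raw[1::2, 0::2]   # green photosites on the blue rows
b[1::2, 1::2] = raw[1::2, 1::2]   # blue photosites: odd rows, odd columns

def fill_gaps(channel):
    """Naive demosaic: fill each gap with the mean of its sampled 3x3 neighbors."""
    out = channel.copy()
    h, w = channel.shape
    for y in range(h):
        for x in range(w):
            if np.isnan(out[y, x]):
                ys = slice(max(y - 1, 0), y + 2)
                xs = slice(max(x - 1, 0), x + 2)
                out[y, x] = np.nanmean(channel[ys, xs])
    return out

# Every output pixel now has all three colors, two of them interpolated.
rgb = np.dstack([fill_gaps(r), fill_gaps(g), fill_gaps(b)])
print(rgb.shape)   # (4, 4, 3)
```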

The Foveon model, which totally fascinates me, is an approach that follows a more traditional film style. The idea is that the three primary bands of light have different wavelengths and so penetrate the sensor material to different depths, which is the premise of color film. Blue penetrates the least and red the most, so by stacking the layers, the sensor can detect the level of each of the primary colors at every photosite. As a result, the technology eliminates the moire pattern that can result from the demosaicing algorithms associated with a Bayer filter and gives a more accurate result.
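
For contrast, here is a sketch of the stacked approach (again purely illustrative, with random numbers standing in for the layer readings; note that a real Foveon raw file also needs a color matrix applied, because the spectral responses of the layers overlap):

```python
import numpy as np

# Made-up per-layer readings at every photosite of a tiny 4x4 sensor.
# Depth stands in for wavelength: the shallow layer responds mostly to blue,
# the middle to green, and the deepest to red.
h, w = 4, 4
blue_layer  = np.random.rand(h, w)   # shallow layer
green_layer = np.random.rand(h, w)   # middle layer
red_layer   = np.random.rand(h, w)   # deep layer

# Every (x, y) location already has three color samples, so building the
# full-color image is just a stack; no spatial interpolation is needed.
rgb = np.dstack([red_layer, green_layer, blue_layer])
print(rgb.shape)   # (4, 4, 3)
```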

I'm really excited about the Foveon technology and I'm looking forward to seeing where Sigma takes it. They've finally produced an APS-C camera with this sensor, so when the reviews and samples finally hit, I'm going to be looking at them closely. Having said that, I think the camera makers have done a very good job with the Bayer model; it's a proven and well-understood means of image capture, as the often stunning results show. If the Foveon exceeds that, we're in photography nirvana. :)

Anyway, I've linked some relevant Wiki articles on the two, which I think will really help you see the differences.

Joanne C
  • There are some practical problems which keep Foveon from reaching nirvana. Some good comments here: http://theonlinephotographer.typepad.com/the_online_photographer/2010/09/huge-new-foveon.html?cid=6a00df351e888f88340133f46d9bfa970b#comment-6a00df351e888f88340133f46d9bfa970b – mattdm Jan 20 '11 at 02:52
  • @mattdm - The commentary is interesting, but it's worth noting that much will be coming from people that aren't physicists and so may not either fully understand or be aware of some of the more intense aspects of the science. I don't pretend to either, despite having studied physics for a couple of years in university, so I'm really more interested in the real world results Sigma produces from this. – Joanne C Jan 20 '11 at 03:04
  • 2
    In the real world, foveon photos don't really look that much different than bayer photos. Color saturation is similar, perhaps a tad better blues. One of the primary differences is the lack of color moire in Foveon, and another is the relatively low image pixel count (14mp is the largest Foveon, while we are pushing 24mp and beyond with bayer FF, 80mp with MF.) It should be noted that monochrome moire is NOT eliminated on foveon (only color moire)! Any device that has a limited resolution will encounter moire when imaging frequencies beyond its nyquist limit, including a Foveon. – jrista Jan 20 '11 at 03:15
  • 2
    @jrista-- I couldn't disagree with you more about the look and feel of Foveon images. I have a dp2 and a nikon d300, and have produced 13x19 prints with both cameras (using full-chip images from both). First, no one can tell that they are taken at different resolutions, and second, people can definitely tell that they are different cameras. The saturations are different, the detail resolution is different-- the feel is just different. Some people prefer the d300, others the dp2-- my walls have become a bit of a Rorschach test for sensor style. – mmr Jan 20 '11 at 03:23
  • 2
    @jrista - I don't agree. First, I don't think 14mp is "low" on an APS-C sensor, heck Nikon is pushing a 12mp full frame camera and it's getting stunning reviews. Evidence, yet again, that the megapixel count isn't the whole story. Second, the Foveon technology is in infancy compared to the Bayer model and is producing at least as good a result and, in some cases, better. That's darn exciting. Let's not get wedded to a technology here, Sigma may yet produce something better than Kodak has and that's a good thing. – Joanne C Jan 20 '11 at 04:10
  • @mmr: I've seen prints from a DP2, as well as prints from my 450D and a 5DII. From a detail level, the 5DII stomps all over the DP2, despite the fact that it is a bayer. Blue saturation is better from the DP2, but other than that, colors from the DP2 and Canon's are about the same. Granted, better blue saturation is a great thing, but a very slight saturation bump in post can normalize such a difference between any camera. Now, MP for MP, I would say the Foveon definitely has the edge on detail, no question there. My point was that the LARGEST Foveon is 14mp, while FF sensors are 24-28mp now. – jrista Jan 20 '11 at 06:34
  • Joanne C: I don't think I made my point as clear as I wanted...14mp is the largest Foveon sensor, in APS-C format. Outside of some improvement to saturation, FF Bayer sensors like the 5DII's stomp all over a Foveon in the high resolution detail area. That's to say nothing of the astounding quality you can get from a 60-80mp medium format bayer. On a megapixel-to-megapixel comparison, a Foveon 14mp sensor will have clearer detail and better saturation than a 14mp bayer. Saturation is easy to fix in post-processing; however, the detail is still a win. – jrista Jan 20 '11 at 06:37
  • 1
    I was a BIG Foveon fanatic for a long time before I actually bought a camera. I really like the merits of the technology, and I think it has potential...especially if Canon and Nikon can license it. My worry is that it is in Sigma's hands. It has taken them years to announce the 15.3mp APS-C, and the DP2 has barely been able to take off. Sigma doesn't execute well, even if the technology is superb, and that could very well spell the doom of the technology. I would love to see them license the technology, and get a Juggernaut like Canon to release a 21mp Foveon. I'd buy one in a heartbeat. – jrista Jan 20 '11 at 06:46
  • 2
    @jrista Are you talking about 14 million photosites, or 14 million total colour-sensing elements? A Foveon sensor with 14 million photosites would do a hell of a lot better than a Bayer with 14 million photosites, probably better than a 24MP Bayer, and hence is not low res by today's standards. However such a camera (the SD1) has not been released yet. A Foveon sensor with 14 million colour sensels but only 4.5 million photosites (like the SD15) will do worse than a 14mp Bayer. – Matt Grum Jan 20 '11 at 11:47
  • I like the direction Foveon has taken, but I have read in a number of places that there are kinks that still need to be worked out. I am not sure what the kinks are, but mentions of its shortcomings abound. – kacalapy Jan 20 '11 at 14:47
  • 1
    @jrista, now we definitely agree-- Sigma has not been very good in the execution of the chip. It is clear that they lack the UI engineering to make the dp series of cameras into the powerhouses that they should be on paper. I fantasize that they will open their firmware so that those of us with computer science chops can take a hack at it, like CHDK. I also fantasize that their SLR bodies will have Nikon and Canon mounts. While I'm at it, I also want a pony and world peace-- I think that all four desires are on the same likelihood of happening. – mmr Jan 20 '11 at 16:04
  • @Matt: I was referring to the DP2, the 4.6mp sensor that has 14.4 million photosites. Sorry for the confusion. I really hope that they can actually release the 15.3mp SD1. If they do, that would actually make a competitive product against things like the 5DII's 21.1mp sensor and Sony's 24mp sensor. The real question, though...is will the SD1 be competitive in other areas as well...or will it suffer from the same poor UI/functionality quality as the other Sigmas? – jrista Jan 20 '11 at 17:32
  • @mmr: Aye, Sigma doesn't have the UI engineering at all...that's one of the things I don't like about the DP2, it's rather clunky and very limited. I don't think I would ever fantasize about their SLR bodies having a Nikon or Canon mount...I just want them to start licensing the Foveon technology to other manufacturers. They would still make bank, the technology would probably take off like wildfire, and people like us could get one in our preferred brand/format. Imagine even an MF sensor using Foveon...oh, the possibilities. Sadly, I really think Foveon will die a slow death w/ Sigma. :'( – jrista Jan 20 '11 at 17:36
  • @jrista at the risk of pouring on more confusion I would say the DP2 has only 4.6 million photosites but 14 million colour sensing elements. What it doesn't have however is 14 million pixels, producing as it does a 2640 x 1760 pixel image. – Matt Grum Jan 20 '11 at 18:03
  • @matt: A Bayer 14MP camera does not have 14 million pixels either, since any given pixel was created from only 1/3 of the original color spectrum at that physical location. By your terminology you would have to say a 14 MP bayer camera has 7 million photosites, because that is the largest count for any of the color channels, and the other "color sensing elements" are just informing the green image as to color. In reality, though, "photosites" are in fact what you refer to as "color sensing elements". – Kendall Helmstetter Gelner Jan 20 '11 at 19:18
  • 1
    @Kendall Helmstetter Gelner regardless of how much of the colour spectrum is recorded at a physical location there is still a pixel, sensel, photosite or whatever you want to call it there. So the 14MP does make sense; incidentally the raw images before interpolation have 14 million pixels in them. Raw images from the Sigma have 4.6 million pixels in them, and although there are three values per pixel, that doesn't make it three times the pixels! If I take a regular image and add an alpha channel there are now 4 values per pixel, but you wouldn't say the number of pixels has increased. – Matt Grum Jan 20 '11 at 20:44
  • 1
    Wow, who knew that one small answer would generate so much interesting commentary. :) I semi-agree with jrista around the future of Foveon in the hands of Sigma, but you never know, giants can fall, just ask Pentax... – Joanne C Jan 20 '11 at 20:48
  • @matt: If your definition of a pixel is three recorded values at the same location, the Foveon sensor has 4.6 million of them and the Bayer 14MP sensor has zero, since there are no R G B sensor recordings that "belong" with each other. There are only three separate images which combined form the final color. The closest you can get is the 7.5MP green image which guesses at colors based on the other two images. The problem is you are trying to claim a single color channel in a Bayer image is a "pixel" while then also claiming the Foveon sensor needs three values to make a pixel. – Kendall Helmstetter Gelner Jan 20 '11 at 21:43
  • @Matt Grum: "I would say the DP2 only has only 4.6 million photosites but 14 million colour sensing elements". I would agree there, my use of the term photosite is based on Bayer, and on Foveon it would mean what you describe. The DP2 is a sensor with 4.6 million photosites, comprised of 3 sensels each. That would be 14.4 million color sensing elements total. Now that we have that cleared up... – jrista Jan 20 '11 at 22:12
  • It looks like we've gone off on a tangent about the definition of terms. Let's try to normalize a bit here, so we can have a meaningful discussion. ;) So, I propose: Sensel = single color sensing element, Photosite = single photo-sensitive site on a sensor that may sense one color (Bayer) or three colors (Foveon), Pixel = standard picture element comprised of three channel elements: Red, Green, Blue. Given these definitions: Sigma DP2 => 4.6megaPIXEL sensor of 4.6megaPHOTOSITES comprised of 14.4megaSENSEL. Bayer 14mp => 14.4megaPIXEL sensor, 14.4megaPHOTOSITES, 14.4megaSENSEL. – jrista Jan 20 '11 at 22:20
  • In a Bayer sensor, pixels are demosaiced in an 'overlapping' fashion, so every possible combination of 2x2 sets of RGBG are processed to generate pixels. In other words, an image pixel is defined by the intersection between each SENSEL of a 2x2 cell of RGBG SENSELS. For a visual example of this: http://www.cambridgeincolour.com/tutorials/camera-sensors.htm (BAYER DEMOSAICING section.) – jrista Jan 20 '11 at 22:33
  • @Kendall "If your definition of a pixel is three recorded values at the same location, the Foveon sensor has 4.6 million of them and the Bayer 14MP sensor has zero" no, my definition of a pixel is any number of recorded values at the same location; it's the location that is key. With this definition the DP2 has 4.6 million and the 5D mkII 21 million. That's not to say anything at all about the relative image quality or sharpness of the two cameras, I'm just defining terms in an intuitive manner. – Matt Grum Jan 20 '11 at 23:52
  • @matt: It doesn't make any sense to refer to a pixel as "any number of recorded values at a physical location" because then the number tells you nothing. You cannot then say if the pixel is usable by itself or if it requires input from other pixels. It also greatly confuses things for people because the output image is referred to in "pixel" dimensions also, and that does have a hard and fast rule that a pixel has three color channels. Something is plainly wrong with a definition where 21 million color separated "pixels" magically acquire two more color channels in output. – Kendall Helmstetter Gelner Jan 21 '11 at 05:05
  • @jrista: Since other people use the term "sensel" I can live with that, and with your definition of "photosite" although it still seems wrong to use a term that refers to an arbitrary number of sensors. But I can't argue that your resulting description of the Bayer and Foveon cameras respectively is pretty clear. – Kendall Helmstetter Gelner Jan 21 '11 at 05:07
  • @Kendall I agree that the number of pixels alone doesn't tell the whole story, but that doesn't change the definition of a pixel. It's a picture element, a small square that makes up part of an image. There's a company in America that will remove the Bayer colour filter array from your camera. Currently they only do the 450D. But if I had one converted I wouldn't suddenly start calling it a 4 megapixel camera. It's a 12 megapixel black and white camera. – Matt Grum Jan 21 '11 at 09:30
  • @Kendall: "Something is plainly wrong with a definition where 21 million color separated "pixels" magically acquire two more color channels in output." That is why we need to normalize terms. A pixel is part of an image, not a physical device like a sensor. A pixel is a "PIcture ELement", part of an image. Megapixels generally refers to the IMAGE SIZE produced by a sensor. How you arrive at those pixels can vary. Just because a Bayer sensor interpolates its data doesn't inherently make it "wrong". The demosaicing algorithms today are very advanced, maximizing the info gathered. – jrista Jan 21 '11 at 17:21
  • Regarding the 21.1 million photosites (each providing a single sensel in Bayer): they do not directly map 1:1 to single pixels. There is overlap, and some "extra" photosites. Demosaicing algorithms also weight the color and luminance information to produce an accurate result. Color and luminance wise, I think there is minimal difference between Foveon and Bayer. A real difference would be color moire, which can indeed affect the quality of output and maximum resolution of a Bayer sensor. On the flip side, without an AA filter, Foveon suffers from more pronounced monochrome moire. Pros and cons both! – jrista Jan 21 '11 at 17:25
  • Here is a little question for you all: Is the human retina more like a bayer array, or foveon stack? Is any one design particularly better than the other? Or are there simply different trade-offs? – jrista Jan 21 '11 at 17:27
  • @jrista: I already agreed with your definition, talk to Matt about his definition for a Pixel which is at odds with yours. Yes how you can arrive at those pixels varies but you simply cannot use less than three to make a complete output pixel. @matt: It's always a 12 MP sensor because the accepted definition of "MP" in describing a camera is really giving you the sensel count. After converting to B&W you are getting a camera that is actually recording as many values as it is outputting, instead of fewer. – Kendall Helmstetter Gelner Jan 21 '11 at 22:35
  • I don't think jrista and I are in disagreement (for a change ;) about the definition of a pixel, I'm just taking the definition of a picture element and extending it to the sensor (something the camera manufacturers did a while ago when coining the term "megapixel"), and just as you can have images with one or more components per pixel, you can also have sensors with one or more light sensitive elements per pixel. Just because the sensel count matches the pixel count in a Bayer camera doesn't mean a camera with 14 million sensels also has 14 million pixels... – Matt Grum Jan 21 '11 at 23:36
  • @jrista Is any one design particularly better than the other? I think the Foveon design is clearly better (in anything but very low light) however I don't think it's three times better (the consensus seems to be it's more like 2x) – Matt Grum Jan 21 '11 at 23:38
  • I think neither design is clearly better than the other. There are obvious benefits to Foveon, however there are still benefits to Bayer that Foveon currently can't touch, and may never exceed (as both will progress.) An easy one is how far Bayer has taken us in terms of size and resolution...a prime example of which is the 80mp Leaf Aptus II medium format sensor. I also don't think that Bayer is a particularly flawed design, given that our retinas are basically designed the same way (perhaps more like Fuji's SuperCCD SR between our rods and cones). – jrista Jan 22 '11 at 00:29
  • I should have stated I was talking about the pure design being better from a photographic point of view; if you compare the SD15 to a 4.5MP Bayer sensor the difference is clear. The Foveon is however more difficult to manufacture, which you could say is a fault in the design... – Matt Grum Jan 22 '11 at 09:21
7

I have been shooting Sigma DSLRs for a number of years, since the SD-9. I got into the system when I was moving out of film SLRs into digital and did a lot of research before I made the leap. I too came across the Foveon chip, and its design struck me as much more sound than the Bayer design on a conceptual level; plus I really liked the images I saw coming from the camera.

The way to think about the difference here is that a traditional Bayer sensor is really taking three separate photos - one green, one red, one blue. For a 14MP Bayer sensor the green photo has 7 million pixels, while the red and blue images have 3.5 million pixels of data each. None of that data spatially overlaps; that is to say, if an object were just one pixel high as captured by the sensor, it could vanish in any one of the images depending on its color. At any given spatial location 2/3 of the color data is discarded. So while the output you get from a 14MP camera might have 14 million pixels in it, it's essentially a re-sampled and upsized version of the image with the greatest detail - the 7MP green image.
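
A quick back-of-the-envelope check of those per-channel counts (plain arithmetic, nothing camera-specific):

```python
# Per-channel sample counts for a nominal "14MP" Bayer sensor (RGGB tiling).
total_photosites = 14_000_000
green = total_photosites // 2    # 2 of every 4 photosites are green
red   = total_photosites // 4
blue  = total_photosites // 4
print(green, red, blue)          # 7000000 3500000 3500000
# The demosaiced output still has ~14 million pixels, but at each pixel the
# two missing channels were interpolated from neighboring photosites.
```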

On the Foveon side, there is nowhere a color in the image can "hide", because at any given sensing location the full spectrum of light is captured by the three layers of sensors, so there is not as great a need for input from neighbors to resolve what the sensor saw.

The end effect is that Foveon sensors will not be fooled into thinking fine detail is really some kind of color (color moire), and the level of detail captured is constant because no fine detail is accidentally discarded. Because the Bayer sensor discards 2/3 of the light at any point, it can sometimes drop fine detail that the Foveon chip will resolve - again, it depends on scene color.
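
Here is a tiny sketch of that color-dependent detail loss (a contrived scene with idealized color filters, using the same RGGB layout as the earlier sketch, so it overstates what a real camera would show): a one-pixel-wide pure red line that happens to fall on a column containing no red photosites leaves no trace in the raw mosaic, whereas a stacked sensor would record it at every site along the line.

```python
import numpy as np

# Contrived scene: black field with a one-pixel-wide pure red vertical line.
h, w = 8, 8
scene_r = np.zeros((h, w)); scene_r[:, 3] = 1.0   # red line at column 3
scene_g = np.zeros((h, w))
scene_b = np.zeros((h, w))

# Sample the scene through an idealized RGGB mosaic (each photosite records
# only its own color; real filters overlap somewhat).
mosaic = np.zeros((h, w))
mosaic[0::2, 0::2] = scene_r[0::2, 0::2]   # red photosites: even rows/columns
mosaic[0::2, 1::2] = scene_g[0::2, 1::2]   # green photosites
mosaic[1::2, 0::2] = scene_g[1::2, 0::2]   # green photosites
mosaic[1::2, 1::2] = scene_b[1::2, 1::2]   # blue photosites

# Column 3 holds only green and blue photosites, none of which saw the red
# line, and the red photosites in columns 2 and 4 saw only black:
print(mosaic.sum())   # 0.0, the line never made it into the raw data
```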

Because the level of detail in a Bayer sensor is variable, it can be very hard to compare with the Foveon chip as far as detail captured - but a rough rule of thumb is that a Foveon image will capture around the same level of detail as a Bayer camera with 2/3 of the Foveon MP rating (or sensor count). So, for example, the upcoming SD1 has 46 million photosites (sensors), which means you can expect similar levels of detail to a 30MP Bayer image. But this is again an image without color moire, and without an AA filter in front of the sensor (when you don't have to worry about color moire you don't need an AA filter).
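
A hedged sketch of that rule of thumb (the 2/3 factor is the rough estimate above, not a measured constant, so treat the output as ballpark only):

```python
def bayer_equivalent_mp(foveon_sensor_count, factor=2 / 3):
    """Rough 'Bayer-equivalent' megapixels for a Foveon sensor, using the
    2/3 rule of thumb described above; factor is an estimate, not a constant."""
    return foveon_sensor_count * factor / 1e6

print(bayer_equivalent_mp(46e6))    # upcoming SD1: ~30.7 MP Bayer-equivalent
print(bayer_equivalent_mp(14.4e6))  # DP2/SD15 generation: ~9.6 MP Bayer-equivalent
```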

You can see some interesting examples comparing the original Canon 5D to the Sigma SD-14 here:

http://www.ddisoftware.com/sd14-5d/

Especially note what happens shooting color targets to get a sense of how detail can vary.

So, all the technical stuff aside, what does the sensor do well? Because it captures the full spectrum at every pixel, at the same level of resolution regardless of color, I think it captures subtle tonal changes really well. That means really nice skies, or anything else with gradual changes in color or tone. As such, these cameras also produce really nice images for B&W conversion, because of the very smooth transitions between tones.

http://www.pbase.com/kgelner/image/90304998

http://www.flickr.com/photos/kigiphoto/5308324073/in/set-72157625711613108/

http://www.pbase.com/kgelner/image/108588990

(full size versions of each of those images can be found at the links).

Where the sensor has had issues is with higher ISO - the current cameras can do ISO 3200 when asked:

http://www.flickr.com/photos/kigiphoto/4684772878/in/set-72157624236424558/

but really ISO 800 is a more realistic limit for most shooting (unless you are shooting for B&W, in which case those images can hold up really well because of the nature of the noise).

The Sigma cameras are not really oriented toward people starting out with photography, because they don't offer a lot of assist modes or things of that nature... so be aware of that if you are thinking of getting into the system. The easiest way to try out the sensor for yourself is the Sigma DP-1 or DP-2; earlier versions of the cameras can be slower to use, but all of them will give you a good taste of the detail and color the images capture.

Note that I am obviously not an unbiased source, since I have enjoyed using the cameras for a long time. So the other thing to do, even before getting a camera, is to explore images from the sensor in more detail. I provided some above and you can explore my sites, as I generally only shoot Sigma cameras, but you can also find a ton of example images from all of the various cameras Sigma has produced here (again with full size images to be found):

http://www.pbase.com/sigmadslr

Also you can find a ton of great info at Carl Rytterfalk's blog:

http://www.rytterfalk.com/

Somewhere in there he has sample RAW packs you can download, and various things talking about Sigma cameras, lenses, and the Foveon sensor. He's a great photographer and very enthusiastic as you'll see if you watch any of his videos.

EDIT: Carl has just written a lengthy post, "Why I use Sigma", which directly applies to this question:

http://www.rytterfalk.com/2011/01/20/why-i-choose-sigma/

In summary, his reasons are:

  1. Nuances (in color)
  2. Density
  3. Micro contrast
  4. True sharpness
  5. Dynamic Range

He goes into these in more detail at the link, along with some more images.

One side note I forgot to mention, which is not really about the sensor itself but about the Sigma DSLRs that house the Foveon chip: you can easily use them for IR work as well, just by removing the dust protector on the camera (it's built to be user removable and reinstalls without any tools).

5

I have lots of praise for Sigma for trying something different and innovative, and on paper the Foveon sensor is a very good idea. However, I disagree with the way Sigma refer to their current model, with 4.6 million photosites (each of which is sensitive to colour as well as intensity), as having a 14 megapixel sensor!

Multiplying the number of photosites by three to get the Bayer equivalent would be OK if the colour channels were uncorrelated with each other. However, in real scenes the colour channels vary from mildly correlated to strongly correlated. Take the following example:

You have a 5MP Foveon sensor and a 15MP Bayer sensor. Each sensor has 5 million red pixels, 5 million green pixels and 5 million blue pixels. You are photographing a grey cat sat on a big block of grey concrete. As the light coming from the scene is all grey, the red, green and blue pixels in each sensor all receive the same amount of light. However, in the Foveon sensor you end up with three identical readings on top of each other, which is not very useful, giving only 5 million unique data values. In the Bayer sensor they are displaced laterally, giving a potential 15 million unique values. The Bayer image would not even need demosaicing, so would contain a lot more detail.
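
A small sketch of that counting argument (entirely synthetic data; the 15 million and 5 million figures just mirror the example above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic neutral-grey scene: only luminance varies, the colour channels
# are identical everywhere (the "grey cat on grey concrete" case).
scene = rng.random((2500, 6000))          # 15 million scene points

# 15MP Bayer: one reading per point, every reading at a distinct location.
bayer_readings = scene.size               # 15,000,000 independent samples

# 5MP Foveon: a third as many locations, but three stacked readings at each.
foveon_sites = scene[:, ::3]              # 5,000,000 locations
foveon_readings = 3 * foveon_sites.size   # 15,000,000 numbers recorded...
foveon_informative = foveon_sites.size    # ...but on a grey subject the three
                                          # values per site are identical, so
                                          # only 5,000,000 carry information.

print(bayer_readings, foveon_readings, foveon_informative)
```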

This is a very contrived example; however, correlated colour channels do occur quite often, and this is why Bayer interpolation works. When photographing a yellow object, the red reading gives you information about what the green reading would be, even though, unlike the Foveon, there is no green pixel there.

In real-world testing, due to this correlation, the resolution is equivalent to just over 2x the Bayer, not the 3x Sigma claim. This means the current flagship Foveon model with 4.6 million photosites is roughly equivalent to a 10 megapixel Bayer (though they will still have slightly different qualities, a lack of colour moire in the Foveon for example). This leaves Foveon lagging a bit behind the 24MP 35mm DSLRs. The current Foveon also struggles in low light, as light has to penetrate the two layers above in order to reach the final layer.

The Future:

So based on that, my current advice would be to go with a Bayer camera; however, it will be interesting to see what the future holds. After a long hiatus Sigma have announced the SD1 with 15.4 million photosites. There's no release date yet, but if they can pull this off in a decent body it would give the 24MP Nikon D3x a serious run for its money!

On the other side of the coin, Bayer resolutions go up at a steady pace and are backed by simple economics (more people are making Bayer sensors, in larger numbers). As sensor resolution increases without corresponding improvements in lens sharpness, moire and other Bayer artifacts become much less of a problem. Eventually a Bayer sensor with a high enough megapixel count will give you the same effect as the Foveon, but with the pixels side by side rather than on top of each other.

Matt Grum
  • 1
    In the Bayer sensor you have 7.5 million green photosites, and 3.75 million each of red and blue photosites. Your example is correct in that a totally neutral subject will give the maximum amount of data, although even in that example, because there is no overlap between red/green/blue sensors, you would potentially see some color show up in the demosaicing when there was a difference in luminance between the cat and the background. But in reality how many things are grey, and how many things show some degree of color? You are also wrong about the SD1, it has 45 million photosites (distinct sensors). – Kendall Helmstetter Gelner Jan 20 '11 at 19:11
  • Kendall would be correct here. With a 15mp Bayer, you have 7.5 million green, and 3.75 million each of red and blue, rather than an even number of red, green, and blue. That makes sense though, as our eyesight is more sensitive to green as well. I wouldn't necessarily say that Bayer gathering twice as much green info as red/blue info is a detriment in any way. @Kendall: As for the SD1, Matt is correct in that it has 15.4 million PHOTOSITES, or individual photosensitive locations on the sensor. Each PHOTOSITE is capable of sensing three different colors, so the sensor has 46.2 million SENSELS. – jrista Jan 20 '11 at 22:37
  • 1
    I think we've covered megapixels vs. sensels adequately in the other question. With regards to my cat example, I accept that it is a very rare occurrence to have an entirely monotone scene (a point I accept in the answer) but I also go on to say that in most scenes you may not have three colour channels in complete agreement but you are likely to have colour channels that highly correlate with each other. You could have a very garish scene with bright cyan, shocking pink and luminous yellow, and still have two identical readings per Foveon photosite! – Matt Grum Jan 20 '11 at 23:58
  • That's not how photosites work. You have a photosite that records levels at a single location - in the Foveon chip three photosites are stacked, each measuring different values. The values from those three photosites are directly input to a final output pixel. In the Bayer chip, for any output pixel you have only one "base" photosite, in one channel - then you borrow from surrounding photosites to determine color. The effect is that while there are 14 million output pixels, you really are taking smaller images and upsampling them to get the output image. – Kendall Helmstetter Gelner Jan 21 '11 at 04:49
  • @matt: Not sure I understand the example, in each of those colors each of the three photosites at a location would be excited differently. The point is that if you had a shocking pink thread it would not suddenly become blue or purple just because the background was blue and it had to "borrow" heavily from the background to guess at color. – Kendall Helmstetter Gelner Jan 21 '11 at 04:54
  • 2
    It's not fair to say the Bayer sensor demosaicing is effectively upsampling the colour channels; what's going on with algorithms like adaptive homogeneity-directed interpolation is far more sophisticated, and exploits strong statistical correlations between colour channels that occur in real images to do much better than just filling in the gaps. – Matt Grum Jan 21 '11 at 23:45
  • 1
    If you have large areas of different intensities of pure magenta then you will indeed find the red and blue sensels at each pixel recording the same values, as magenta is a mixture of equal parts red and blue. Yes, if you have a thread one pixel wide the Bayer won't be able to see it, but if you have three times the number of pixels in a Bayer sensor it should be able to cover the thread with more than one pixel. Anyway, one-pixel threads sharply resolved by the lens are just as rare as grey cats... – Matt Grum Jan 21 '11 at 23:49
  • My cameras resolve one-pixel-wide threads, hair, grass, branches, and other objects all the time. That's because nature and most objects offer detail that stays similar as it recedes. There's nothing rare at all about single-pixel lines; look at the full size version of the boat image I posted, at the rigging on the boat. – Kendall Helmstetter Gelner Jan 22 '11 at 09:52
  • The boat riggings are about 2 pixels wide, and with a 15mp Bayer sensor you'd expect to cover them with more than one photosite. The Foveon definitely has an advantage with fine colour details, but eventually a Bayer sensor with sufficient megapixel count will overcome that... which is why it will be very interesting if they can pull off the SD1, as it will put Sigma in the lead, at least for a while. – Matt Grum Jan 22 '11 at 10:05