
I mean a device that has two sensor / lens combinations.

I can see several advantages:

  • Capturing photos with a higher dynamic range.

  • Focus bracketing.

  • Capturing multiple photos and merging them to reduce noise (super-resolution), like the excellent iPhone app Cortex Camera, but without needing the user to hold the camera steady for a while and hope nothing moves in the scene (see the sketch after this list).

  • One of the cameras can be made without a color filter array, to improve low-light performance.

  • Two smaller sensors can be thinner than a single bigger one. For example, if you were to increase the sensor size in the iPhone 5s, the camera module would probably not fit within the thickness of the phone, whereas you could easily put two smaller cameras side by side. This also applies to point-and-shoot cameras with larger sensors, which are hard to fit in your pocket; you could again use two sensors in a pocketable camera.

  • Both lenses could be pointed outwards to capture a wider field of view.

  • At larger sensor sizes, it would be cheaper to make two smaller sensors rather than one big sensor. A defect in one of the small sensors means you throw away just that sensor and keep the other, whereas a defect in a bigger sensor means you have to throw away the entire sensor. So you get better yields, and therefore lower prices.

  • If each sensor can capture video at 120FPS, then having two sensors permits video capture at 240FPS.
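
To make the merging bullet concrete, here is a minimal sketch in Python/NumPy, assuming the frames are already perfectly aligned (with two lenses, parallax makes that assumption the hard part, as the answers below point out):

```python
import numpy as np

def merge_burst(frames):
    """Average a burst of aligned frames to reduce noise.

    Shot noise is roughly independent between frames, so averaging
    N frames cuts the noise standard deviation by about sqrt(N).
    `frames` is a sequence of HxWx3 uint8 arrays of the same scene.
    """
    stack = np.stack([f.astype(np.float32) for f in frames])
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)

# Toy demo: a flat gray scene with synthetic sensor noise.
rng = np.random.default_rng(0)
frames = [np.clip(128 + rng.normal(0, 20, (4, 4, 3)), 0, 255).astype(np.uint8)
          for _ in range(8)]
merged = merge_burst(frames)
print(frames[0].std(), merged.std())  # noise drops by roughly sqrt(8)
```

Two sensors firing simultaneously would halve the wall-clock time needed to collect a given number of frames, which is the advantage claimed above.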

I can see many advantages with having two sensors. Why aren't such devices more common?

I don't think it would add significantly to the cost. For example, the iPhone 5s cameras are estimated to cost only $13, and that's for the front- and rear-facing cameras together. Even if it cost $50, that would be a small matter on a $400+ device like a premium compact camera.

Kartick Vaddadi
  • One word: Parallax. http://en.wikipedia.org/wiki/Parallax – Michael C Jan 21 '14 at 05:01
  • iPhones don't have a camera. They have a sort of image-capturing device, but that is not a camera. When talking about the dynamic range and low-light abilities of a sensor, do not talk about iPhones in the same context. – Esa Paulasto Jan 21 '14 at 05:08
  • That's a silly comment, Esa. It's a camera, and an excellent one. – Kartick Vaddadi Jan 21 '14 at 05:19
  • Check out JAI's 2CCD cameras. Some do RGB + NIR and some do 2-exposure HDR. FluxData has 3CCD cameras for up to 9 wavelengths. And PtGrey has cameras with 2-3 sensor/lens sets for stereo imaging. There are other brands, too. So it is pretty common. – Michael Nielsen Jan 21 '14 at 08:37
  • Yes, my NEX can't take timelapse video at 4k, or shoot video at 120fps, or automatically and wirelessly transfer my photos to my laptop / Dropbox, and so forth. The NEX also runs out of battery after a few hundred photos, while the iPhone was able to shoot a timelapse of 2000 photos and then still have 20% battery remaining... when I tap to focus, the NEX adjusts only the focus and not the exposure, with the result that the subject of your shot can be under-exposed and really dark. Oh, and my NEX doesn't fit in my pocket. – Kartick Vaddadi Jan 21 '14 at 09:31
  • Michael, when the two cameras are only a few mm apart, parallax shouldn't be an issue for most photos, should it? It should be a problem only for photos taken of subjects that are extremely close to the camera, right? And can't the software detect those cases and just throw away the image from one of the two cameras? I'm not talking about fancy math to merge both images into one, but just detecting cases where there's a significant difference and throwing away one image. (A rough sketch of this check follows these comments.) – Kartick Vaddadi Jan 21 '14 at 09:43
  • Michael, thanks for the information about the CCDs with multiple cameras. They seem like specialist cameras, while I was wondering about consumer cameras. – Kartick Vaddadi Jan 21 '14 at 09:44
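
Here is a rough sketch of the "detect a big difference and discard one image" check proposed in the comment above. The threshold is an arbitrary illustrative value, and a real pipeline would compare aligned, exposure-matched frames region by region rather than with one global score:

```python
import numpy as np

def merge_or_discard(img_a, img_b, threshold=8.0):
    """Merge two same-size frames, unless they disagree too much.

    Computes the mean absolute difference between the frames; if it
    exceeds `threshold` (an arbitrary value in 8-bit levels), assume
    parallax or subject motion is significant and keep one frame.
    """
    a = img_a.astype(np.float32)
    b = img_b.astype(np.float32)
    if np.abs(a - b).mean() > threshold:
        return img_a  # scenes differ too much: fall back to one camera
    return ((a + b) / 2).astype(np.uint8)  # otherwise average for less noise
```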

4 Answers


While a multi-sensor setup might be nice to dream about, real people (even when genuinely interested, like you) are much more likely to spend their money on an iPhone than on a Fuji FinePix Real 3D. Little market interest in the devices that already exist means less incentive to engineer more of them.

On the cost side, even if adding another sensor cost only $50 (there's a significant difference between a component's factory cost and its cost in the final product), a competitor could spend that $50 on significantly better optics; a double-sensor camera would need $100+ to catch up, since its optics have to be duplicated and aligned with each other. And while the hardware might be cheap, redesigning the camera interface (both hardware and software) for a small market is costly.

Most of the applications listed in the question are impossible to implement straightforwardly due to parallax, so some stitching (with the resulting artifacts) has to happen in software, which is not going to look good in reviews. 3D was hyped in consumer electronics a few years ago, but the inconvenience to users and the rise of mobile tech have pushed it back to "toy technology" status (where it IMHO belongs).
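
For a sense of scale on the parallax point, the disparity between the two images is roughly d = f·B/Z (focal length times baseline, divided by subject distance). A back-of-the-envelope calculation, with assumed rather than measured phone-camera numbers:

```python
# Disparity in pixels: d = f * B / (Z * pixel_pitch)
# All numbers below are assumptions for illustration, not real specs.
f = 0.004        # focal length: 4 mm
B = 0.008        # baseline between the two lenses: 8 mm
pitch = 1.5e-6   # pixel pitch: 1.5 um

for Z in (0.5, 3.0, 30.0):  # subject distance in meters
    d_px = f * B / (Z * pitch)
    print(f"subject at {Z:5.1f} m -> disparity ~ {d_px:6.1f} px")

# At 0.5 m the two images are dozens of pixels apart; even at 30 m
# the offset never reaches zero, which is why naive pixel-level
# merging produces ghosting without software alignment.
```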

To address some of the ideas individually,

  • higher dynamic range, focus bracketing, noise reduction, panoramas - already (with some limitations) available using multiple shots from a single sensor;
  • panorama cameras, high-FPS video devices and enhanced-dynamic-range cameras (Fuji again, with the SuperCCD) have been around, loved by a few enthusiasts, but not wildly popular with the general public;
  • skipping the second sensor (and its optics) entirely is even cheaper than having to throw it away only sometimes;
  • double FPS means double processing power and data throughput.

Also, two cameras consume double the energy.

So, it comes down to

  • nothing groundbreaking
  • with its drawbacks
  • for a significant cost
  • appealing to few enthusiasts.
Imre
  • Thanks, Imre. I've accepted your answer, because I think it answers my questions. Just a couple of clarifications, please: shouldn't parallax be a problem only in some cases (of extremely close subjects, like macro), assuming the two cameras are a few mm apart, as they could be on a phone? Secondly, yes, I agree that people are much more likely to spend their money on an iPhone instead of a Fuji FinePix Real 3D. I would do that, too. However, what if the next iPhone (or Galaxy s5 or whatever), has two cameras? In other words, dual cameras needn't be restricted to specialized cameras, right? – Kartick Vaddadi Jan 21 '14 at 09:50
  • Parallax will be a problem at any distance, because the pixels will rarely if ever line up. The only real use case for 2 cameras is stereoscopic imaging/video, and like many of the other questions you've asked, it comes down to a balance between engineering cost (not material cost), market demand and the effect on other systems (space on the board to implement them, etc.). If and when those change in favour of having 2 main cameras then it might happen (as happened with rear-facing cameras). – James Snell Jan 21 '14 at 11:21
  • The parallax difference should be around a couple of pixels for an object at 3 meters (10 feet). Yes, a dual camera could be added to a popular smartphone. But will the effort be taken just to get ~1 EV of extra low-light performance (the only application appealing to a wider audience)? Sounds unlikely. – Imre Jan 21 '14 at 20:48
  • Understood, and agreed, Imre. But I must point out that I don't agree with the reasoning that none of the applications would be appealing to a wide audience. By that logic, one could claim 120FPS video isn't appealing to a wide audience, but Apple did introduce it in the iPhone 5s. I do think that lower noise, better dynamic range, etc would be broadly useful, not just better low-light performance. In any case, thanks for your explanation. – Kartick Vaddadi Jan 24 '14 at 01:24

The fundamental problem is that if you have two cameras, they're pointing at two different things. That's just geometry: a line perpendicular to a plane intersects the plane at one point, not two, at least in Euclidean space.

Now, granted, the two-camera approach works out pretty well for Homo sapiens, and it gives you a super-realistic 3D effect, but most photographers and videographers prize the clarity of a 2D image more (James Cameron notwithstanding), and such an image is easier to make use of.

And it's not that you couldn't write software to combine two images; the real challenge is writing software that combines them with really high quality under all or most shooting conditions, using the limited processing power available in a mobile device, in a manner that doesn't interrupt your photo shoot with a 10-second processing delay.
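
As an illustration of what such combining software involves, here is a hedged sketch of one standard alignment approach (ORB feature matching plus a RANSAC homography) using OpenCV. A homography models a planar scene, so genuine parallax from two lenses still leaves residual ghosting near close subjects; this is a sketch of the general technique, not anyone's shipping pipeline:

```python
import cv2
import numpy as np

def align_pair(img_a, img_b):
    """Warp img_b onto img_a using ORB features and a homography.

    A homography assumes the scene is (approximately) planar; with
    real parallax from two lenses, nearby objects will still ghost.
    Needs at least 4 good matches to estimate the homography.
    """
    orb = cv2.ORB_create(1000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)

    src = np.float32([kp_b[m.queryIdx].pt for m in matches[:200]]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches[:200]]).reshape(-1, 1, 2)

    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = img_a.shape[:2]
    return cv2.warpPerspective(img_b, H, (w, h))
```

Even this simplified version involves feature detection, matching, robust estimation and a full-frame warp per shot, which hints at why doing it invisibly, at high quality, on 2014-era phone hardware was a tall order.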

There are other engineering challenges too. For instance:

If each sensor can capture video at 120FPS, then having two sensors permits video capture at 240FPS.

Not so fast. You could stagger the captures from one sensor to the other with enough engineering, but you'd also need a faster bus from the sensors to memory and the encoding hardware. And, again, the two cameras are pointing at subtly different pictures, so you might get some freaky flickering between alternating frames.
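
To illustrate the staggering idea, here is a toy model of interleaving two 120FPS streams into one 240FPS stream. It assumes the second sensor is triggered exactly half a frame period late, which is precisely the part that takes the "enough engineering":

```python
def interleave(stream_a, stream_b):
    """Merge two 120FPS frame streams into one 240FPS stream.

    Assumes sensor B is triggered half a frame period (1/240 s)
    after sensor A, so frames alternate A, B, A, B, ...
    """
    for frame_a, frame_b in zip(stream_a, stream_b):
        yield frame_a  # t = n / 120
        yield frame_b  # t = n / 120 + 1/240

# Demo with timestamps standing in for frames:
a = [n / 120 for n in range(4)]
b = [n / 120 + 1 / 240 for n in range(4)]
print(list(interleave(a, b)))
# Uniform 1/240 s spacing, i.e. 240 samples per second.
```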

Both lenses could be pointed outwards to capture a wider field of view.

That would kind of make it hard for them to be used to increase each other's resolution in your Cortex Camera clone, wouldn't it?

And finally, few people actually want what you're asking for. People who actually want lower noise from their camera? They buy a dedicated camera with a bigger lens putting more light on a bigger sensor. Professionals who have a need for super-high-FPS shooting? They shell out the big bucks for a capable video device. James Cameron wants to shoot 3D? Likewise.

The iPhone is not marketed at people who use these capabilities, yet developing for them is an expensive endeavor involving Apple paying many engineers lots of money. It's not even clear that such an investment would break even. Apple would rather develop software that increases their ability to sell cell phones to more people. If the iPhone ever gets two cameras on the same side, it will be a way to record 3D video.

  • Your comment "If the iPhone ever gets two cameras on the same side, it will be a way to record 3D video." is interesting. Regarding your other points, yes, I understand that having the lenses point outwards means that fusing of images Cortex Camera-style won't work. And, yes, you do need a faster bus and enough horsepower to encode 240FPS HD video. Regarding the "few people want what you're asking for" comment, you could say the same thing about mirrorless cameras a few years back, about 120 FPS capture in a phone (before the iPhone 5s came out), and so forth :) – Kartick Vaddadi Jan 21 '14 at 05:26
  • In other words, saying, "It's not available in the market, therefore I conclude that there's no need for it" is a mistake. – Kartick Vaddadi Jan 21 '14 at 05:27
  • @KartickVaddadi - You know what? I'm getting the vibe that you sort of posted this question as a rhetorical question, one that you didn't want answered, to make yourself feel good about being smarter than the existing phone manufacturers. But now that someone's actually answering it you don't like the answer. Well, bully for you! Maybe you should take your superior product-market expertise and get Apple or Google/Motorola to hire you as a product manager so you can turn the industry on its head with franken-cameras? –  Jan 21 '14 at 05:50
  • In the meantime, though, compare the engineering expenses necessary to go to 120fps. Things that said expense really didn't include: a massive re-architecture of the camera subsystem to include multi-sensor image integration process doing crazy math in an attempt to deal with lens misalignment which could end up making things worse and blurrier if things were slightly off. –  Jan 21 '14 at 06:06
  • I have better things to do than posting a question to make myself feel better. Do you expect people to blindly accept your answer whether or not it makes sense to them? I replied with specific points that you could have discussed, till we understood each other, but it seems like you're more interested in jumping to conclusions about someone's motivations and/or arguing. So let's bring this discussion to a close and not waste each other's time any more. – Kartick Vaddadi Jan 21 '14 at 09:38

The fundamental problem is the lack of mass-produced chips to do the processing required for the things you're requesting.

Solving the parallax problem in software is possible, especially when the error is small, as is the case with cameras a few mm apart. The problem is that phones are made of generic, mass-produced consumer chips that cost a few dollars each. You state the iPhone camera costs less than $13; say it's $10. Then including an extra camera would cost $10. But the chip to merge the images while correcting for parallax might cost an awful lot more than that, because you would have to amortize the cost of developing the chip. And if you decided to use a general-purpose CPU for that, well, there goes your 240fps, and no one wants to drain the phone battery for the sake of doubling the field of view or dynamic range.

The iPhone photographer isn't discerning enough to bear the cost. And those who are will be using bridge cameras with larger lenses that can't be placed a few mm apart, so the parallax problem comes back with a vengeance, never mind the fact that large lenses and sensors are much more expensive!

Matt Grum

You are comparing apples and oranges. The sensor in the iPhone 5 isn't the same sensor that is going to be in a $400 point-and-shoot. Also, shooting from multiple sensors isn't as simple as you think it is. There is a slight difference in angle between the two points of view (this is why we can see in 3D) that would render it unsuitable for most uses.

Most of the things you mention as advantages could be accomplished more easily, cheaply and effectively by simply increasing the size of the sensor by the same amount.

AJ Henderson
  • Sensors are definitely one of the limiting factors for framerates. A sensor may be able to expose at 1/4000s electronically, but that doesn't mean it's capable of 4000 fps; it may take 1/50s to read out each frame line by line, even if capturing it took 1/4000s. (A quick calculation illustrating this follows these comments.) – Matt Grum Jan 21 '14 at 14:37
  • Reading out values may be simple but it's still time consuming, due to the staggering number of pixels (many millions) that have to transfer their charge and have it converted to a digital value. Image processing can be parallelised very effectively using an ASIC, parallelising readout requires more circuitry on chip which reduces the fill fraction and thus light sensitivity. It's thus much easier to increase the image processing throughput of a camera than it is to increase the sensor readout speed. – Matt Grum Jan 21 '14 at 15:07
  • AJ, I'm not comparing iPhones with $400 point-and-shoots. I was instead saying that my suggestion would be applicable to both categories of devices, which are thickness-constrained and can't therefore use larger sensors as you suggest. Pocketable cameras are an important segment of the market, the majority of it, in fact. – Kartick Vaddadi Jan 24 '14 at 01:31
  • @KartickVaddadi - why does a larger sensor have to be a thicker sensor? Surface area is what matters and that could be on one single wafer or on two different ones. – AJ Henderson Jan 24 '14 at 01:46
  • Maybe the optics have to be correspondingly bigger? I didn't say a thicker sensor, but a thicker camera. I haven't seen a pocketable APS-C camera. Even the RX100, which has a 1-inch sensor, is not really pocketable. – Kartick Vaddadi Jan 24 '14 at 05:11
  • The amount of glass is proportional to the surface area. I don't think it matters how you double it. If anything, I'd think that having two sensors would take more glass than having one sensor that has the combined area of the two sensors. We're not talking about APS-c sensors here, we're talking about sensors that are still quite small. – AJ Henderson Jan 24 '14 at 05:47
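
A quick back-of-the-envelope calculation illustrating the readout-speed point from Matt Grum's comments above; the numbers are his illustrative examples, not the specs of any real sensor:

```python
# Exposure can be short while readout stays slow; the sustained frame
# rate is capped by whichever per-frame stage is slowest.
exposure = 1 / 4000   # electronic shutter: 1/4000 s
readout = 1 / 50      # line-by-line readout of the whole frame: 1/50 s

max_fps = 1 / max(exposure, readout)
print(f"max sustained frame rate ~ {max_fps:.0f} fps")  # ~50 fps, not 4000
```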