
To be clear, I'm not referring to any processing stage, just the hardware. What is preventing me from sampling it at a higher rate? Will it burn out, or just start giving me garbage?

What is the reason that a chip capable of taking 20MP images can't take 20MP video? Is it the rate at which the pixel array transfers data? Exposure rate? Electrical noise?

What is preventing a 20MP sensor from taking, say, 240p extremely-high-FPS video?

Also, do you usually have control over being able to sample from the pixel array or are you forced to do it in predetermined patterns? Can you for example just sample from half the array 2x faster? Can that half be random?

dingrite

2 Answers


What is preventing me from sampling it at a higher rate?

Every discrete step in the operation takes time. Changing the value of inputs to a logic gate takes time. Once those inputs have stabilized, getting a stable output from the gate takes time. Placing a new value on a data bus takes time, and then signaling the other device(s) that the data is ready takes time.

In a complicated device like an image sensor, you might have hundreds or thousands of signals changing in parallel and interacting with each other, so there's a need for a synchronization signal -- something that starts all those changes at once, and then allows enough time for the analog signals to stabilize in all the affected parts. That synchronization comes from a clock signal, which is sort of like the heartbeat of the system. The clock's period -- the time between one tick and the next -- has to be long enough to accommodate the slowest component.
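To put a rough number on that, here's a back-of-envelope sketch: the clock period is bounded below by the worst-case settling time along any signal path, so the maximum safe clock rate is its reciprocal. All the delay figures here are made-up illustrative values, not specs for any real sensor:

```python
# Sketch: the clock period must cover the slowest signal path.
# Delay values below are illustrative, not real sensor specs.
path_delays_ns = {
    "pixel reset": 12.0,
    "row select + settle": 35.0,   # analog settling is often slowest
    "column amplifier": 20.0,
    "ADC sample": 25.0,
}

worst_case_ns = max(path_delays_ns.values())
max_clock_mhz = 1_000 / worst_case_ns   # period in ns -> frequency in MHz

print(f"Slowest stage:  {worst_case_ns} ns")
print(f"Max safe clock: {max_clock_mhz:.1f} MHz")
# Clocking faster than this means reading signals before they've settled.
```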

Will it burn out or just start giving me garbage?

Garbage, most likely. The "values" that we're talking about here are voltages. In digital electronics, the digital value of a given signal can only be 0 or 1, on or off. But those values are represented by analog voltages, for example voltages between 0.0V and 0.5V might represent a logical 0 or false value, and voltages between 2.4V and 3.3V might represent a 1 or true value. If you try to read a signal before it has stabilized, though, it might be anywhere between 0V and 3.3V.
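To make that concrete, here's a small sketch using the example thresholds from the paragraph above (the thresholds are illustrative, not from any particular logic family's datasheet):

```python
# How analog voltages map to logic levels, using the example thresholds
# above: 0.0-0.5 V reads as logic 0, 2.4-3.3 V reads as logic 1.
def read_logic_level(voltage: float) -> str:
    if 0.0 <= voltage <= 0.5:
        return "0"
    if 2.4 <= voltage <= 3.3:
        return "1"
    # Mid-rail: the signal hasn't settled, so the result is garbage.
    return "undefined"

for v in (0.2, 1.7, 3.0):
    print(f"{v:.1f} V -> {read_logic_level(v)}")
```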

However, there is also a risk of permanent damage to a device if you try to run it so fast that it can't dissipate the power it's using. Heat is the enemy of electronics -- materials generally have lower resistance and work faster when they're cool.

What is the reason that a chip capable of taking 20MP images can't take 20MP video?

There are a number of factors:

1. The sensor's readout time would have to be less than the reciprocal of the frame rate: if you want 30 fps video, you'd have to be able to read the entire 20Mp sensor in less than 1/30s.
2. The camera would also need a data bus that could support that rate. I don't actually know what the data bus speed is in a modern DSLR, but for 30 fps of 20Mp data, you'd need a 24-bit bus running around 30 × 20,000,000 = 600MHz, which is probably faster than you'd find in most cameras.
3. The camera itself needs some time to process the data -- even RAW files are processed and compressed to some degree.
4. You'd need a place to store all that data. Even a large SD or CF card wouldn't hold more than a few minutes' worth of 20Mp video.
5. The write speed of flash memory is already a limiting factor; cameras typically cache images while they're being written to storage, and most cameras only have enough memory to cache a few dozen full-resolution RAW images taken in burst mode.
6. Most people have no practical way to view 20Mp video -- even 4K UHD video is only about 8.3Mp per frame.

There's an 8K standard, so it's clearly not impossible to read data from a high-resolution sensor quickly, but for all the reasons listed above, it's not surprising that you don't find that capability in your current DSLR.
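As a sanity check on those numbers, here's a back-of-envelope estimate. The bit depth and bus width are illustrative assumptions, not measured specs of any camera:

```python
# Data-rate estimate for uncompressed 20Mp / 30 fps video.
# Bit depth and bus width are illustrative assumptions.
pixels_per_frame = 20e6   # 20 Mp sensor
fps = 30
bits_per_pixel = 24       # e.g. 8 bits per channel, 3 channels

pixel_rate = pixels_per_frame * fps       # pixels per second
bus_clock_hz = pixel_rate                 # one 24-bit pixel per bus cycle
bytes_per_second = pixel_rate * bits_per_pixel / 8

print(f"Pixel rate: {pixel_rate / 1e6:.0f} Mp/s")
print(f"Bus clock:  {bus_clock_hz / 1e6:.0f} MHz on a 24-bit bus")
print(f"Data rate:  {bytes_per_second / 1e9:.1f} GB/s")
print(f"Per minute: {bytes_per_second * 60 / 1e9:.0f} GB")
```

At roughly 1.8 GB/s, a single minute of uncompressed footage is over 100 GB, which is why even a large card holds only a few minutes' worth.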

Also, do you usually have control over being able to sample from the pixel array or are you forced to do it in predetermined patterns? Can you for example just sample from half the array 2x faster? Can that half be random?

I haven't really looked into reading individual pixels from an image sensor, but you'd have to assume that doing so would slow things down a lot, because you'd have to send the sensor the coordinates of every pixel you wanted to read. Reading data from the sensor in a pattern that the sensor already knows about saves a lot of work (and probably time). If you look at the data sheet for an image sensor (this one for example), you'll see that there are a lot of registers that control what data to read and how that data should be formatted, so there's certainly some control, but reading the entire image in random order probably isn't practical. A sketch of what that register-level control looks like follows below.
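Here's a purely hypothetical sketch of that kind of configuration. The device address, register names, and addresses are made up for illustration and do not match the datasheet linked above; only the general pattern (writing windowing/subsampling registers over I2C) is realistic:

```python
# Hypothetical sketch: configuring an image sensor's readout window over
# I2C. Register map and device address are invented for illustration;
# consult the real datasheet for an actual part.
from smbus2 import SMBus

SENSOR_I2C_ADDR = 0x36     # made-up device address

# Made-up register map; real sensors define similar registers.
REG_WINDOW_X_START = 0x10
REG_WINDOW_Y_START = 0x12
REG_WINDOW_WIDTH   = 0x14
REG_WINDOW_HEIGHT  = 0x16
REG_SUBSAMPLE      = 0x18  # 0 = full res, 1 = every 2nd pixel, ...

def write_reg16(bus, reg, value):
    # Split a 16-bit value across two 8-bit registers, high byte first.
    bus.write_byte_data(SENSOR_I2C_ADDR, reg, (value >> 8) & 0xFF)
    bus.write_byte_data(SENSOR_I2C_ADDR, reg + 1, value & 0xFF)

with SMBus(1) as bus:
    # Read a 1280x960 window from the sensor centre, skipping every
    # second pixel -- one "predetermined pattern" a sensor supports.
    write_reg16(bus, REG_WINDOW_X_START, 976)
    write_reg16(bus, REG_WINDOW_Y_START, 732)
    write_reg16(bus, REG_WINDOW_WIDTH, 1280)
    write_reg16(bus, REG_WINDOW_HEIGHT, 960)
    bus.write_byte_data(SENSOR_I2C_ADDR, REG_SUBSAMPLE, 1)
```

The point is that the sensor only offers the patterns its registers describe; there's no register for "give me these 10,000 pixels in arbitrary order."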


From a comment:

CMOS sensors are usually able to take higher FPS video at lower resolutions, do you know what enables them to do so?...Does the CPU downsample, or does the sensor itself do that?

Looking at the datasheet linked above might help. You can glean a lot even if you don't fully understand it. For example, we can tell just from the specs that the sensor itself must limit the data it sends for different formats, because it can transfer more frames at lower resolutions. Among the key features is this:

maximum image transfer rate:
QSXGA (2592x1944): 15 fps
1080p: 30 fps
1280x960: 45 fps
720p: 60 fps
VGA (640x480): 90 fps
QVGA (320x240): 120 fps
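One thing you can pull out of that table (my own arithmetic, not a figure from the datasheet): the per-frame pixel count falls much faster than the frame rate rises, so the sensor's total pixel throughput stays within a fixed budget:

```python
# Total pixel throughput for each mode in the table above.
# Resolutions for the named formats are the standard values.
modes = {
    "QSXGA": (2592, 1944, 15),
    "1080p": (1920, 1080, 30),
    "1280x960": (1280, 960, 45),
    "720p": (1280, 720, 60),
    "VGA": (640, 480, 90),
    "QVGA": (320, 240, 120),
}

for name, (w, h, fps) in modes.items():
    print(f"{name:>8}: {w * h * fps / 1e6:6.1f} Mp/s")
# Throughput never exceeds the full-resolution figure (~75 Mp/s),
# consistent with the readout path being the bottleneck.
```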

Digging in, you'll find information about subsampling and windowing. Subsampling is where the sensor outputs only some of the pixels, like every second pixel or every fourth pixel. This effectively reduces the resolution of the sensor. Windowing limits the rectangular region of the sensor from which pixels are output. These functions (and many more) are performed by the sensor itself and are configurable.
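Here's a quick way to emulate what those two functions do to the data, using NumPy array slicing. This just mimics the effect on an image array; the real sensor applies it during readout:

```python
import numpy as np

# Stand-in for a full-resolution sensor readout (height x width).
full_frame = np.arange(1944 * 2592).reshape(1944, 2592)

# Subsampling: output every second pixel in both directions,
# quartering the data while keeping the full field of view.
subsampled = full_frame[::2, ::2]          # -> 972 x 1296

# Windowing: output only a rectangular region, keeping full detail
# but a cropped field of view.
window = full_frame[600:1320, 900:2180]    # -> 720 x 1280

print(subsampled.shape, window.shape)
```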

Look at the datasheets for other sensors, too. I found one for an ON Semiconductor sensor (PDF) that's quite informative. You'll find more information there about subsampling and windowing, and also a number of timing diagrams that give some idea of the sync issues I talked about above.

Caleb
  • If you want to do interesting forms of sub-sampling, CCDs are much more useful, so there's no need to implement it on CMOS – Chris H Mar 16 '17 at 21:42
  • "...most cameras only have enough memory to cache a dozen or fewer full-resolution images taken in burst mode." ???? This is not true in 2017. – Michael C Mar 16 '17 at 23:56
  • @MichaelClark Depends on size, of course. A Canon 5D mk IV can buffer 15-20 RAW images, IIRC. Wouldn't much matter if it was 50 or 100, though -- the point is that writing to flash storage is a bottleneck. – Caleb Mar 17 '17 at 00:20
  • I don't see the qualifier raw in your answer. A JPEG image can also be full-resolution. There are plenty of cameras now that can shoot JPEG at full burst rate until the memory card is filled or the battery dies. – Michael C Mar 17 '17 at 00:23
  • @MichaelClark I think you're missing the point. – Caleb Mar 17 '17 at 00:25
  • No I'm not missing the point. You didn't qualify a statement that needs to be qualified to be true. – Michael C Mar 17 '17 at 00:27
  • CMOS sensors are usually able to take higher FPS video at lower resolutions, do you know what enables them to do so? I'm primarily interested in the hardware pathway of that process. Does the CPU downsample, or does the sensor itself do that? If so, how? – dingrite Mar 17 '17 at 00:27
  • The 7D Mark II can go about 30 raw images at 10 fps before the buffer bogs down for anyone who is counting. – Michael C Mar 17 '17 at 00:27
  • @dingrite It seems to be more an issue of processing power in the CPU and write speed to the memory card than sensor readout limitations. The bandwidth between sensor and buffer memory (where readout data is held after digitization but prior to processing by the CPU) is probably not designed to be much wider than needed to feed the CPU at its maximum data rate, but it probably very easily could be if the CPU were faster. The problem with high power CPUs is energy consumption, which of course impacts battery life... It's all one system designed to work as efficiently as possible as a system. – Michael C Mar 17 '17 at 00:32
  • @dingrite Updated my answer above to include your comment. – Caleb Mar 17 '17 at 01:47
  • The sensor's read speed and the data bus speed are considerably faster than realistic frame rates. You're pulling data from the full sensor every time you use live view mode. The reason a camera can't take 20 MP video is entirely that video must be compressed (or else the flash cards wouldn't be fast enough), and the compressor hardware can't handle those sizes. That said, with alternative firmware like magic lantern, many cameras can record RAW (full-res) video for a few seconds at a time. – dgatwood Mar 17 '17 at 21:39
  • @dgatwood the sensors I've looked at all have sub-sampling capability that lets you read lower resolution (therefore less data) for things like live view. – Caleb Mar 17 '17 at 22:48
  • Heh. You're right. My bad. They're doing subsampling in hardware. They're polling the full area of the sensor, but not polling all the pixels. I think the closest they've gotten is ~3.5K video at 24 fps on a 5D Mark III—still way more than 1080p, but nowhere near full res. – dgatwood Mar 17 '17 at 23:40
  • @dgatwood when you go into the camera view mode on a device like a smartphone, does the IPS display a raw feed at native screen resolution and 60fps? And if not, can a third-party app actually do that? – dingrite Mar 20 '17 at 02:12

You're really asking a question that belongs on Solid-State-Engineering.SE, if there were such a site. I've seen chips with 64 parallel readouts, so that each chain can run at a relatively slow speed (reading 1/64th of the array) while maintaining a high overall frame rate. Then there are concerns such as pixel size: the larger the pixel, the more light it can collect, but the larger its capacitance (in general), which leads to other speed problems. And so on :-( .

For all that, there are specialty video cameras which run at thousands of FPS continuously. They not only have specialty sensors but also ultra-high-speed buses to superfast memory.
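To put a number on the parallel-readout trick (my own illustrative arithmetic, not figures from a specific chip):

```python
# How parallel readout chains relax the per-channel speed requirement.
# Sensor size and frame rate are illustrative assumptions.
pixels = 20e6        # 20 Mp sensor
fps = 1000           # high-speed video target
channels = 64        # parallel readout chains

total_rate = pixels * fps             # pixels/s the whole sensor must deliver
per_channel = total_rate / channels   # pixels/s each chain must sustain

print(f"Total:       {total_rate / 1e9:.1f} Gp/s")
print(f"Per channel: {per_channel / 1e6:.1f} Mp/s")
# 20 Gp/s total becomes ~312 Mp/s per chain -- still fast, but feasible,
# which is why high-speed sensors split the array across many readouts.
```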

Carl Witthoft