Idea: plenoptic assist for traditional DSLRs
This is cool: a company called Lytro is taking orders for its plenoptic (or ‘light field’) consumer photo camera, which it expects to ship in 2012.
A plenoptic camera trades spatial (2D) information for distance information. Think of it as a grid of thousands of minuscule-resolution cameras all pointing straight ahead, with software combining the miniature photos back into a single exposure. (You used to have something similar in the analogue world in the Lomo camera, but since that lacked the sophisticated software required to make something of the extra information it recorded, it was basically only used for the cool effects.)
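To make the "grid of tiny cameras" picture a bit more concrete, here is a minimal Python sketch of how the raw data behind a microlens array could be rearranged into those miniature photos. It is an idealisation, not Lytro's actual pipeline: the aligned k × k pixel block per microlens, the function name and the toy sizes are all my own assumptions.

```python
import numpy as np

def split_subapertures(raw, k):
    """Rearrange an idealised raw plenoptic capture into k*k low-resolution views.

    Assumes each microlens covers an aligned k x k block of sensor pixels
    (a real camera would also need calibration and demosaicing).
    raw: 2D array of shape (H*k, W*k).
    Returns an array of shape (k, k, H, W): one small image per viewing direction.
    """
    Hk, Wk = raw.shape
    H, W = Hk // k, Wk // k
    views = np.empty((k, k, H, W), dtype=raw.dtype)
    for u in range(k):
        for v in range(k):
            # Pixel (u, v) under every microlens sees the scene from roughly
            # the same direction, so collecting them all gives one tiny view.
            views[u, v] = raw[u::k, v::k]
    return views

# Toy example: a synthetic sensor with 20x20 microlenses and 5x5 pixels per lens.
raw = np.random.rand(100, 100)
views = split_subapertures(raw, k=5)
print(views.shape)  # (5, 5, 20, 20)
```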
The extra information can be used to focus on a specific plane or object after the fact, to remove objects or visual artefacts, to create stereo images, and for many, many more things.
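Refocusing after the fact is, in its simplest form, a matter of shifting those miniature views against each other and averaging them (the classic shift-and-add trick). Here is a rough sketch under the same assumed (k, k, H, W) layout as above, using crude integer-pixel shifts where a real implementation would interpolate:

```python
import numpy as np

def refocus(views, alpha):
    """Synthetically refocus a stack of sub-aperture views by shift-and-add.

    views: array of shape (k, k, H, W) as in the previous sketch.
    alpha: refocus parameter; each view is shifted in proportion to its offset
    from the central view, then all views are averaged. Objects whose parallax
    matches the shift line up and come into focus; everything else blurs.
    """
    k = views.shape[0]
    c = (k - 1) / 2.0  # centre of the aperture grid
    out = np.zeros(views.shape[2:], dtype=float)
    for u in range(k):
        for v in range(k):
            dy = int(round(alpha * (u - c)))
            dx = int(round(alpha * (v - c)))
            # np.roll is a crude integer-pixel shift, good enough for the idea.
            out += np.roll(np.roll(views[u, v], dy, axis=0), dx, axis=1)
    return out / (k * k)

# Stand-in data with the (k, k, H, W) layout from the previous sketch.
views = np.random.rand(5, 5, 20, 20)
near = refocus(views, alpha=1.0)    # pull focus toward nearer objects
far = refocus(views, alpha=-1.0)    # or toward farther ones
print(near.shape)  # (20, 20)
```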
As they say, a grainy, shaky YouTube video with an idiot playing the straight man can say more than a thousand words:
(See also this for a demonstration of more applications.)
But because you’re swapping one type of information for another, you also lose a lot of it. I read somewhere, for instance, that the Lytro uses a 20-megapixel light-sensitive chip to produce a 1-megapixel image. The result is that this type of camera will be mostly useful for photography where you cannot or will not control the setting. The Lytro will be used for snapshots, where otherwise you would use a regular (read: slow) pocket camera and miss the funny face your toddler pulls. Other uses for similar cameras would be surveillance (where you don’t know beforehand which details are important), or medical imaging where you want to separate planes of, say, tissues or cells.
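A quick back-of-the-envelope calculation shows where those missing megapixels go, taking the figures quoted above at face value and assuming an idealised square pixel block under each microlens:

```python
# Back-of-the-envelope: how microlens coverage eats spatial resolution.
# The 20 MP / 1 MP figures are the second-hand numbers quoted above,
# not a specification; the square block is an idealisation.
sensor_pixels = 20_000_000
output_pixels = 1_000_000

pixels_per_microlens = sensor_pixels / output_pixels   # 20 directional samples per output pixel
block_side = pixels_per_microlens ** 0.5               # ~4.5 pixels per side

print(f"{pixels_per_microlens:.0f} sensor pixels feed each output pixel")
print(f"roughly a {block_side:.1f} x {block_side:.1f} block under each microlens")
```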
All other types of photography could make great use of the extra information plenoptic photography has to offer, but cannot afford to give up that much spatial information (i.e. resolution).
So I was thinking: what if you put both a regular sensor and a microlens array with its own dedicated sensor in the same camera? Now, you would not want them to occupy the same space, but as it happens the ‘camera’ (Latin for room) has plenty of space, and many professional cameras use a mirror to reflect the incoming light to a viewfinder. If you’re building a mirror camera with an electronic viewfinder, you could put the microlens array in front of the viewfinder’s light-sensitive chip.
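The low-resolution light-field capture from the viewfinder path could then be boiled down to, say, a depth map and used to guide processing of the full-resolution frame from the main sensor. Below is a minimal sketch of that fusion step; the sensor sizes, the nearest-neighbour upsampling and the depth masking are purely illustrative assumptions, and the depth estimation itself and the geometric alignment of the two optical paths are left out entirely.

```python
import numpy as np

def upsample_depth(depth_lowres, scale):
    """Nearest-neighbour upsampling of a low-res depth map to main-sensor size.

    A real pipeline would also have to align the two optical paths and deal
    with the fact that the two exposures are not simultaneous; this only
    illustrates bridging the resolution mismatch.
    """
    return np.repeat(np.repeat(depth_lowres, scale, axis=0), scale, axis=1)

def mask_by_depth(image, depth, near, far):
    """Keep only the main-sensor pixels whose estimated depth lies between
    `near` and `far`, e.g. to isolate the plane the subject sits on."""
    mask = (depth >= near) & (depth <= far)
    return image * mask

# Hypothetical sizes: a ~1 MP depth map from the viewfinder path guiding a
# larger main exposure (toy 4000x4000 frame, 4x upsampling factor).
depth_lowres = np.random.rand(1000, 1000)
main_image = np.random.rand(4000, 4000)
depth = upsample_depth(depth_lowres, scale=4)
subject_only = mask_by_depth(main_image, depth, near=0.4, far=0.6)
```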
This method of course has its own drawbacks in the form of trade-offs. Because the mirror sends the incoming light to only one sensor at a time, the two exposures cannot be simultaneous, so you could not use this for video, for instance, or for anything else involving much motion. What my idea solves is mostly an engineering problem: it turns a problem of unknown variables into one of mostly known variables, which means throwing a lot less cash at designing the camera and allows a manufacturer to be early to market.