Compressed sensing is a new computational technique for extracting large amounts of information from a signal. Researchers at Rice University, for example, have built a camera that generates 2D images using only a single light sensor (a 'pixel') instead of the millions of pixels in a conventional camera's sensor.
This compressed sensing technology is rather inefficient for forming images: such a single-pixel camera needs to take thousands of exposures to produce a single, reasonably sharp image. Researchers at the MIT Media Lab, however, have developed a new technique that makes image acquisition with compressed sensing fifty times more efficient. For the single-pixel camera, that means the number of exposures can be reduced to several tens.
One intriguing aspect of compressed sensing is that no lens is required – again in contrast with a conventional camera. That also makes the technique particularly interesting for applications at wavelengths outside the visible spectrum.
Compressed sensing exploits the time differences between the light waves reflected from the object being imaged. In addition, the light that strikes the sensor is patterned – as if it had passed through a checkerboard with irregularly positioned transparent and opaque squares. Such a pattern can be produced with a filter, or with a micro-mirror array in which some mirrors are directed towards the sensor and others are not.
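The patterned measurement can be illustrated with a toy sketch (hypothetical names and sizes, assuming NumPy, and ignoring the time-of-flight aspect): a random on/off mask plays the role of the micro-mirror array, and the single sensor records one cumulative intensity per exposure.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 16                       # the scene is an n x n pixel image
image = np.zeros((n, n))
image[4:8, 5:9] = 1.0        # a simple bright square as the "scene"

# One random on/off mask: each micro-mirror either reflects light
# toward the sensor (1) or away from it (0).
mask = rng.integers(0, 2, size=(n, n))

# The single-pixel sensor reads one number per exposure: the total
# intensity of the light that passes the mask.
reading = float(np.sum(mask * image))
```

A single such reading reveals almost nothing about the scene on its own; the information only emerges once many readings with different masks are combined.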
Each time, the sensor measures only the cumulative intensity of the incoming light. But when this measurement is repeated often enough, each time with a different pattern, software can derive the intensity of the light reflected from different points of the subject.
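This recovery step can be sketched as follows. The sketch is a minimal, hypothetical illustration, not the MIT method: it uses random ±1 patterns (equivalent to differencing two complementary on/off masks) and recovers a sparse scene from far fewer readings than pixels using ISTA, a basic iterative soft-thresholding solver, assuming NumPy.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 64           # number of unknown pixels (a flattened 8x8 scene)
m = 32           # far fewer cumulative readings than pixels
x_true = np.zeros(n)
x_true[[5, 20, 41]] = [2.0, -1.5, 1.0]   # sparse scene: 3 bright points

# Each row of A is one random +/-1 pattern; y holds the cumulative
# intensity readings, one per exposure.
A = rng.choice([-1.0, 1.0], size=(m, n))
y = A @ x_true

def ista(A, y, lam=0.05, steps=4000):
    """Solve min 0.5*||Ax - y||^2 + lam*||x||_1 by soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = x - (A.T @ (A @ x - y)) / L      # gradient step on the data fit
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

x_hat = ista(A, y)
```

With only 32 readings of a 64-pixel scene, the solver still locates the three bright points, because the sparsity assumption supplies the missing information.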