MIT scientists hack Kinect, create reflection-less camera

Researchers at the Massachusetts Institute of Technology have found an unexpected new use for the Kinect, Microsoft's motion-sensing peripheral for the Xbox.

MIT's Media Lab is working on a camera that eliminates the pesky reflections that show up when photos are taken through a window, where glare and mirror-like reflections often ruin the shot. The lab's Camera Culture Group built its prototype around the depth-sensing camera in the Xbox One's Kinect, and the result is a camera that can strip those unwanted reflections out of the image.

The MIT team explained that signal processing is what let them get around a camera's inability to pick out multiple reflections. This, they add, could mean big things for ultrasound, terahertz imaging, and other forms of “noninvasive imaging technologies.”

“You physically cannot make a camera that picks out multiple reflections,” said first author Ayush Bhandari. “That would mean that you take time slices so fast that (the camera) actually starts to operate at the speed of light, which is technically impossible. So what’s the trick? We use the Fourier transform.” The Fourier transform is a common signal-processing technique that breaks a signal down into its constituent frequencies.
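To see what that means in practice, here is a minimal Python sketch (using NumPy, with made-up sample tones rather than anything from the MIT work) of a Fourier transform pulling apart a signal built from two overlapping components, the same basic operation the researchers apply to overlapping reflections.

```python
# A minimal sketch of the Fourier transform idea: decompose a signal made of
# overlapping components into its constituent frequencies. The sample signal
# and its frequencies are illustrative, not taken from the MIT system.
import numpy as np

fs = 1000                           # sampling rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)     # one second of samples

# A signal that mixes two tones, standing in for overlapping reflections.
signal = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# The FFT separates the mixture into its frequency components.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

# Report the two dominant frequencies recovered from the mixed signal.
peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]
print("Dominant frequencies (Hz):", sorted(peaks))   # ~[50.0, 120.0]
```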

The researchers modified the Kinect peripheral, teaming up with Microsoft Research in the process; the modifications let the camera use specific light frequencies and spot reflections coming from a range of depths. The approach builds on an existing system that beams light onto the person or object being photographed and measures both the intensity of the reflected light and the time it takes to return.
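That underlying time-of-flight principle comes down to simple arithmetic: the depth of a reflector is the round-trip travel time of the light multiplied by the speed of light, divided by two. The snippet below, with a purely illustrative round-trip time, is only a sketch of that relationship, not code from the MIT prototype.

```python
# A minimal sketch of the time-of-flight principle the Kinect-based system
# builds on: light is emitted, bounces off a surface, and the round-trip
# time of the returning light gives the distance. Values are illustrative.
C = 299_792_458.0  # speed of light in m/s

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Depth of a reflector given the measured round-trip time of light."""
    return C * round_trip_seconds / 2.0

# Example: light returning after ~13.3 nanoseconds came from roughly 2 metres away.
print(distance_from_round_trip(13.3e-9))  # ≈ 1.99 m
```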

“That information, coupled with knowledge of the number of different reflectors positioned between the camera and the scene of interest, enables the researchers’ algorithms to deduce the phase of the returning light and separate out signals from different depths,” explained MIT in a press release.
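As a rough illustration of that idea, the Python sketch below simulates complex time-of-flight measurements at a handful of assumed modulation frequencies, then recovers the depths and strengths of two reflectors (say, a window and the scene behind it) by searching candidate depth pairs and solving for the amplitudes that best explain the measurements. The signal model, frequency values, and grid search are stand-ins chosen for illustration, not the researchers' actual algorithm.

```python
# A hedged sketch of the depth-separation idea: given complex time-of-flight
# measurements at several modulation frequencies and the known number of
# reflectors (here two), recover each reflector's depth and strength.
# The model and search strategy are illustrative, not MIT's method.
import numpy as np
from itertools import combinations

C = 299_792_458.0                                  # speed of light, m/s
freqs = np.array([10e6, 30e6, 50e6, 70e6, 90e6])   # assumed modulation frequencies, Hz

def measurements(depths, amps):
    """Complex ToF response: each reflector contributes a phase set by its round trip."""
    taus = 2.0 * np.asarray(depths) / C
    return np.exp(-2j * np.pi * np.outer(freqs, taus)) @ np.asarray(amps)

# Simulated scene: a weak reflection at 1.0 m (the window) and a stronger one at 2.5 m.
true_depths, true_amps = [1.0, 2.5], [0.4, 1.0]
m = measurements(true_depths, true_amps)

# Knowing there are two reflectors, search candidate depth pairs and solve
# for the amplitudes that best reproduce the measured signal.
candidates = np.arange(0.5, 4.0, 0.05)
best = None
for d1, d2 in combinations(candidates, 2):
    A = np.exp(-2j * np.pi * np.outer(freqs, 2.0 * np.array([d1, d2]) / C))
    amps, *_ = np.linalg.lstsq(A, m, rcond=None)
    err = np.linalg.norm(A @ amps - m)
    if best is None or err < best[0]:
        best = (err, (float(d1), float(d2)), amps.real)

print("Recovered depths (m):", best[1])              # ≈ (1.0, 2.5)
print("Recovered amplitudes:", np.round(best[2], 2)) # ≈ [0.4, 1.0]
```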