In a breakthrough that seems more science fiction than science fact, researchers at MIT have developed a model that can recover “lost dimensions” in images. Translation: it can recreate video from a motion-blurred photograph, and may some day be able to create a 3D scan from a 2D image.


A paper about this breakthrough will be presented at the International Conference on Computer Vision next week, but first author Guha Balakrishnan shared the details with MIT News. Basically, all visual data “collapses” four dimensions (one dimension of time and three dimensions of space) into one or two. Balakrishnan and his team have developed a “visual deprojection” model that can recover/recreate some of this lost information.

The researchers trained a convolutional neural network by feeding it “low-dimensional projections” (i.e. a long exposure made by merging a video into a single image) alongside their original high-dimensional sources (i.e. the actual video). From this data, the algorithm learns the patterns that relate the two, and can then recreate the high-dimensional data from just the projection.
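The paper’s actual architecture isn’t spelled out in the article, but the “projection” half of the training pair is easy to sketch: a long exposure is essentially each pixel averaging the light it received over every frame. A minimal illustration, assuming a grayscale video stored as a NumPy array (all names and shapes here are hypothetical):

```python
import numpy as np

# Hypothetical 24-frame grayscale video, shape (time, height, width).
video = np.random.rand(24, 64, 64)

# A long exposure "collapses" the time dimension: each pixel is the
# average of the light it received across all 24 frames.
long_exposure = video.mean(axis=0)  # shape (64, 64)

print(long_exposure.shape)  # (64, 64)
```

The deprojection model learns the inverse of this collapse: given only `long_exposure`, it tries to reconstruct something like `video`.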

In this case, their trained model was able to recreate 24 frames of a person walking, “down to the position of their legs and the person’s size as they walked toward or away from the camera.”

The researchers eventually want to use this same model to turn 2D images into 3D scans at no additional cost, a project they’re pursuing in collaboration with researchers at Cornell University. This could have a major impact on medical imaging in poorer parts of the world, where capturing a 2D X-ray is far easier, more accessible, and cheaper than capturing a 3D CT scan.

No applications to traditional photography are discussed in detail in the article, but it’s possible this same model could accurately “de-blur” images by recreating a single sharp “frame” from the blurry parts. Imagine capturing a long exposure’s worth of light, then compressing it down to a single short frame with no motion blur.

Of course, now we’re just dreaming, but it’s incredible to see how far computational imaging has come, and we’re excited to see what comes next.

(via Engadget)


Image credits: Photo by Alasdair Elmes