A new technique developed by computer scientists at Virginia Tech, National Tsing Hua University, and Facebook can turn regular photos into 3D-style pictures by filling in the backgrounds hidden behind the foreground objects of still photos.

From the abstract:

We propose a method for converting a single RGB-D input image into a 3D photo, i.e., a multi-layer representation for novel view synthesis that contains hallucinated color and depth structures in regions occluded in the original view. We use a Layered Depth Image with explicit pixel connectivity as underlying representation, and present a learning-based inpainting model that iteratively synthesizes new local color-and-depth content into the occluded region in a spatial context-aware manner. The resulting 3D photos can be efficiently rendered with motion parallax using standard graphics engines. We validate the effectiveness of our method on a wide range of challenging everyday scenes and show fewer artifacts when compared with the state-of-the-arts.

The system basically looks at a photo, estimates its depth, and separates the items in the foreground from the background. It then synthesizes what might appear behind those foreground objects, allowing for a 3D parallax effect that can look similar to the Ken Burns effect or the moving newspaper pictures in Harry Potter.
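To make the idea concrete, here is a minimal sketch (my own illustration in NumPy, not the authors' released code) of the warping step that creates the parallax: pixels shift by an amount proportional to their inverse depth, and the regions that become uncovered are exactly the "holes" the paper's learned inpainting model fills with hallucinated color and depth. The image sizes and shift amount are arbitrary.

```python
# Illustrative only: warp an RGB-D image to a slightly shifted viewpoint.
# Near pixels move more than far ones; uncovered pixels ("disocclusions")
# are the regions that would need to be inpainted.
import numpy as np

def parallax_view(rgb, depth, shift_px=8):
    """rgb: (H, W, 3) uint8; depth: (H, W) float, larger = farther."""
    h, w, _ = rgb.shape
    out = np.zeros_like(rgb)
    zbuf = np.full((h, w), np.inf)            # z-buffer: nearest pixel wins
    disparity = np.clip(shift_px / np.maximum(depth, 1e-6), 0, shift_px)
    disparity = disparity.astype(int)         # horizontal shift per pixel
    for y in range(h):
        for x in range(w):
            nx = min(w - 1, x + disparity[y, x])
            if depth[y, x] < zbuf[y, nx]:     # keep the closer pixel
                zbuf[y, nx] = depth[y, x]
                out[y, nx] = rgb[y, x]
    holes = np.isinf(zbuf)                    # uncovered: needs inpainting
    return out, holes

# Synthetic example: a red "foreground" square in front of a gray backdrop.
rgb = np.full((120, 160, 3), 128, dtype=np.uint8)
depth = np.full((120, 160), 10.0)
rgb[40:80, 60:100] = (255, 0, 0)
depth[40:80, 60:100] = 2.0                    # much closer to the camera
view, holes = parallax_view(rgb, depth)
print("pixels left to hallucinate:", int(holes.sum()))
```

In this toy example the foreground square shifts while the distant backdrop barely moves, leaving a strip of uncovered pixels along its trailing edge; the paper's contribution is filling that strip convincingly, with consistent color and depth, rather than leaving it blank.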

Ultimately this is a gimmick, but given the power of modern phones, it could be coming to a photo-sharing app near you.

By John Biggs

John Biggs is an entrepreneur, consultant, writer, and maker. He spent fifteen years as an editor for Gizmodo, CrunchGear, and TechCrunch and has a deep background in hardware startups, 3D printing, and blockchain. His work has appeared in Men’s Health, Wired, and the New York Times.
