How To Build a Virtual Tom Hanks [Video]

Actor Tom Hanks has played a wide range of characters over the years, yet we always recognize him as Tom Hanks. Why? Is it his appearance? His mannerisms? The way he moves?

A new study gets us closer to an answer by showing that machine learning algorithms can capture the “persona” of a well-photographed person like Tom Hanks and build a digital model of him from the vast number of images available on the Internet.

With enough visual data to mine, the algorithms can also animate the digital model of Tom Hanks to deliver speeches that the real actor never performed.


Researchers reconstructed 3D models of celebrities such as Tom Hanks from large internet photo collections. The models can be controlled by photos or videos of another person. (Credit: University of Washington)

“One answer to what makes Tom Hanks look like Tom Hanks can be demonstrated with a computer system that imitates what Tom Hanks will do,” says lead author Supasorn Suwajanakorn, a graduate student in computer science and engineering at the University of Washington.

The technology relies on advances in 3D face reconstruction, tracking, alignment, multi-texture modeling, and puppeteering that have been developed over the last five years by a research group led by Ira Kemelmacher-Shlizerman, assistant professor of computer science and engineering.
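The group's actual pipeline is far more involved, but one of the steps named above, alignment, is easy to illustrate. Before faces detected in thousands of photos can be averaged or textured, they must be brought into a common pose. A standard way to do that is a Procrustes fit, which finds the scale, rotation, and translation that best map one set of facial landmarks onto another. The sketch below is a minimal illustration of that idea, not the researchers' code; the function name and the (N, 2) landmark format are our own assumptions.

```python
import numpy as np

def procrustes_align(source, target):
    """Rigidly align one set of 2D facial landmarks to another.

    Finds the similarity transform (scale, rotation, translation)
    that best maps `source` points onto `target` in the
    least-squares sense -- the kind of alignment used to normalize
    faces found in many photos before combining them.
    Both inputs are (N, 2) arrays of corresponding landmarks.
    (Illustrative sketch; not the researchers' actual code.)
    """
    src_mean, tgt_mean = source.mean(axis=0), target.mean(axis=0)
    src_c, tgt_c = source - src_mean, target - tgt_mean

    # Optimal rotation comes from the SVD of the cross-covariance matrix.
    U, S, Vt = np.linalg.svd(src_c.T @ tgt_c)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        S[-1] *= -1
        R = (U @ Vt).T

    # Optimal scale, then apply the full similarity transform.
    scale = S.sum() / (src_c ** 2).sum()
    return scale * source @ R.T + (tgt_mean - scale * src_mean @ R.T)
```

Given landmarks for two photos of the same face, the returned points overlay the target pose, so textures sampled at corresponding landmarks line up.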

The team’s latest advances include the ability to transfer expressions and the way a particular person speaks onto the face of someone else—for instance, mapping former President George W. Bush’s mannerisms onto the faces of other politicians and celebrities.
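The study's expression-transfer method is much richer than this, but the core intuition can be shown in a few lines: measure how each facial landmark moves between a driver's neutral and expressive poses, then apply that motion, rescaled to the target's face size, to the target's neutral landmarks. This toy sketch is our own illustration under those assumptions, not the paper's algorithm.

```python
import numpy as np

def transfer_expression(src_neutral, src_expr, tgt_neutral):
    """Toy expression transfer via landmark displacements.

    Computes how each landmark moves between the driver's neutral
    and expressive faces, rescales that motion to the target's face
    size, and applies it to the target's neutral landmarks -- a
    simplified stand-in for driving one person's digital face with
    another person's performance. Inputs are (N, 2) landmark arrays.
    (Illustrative sketch; not the researchers' actual method.)
    """
    # Overall face size, measured as spread of landmarks about their center.
    size = lambda pts: np.linalg.norm(pts - pts.mean(axis=0))
    scale = size(tgt_neutral) / size(src_neutral)

    # Per-landmark motion of the driver, rescaled to the target face.
    delta = src_expr - src_neutral
    return tgt_neutral + scale * delta
```

A face twice as large receives displacements twice as big, so a smile of the same relative width appears on the target.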

See the video for a stunning example of that…