Last week, Mona Lisa smiled. A big, wide smile, followed by what appeared to be a laugh and the silent mouthing of words that could only be an answer to the mystery that had beguiled her viewers for centuries.
A great many people were unnerved.
Mona’s “living portrait,” along with likenesses of Marilyn Monroe, Salvador Dalí, and others, demonstrated the latest technology in deepfakes—seemingly realistic video or audio generated using machine learning. Developed by researchers at Samsung’s AI lab in Moscow, the portraits demonstrate a new method for creating credible videos from a single image. With just a few photographs of real faces, the results improve dramatically, producing what the authors describe as “photorealistic talking heads.” The researchers (creepily) call the result “puppeteering,” a reference to how invisible strings seem to manipulate the targeted face. And yes, in theory it could be used to animate your Facebook profile photo. But don’t freak out about having strings maliciously pulling your visage anytime soon.
“Nothing suggests to me that you’ll just turnkey use this for generating deepfakes at home. Not in the short-term, medium-term, or even the long-term,” says Tim Hwang, director of the Harvard-MIT Ethics and Governance of AI Initiative. The reasons come down to the high cost and technical know-how required to create quality fakes—barriers that aren’t going away anytime soon.