
It is fascinating to live in an age where paintings can suddenly start to show signs of life.
A paper published by Samsung AI Center, which you can find here, shows that a single frame is enough to produce lifelike motion. It presents a new method for transferring facial expressions onto a source face, making the source face mimic what a driving face does.
Previously we could make a face in one video mimic the face in another in terms of what the person is saying or where they are looking. But most of these algorithms require a tremendous amount of data: several minutes, if not hours, of video to analyze.
However, this new method, developed by Samsung’s Moscow-based researchers, shows that from only a single image of a person’s face, a video of that face turning, speaking and making ordinary expressions can be generated. Even though the results are far from perfect, this is just the beginning of the journey, and it opens the door to the widespread creation of speculative and provocative videos.
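To make the one-image idea concrete, here is a deliberately simplified toy sketch of the pipeline shape, not the paper's actual model: a stand-in "embedder" summarizes the source photo once into an identity vector, and a stand-in "generator" combines that vector with per-frame expression (landmark) data from a driving video. All function names and shapes here are hypothetical illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_identity(source_image: np.ndarray) -> np.ndarray:
    # Stand-in for a learned embedder: collapse the image to a fixed vector.
    return source_image.mean(axis=(0, 1))  # shape: (channels,)

def generate_frame(identity: np.ndarray, landmarks: np.ndarray) -> np.ndarray:
    # Stand-in for a learned generator: mix identity and pose into a frame.
    h = w = 8
    mixed = np.outer(landmarks, identity)  # (num_landmarks, channels)
    return np.resize(mixed, (h, w, identity.shape[0]))

source = rng.random((8, 8, 3))               # the single photo of the target person
driving = [rng.random(5) for _ in range(4)]  # one landmark vector per driver frame

identity = embed_identity(source)            # computed once, from one image
video = [generate_frame(identity, lm) for lm in driving]
print(len(video), video[0].shape)            # prints: 4 (8, 8, 3)
```

The point of the sketch is the data flow: the expensive per-person analysis happens once on a single image, and each new output frame only needs cheap pose information from the driving video.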
That being said, note that it works on the face and upper torso only. There is no way to make the Mona Lisa sing or dance. But who knows what is next?