prostheticknowledge:

Real-time Expression Transfer for Facial Reenactment

Graphics research paper from Stanford University demonstrating a method for transferring the expressions of one person's face to another in real time using depth cameras:


We present a method for the real-time transfer of facial expressions from an actor in a source video to an actor in a target video, thus enabling the ad-hoc control of the facial expressions of the target actor. The novelty of our approach lies in the transfer and photo-realistic re-rendering of facial deformations and detail into the target video in a way that the newly-synthesized expressions are virtually indistinguishable from a real video. To achieve this, we accurately capture the facial performances of the source and target subjects in real-time using a commodity RGB-D sensor. For each frame, we jointly fit a parametric model for identity, expression, and skin reflectance to the input color and depth data, and also reconstruct the scene lighting. For expression transfer, we compute the difference between the source and target expressions in parameter space, and modify the target parameters to match the source expressions. A major challenge is the convincing re-rendering of the synthesized target face into the corresponding video stream. This requires a careful consideration of the lighting and shading design, which both must correspond to the real-world environment. We demonstrate our method in a live setup, where we modify a video conference feed such that the facial expressions of a different person (e.g., translator) are matched in real-time.
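
The parameter-space transfer the abstract describes can be illustrated with a linear blendshape face model, a common formulation in this line of work. The sketch below is an assumption for illustration, not the paper's actual model: the basis dimensions, the reconstruct function, and the randomly generated coefficients are all hypothetical stand-ins for values the real system would fit to the RGB-D streams.

    # Minimal sketch of parameter-space expression transfer, assuming a
    # standard linear blendshape face model. All names and dimensions
    # here are hypothetical; the post does not specify the model.
    import numpy as np

    N_VERTS = 5000          # hypothetical mesh vertex count
    N_ID, N_EXP = 80, 76    # hypothetical identity/expression basis sizes

    rng = np.random.default_rng(0)
    mean_shape = rng.standard_normal(3 * N_VERTS)
    E_id = rng.standard_normal((3 * N_VERTS, N_ID))    # identity basis
    E_exp = rng.standard_normal((3 * N_VERTS, N_EXP))  # expression basis

    def reconstruct(alpha, delta):
        """Geometry = mean shape + identity term + expression term."""
        return mean_shape + E_id @ alpha + E_exp @ delta

    # Per-frame coefficients fitted to the source and target videos
    # (assumed given; fitting to color/depth data is the hard part).
    alpha_tgt = rng.standard_normal(N_ID)    # target's identity, kept fixed
    delta_src = rng.standard_normal(N_EXP)   # source's current expression
    delta_src_neutral = np.zeros(N_EXP)      # source's resting expression
    delta_tgt_neutral = np.zeros(N_EXP)      # target's resting expression

    # Transfer: apply the source's expression *change* (difference in
    # parameter space) to the target, leaving identity untouched.
    delta_transferred = delta_tgt_neutral + (delta_src - delta_src_neutral)
    target_geometry = reconstruct(alpha_tgt, delta_transferred)

The re-rendering step the abstract flags as the major challenge (matching reconstructed scene lighting and shading) is not shown here; the sketch only covers the expression-coefficient arithmetic.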


More Here

