C.S. Oliver's Blog, page 66
October 29, 2015
New #amazon #review making me _blush_ #apocalypse #apocalypsethememoir #zombie #5star #shy (at Stratfood)
October 28, 2015
A video posted by chantelle oliver (@apocalypsethememoir) on Oct 28, 2015 at 12:19pm PDT
i know you love me @justinbieber (at City of Stratford - The Municipality)
October 27, 2015
Time for art and wine and cheese in no particular order. (at Milky Whey)
prostheticknowledge:
Real-time Expression Transfer for Facial Reenactment
Graphics research paper from Stanford University demonstrates a method of transferring the expressions of one person’s face to another’s in real time using depth cameras:
We present a method for the real-time transfer of facial expressions from an actor in a source video to an actor in a target video, thus enabling the ad-hoc control of the facial expressions of the target actor. The novelty of our approach lies in the transfer and photo-realistic re-rendering of facial deformations and detail into the target video in a way that the newly-synthesized expressions are virtually indistinguishable from a real video. To achieve this, we accurately capture the facial performances of the source and target subjects in real-time using a commodity RGB-D sensor. For each frame, we jointly fit a parametric model for identity, expression, and skin reflectance to the input color and depth data, and also reconstruct the scene lighting. For expression transfer, we compute the difference between the source and target expressions in parameter space, and modify the target parameters to match the source expressions. A major challenge is the convincing re-rendering of the synthesized target face into the corresponding video stream. This requires a careful consideration of the lighting and shading design, which both must correspond to the real-world environment. We demonstrate our method in a live setup, where we modify a video conference feed such that the facial expressions of a different person (e.g., translator) are matched in real-time.
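Reading the transfer step literally: with a shared linear face model, "modifying the target parameters to match the source expressions" means the target keeps its own identity (and reflectance) coefficients while adopting the source's expression coefficients. Here is a minimal NumPy sketch of that idea, assuming a linear blendshape-style model where geometry = mean + identity basis · identity coefficients + expression basis · expression coefficients; all names and shapes are illustrative assumptions, not the authors' actual code:

import numpy as np

def transfer_expression(mean_face, id_basis, expr_basis,
                        target_identity, source_expr):
    # Linear parametric face model (an assumption for illustration):
    # geometry = mean_face + id_basis @ identity + expr_basis @ expression.
    # Keeping the target's identity coefficients and swapping in the
    # source's expression coefficients transfers the expression.
    verts = mean_face + id_basis @ target_identity + expr_basis @ source_expr
    return verts.reshape(-1, 3)  # one (x, y, z) row per mesh vertex

# Toy usage with random bases (shapes only, no real face data).
rng = np.random.default_rng(0)
n_verts, n_id, n_expr = 1000, 80, 76
mean_face = rng.normal(size=3 * n_verts)
id_basis = rng.normal(size=(3 * n_verts, n_id))
expr_basis = rng.normal(size=(3 * n_verts, n_expr))
target_identity = rng.normal(size=n_id)  # who the target looks like
source_expr = rng.normal(size=n_expr)    # what the source is expressing
mesh = transfer_expression(mean_face, id_basis, expr_basis,
                           target_identity, source_expr)
print(mesh.shape)  # (1000, 3)

The hard part the paper emphasizes is not this parameter swap but the re-rendering: the synthesized face must be composited back into the video with lighting and shading that match the real scene.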
Look who is here! The road tour continues!! (at Monforte on Wellington)