Will life-like skin movement rescue digital characters from the uncanny valley once and for all?
By Frederick Blichert | MOTHERBOARD
Last winter’s Rogue One came pretty close to bringing an actor back from the dead. Peter Cushing, who died in 1994, appears in CGI form, “reprising” his role as Grand Moff Tarkin from the original Star Wars. Cushing’s longtime secretary was overwhelmed when she saw the digital version of the deceased actor.
There’s always something unsettling about these exercises in recreating live-action characters in digital form—they never quite manage to climb out of the uncanny valley, so they end up being a little creepy rather than life-like. That may be about to change. A computer scientist at the University of British Columbia is working to create an algorithm that simulates the movement of skin, one of the last hurdles to creating truly life-like digital characters (still no word on when CGI eyes will stop being so scary).
Dinesh Pai and his team have combined new and old tech to capture how skin folds, stretches, wrinkles, and bounces on the body. The process is a lot like traditional motion capture, but Pai’s focus is more specific. “Typically when people do motion capture, they’re not trying to capture the motion of the skin, they’re trying to capture the motion of the bones and the skeleton of the body,” he said.