Thursday, May 23, 2013

The Sci-Fi Boys (Documentary 2006)

Legendary all-stars of cinema bring to life the evolution of science-fiction and special effects films from the wild and funny days of B-movies to blockbusters that have captured the world's imagination. This is the story of the Sci-Fi Boys, who started out as kids making amateur movies inspired by Forrest J Ackerman's FAMOUS MONSTERS magazine and grew up to take Hollywood by storm, inventing the art and technology for filming anything the mind can dream.

Wednesday, May 22, 2013

The State of the VFX Industry and where do we go from here

Scott Ross and Scott Squires discuss the past, present and future of the VFX industry, and potential solutions to the myriad problems facing us all.

Effects Corner - Scott Squires blog

VFX Business - Scott Ross blog

The State of the VFX Industry and where do we go from here from Rick Young on Vimeo.

Monday, May 13, 2013

Saturday, May 11, 2013

1973 Interview with Ray Harryhausen

In this 1973 interview, Chris Kelly talks with special effects wizard Ray Harryhausen about the art of stop-motion animation and Columbia Pictures' then-new film, The Golden Voyage of Sinbad.

Monday, May 6, 2013

ILM - Realtime Facial Animation With On-the-fly Correctives (SIGGRAPH 2013)

More info here: Hao Li - Realtime Facial Animation With On-the-fly Correctives

Published on Apr 19, 2013

SIGGRAPH 2013 Paper Video: We introduce a real-time and calibration-free facial performance capture framework based on a sensor with video and depth input. In this framework, we develop an adaptive PCA model using shape correctives that adjust on-the-fly to the actor's expressions through incremental PCA-based learning. Since the fitting of the adaptive model progressively improves during the performance, we do not require an extra capture or training session to build this model. As a result, the system is highly deployable and easy to use: it can faithfully track any individual, starting from just a single face scan of the subject in a neutral pose. Like many real-time methods, we use a linear subspace to cope with incomplete input data and fast motion. To boost the training of our tracking model with reliable samples, we use a well-trained 2D facial feature tracker on the input video and an efficient mesh deformation algorithm to snap the result of the previous step to high frequency details in visible depth map regions. We show that the combination of dense depth maps and texture features around eyes and lips is essential in capturing natural dialogues and nuanced actor-specific emotions. We demonstrate that using an adaptive PCA model not only improves the fitting accuracy for tracking but also increases the expressiveness of the retargeted character.
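The core building block the abstract describes — a PCA basis that updates on-the-fly as new samples arrive — can be sketched in a few lines of NumPy. This is only a toy illustration of incremental PCA (following the standard incremental SVD update, as popularized by Ross et al. and used in scikit-learn's IncrementalPCA), not the paper's actual corrective-shape pipeline; the class name and interface are invented for the example. Imagine each sample is a captured face mesh flattened into a vector:

```python
import numpy as np

class AdaptivePCA:
    """Toy incremental PCA: the subspace basis adapts as new samples
    (e.g. per-frame face shapes, flattened to vectors) stream in.
    Simplified illustration only -- not the paper's algorithm."""

    def __init__(self, n_components):
        self.k = n_components
        self.mean = None          # running mean of all samples seen
        self.components = None    # (k, d) row basis
        self.singular = None      # (k,) singular values
        self.n_seen = 0

    def partial_fit(self, X):
        """Fold a new batch of samples into the existing basis."""
        X = np.atleast_2d(X)
        n = X.shape[0]
        batch_mean = X.mean(axis=0)
        if self.n_seen == 0:
            self.mean = batch_mean
            stacked = X - batch_mean
        else:
            total = self.n_seen + n
            # extra row accounting for the shift between old and batch means
            mean_corr = np.sqrt(self.n_seen * n / total) * (self.mean - batch_mean)
            # old basis (weighted by singular values) + new centered batch
            stacked = np.vstack([self.components * self.singular[:, None],
                                 X - batch_mean,
                                 mean_corr])
            self.mean = (self.n_seen * self.mean + n * batch_mean) / total
        # re-diagonalize and truncate back to k components
        _, S, Vt = np.linalg.svd(stacked, full_matrices=False)
        k = min(self.k, Vt.shape[0])
        self.components, self.singular = Vt[:k], S[:k]
        self.n_seen += n
        return self

    def project(self, x):
        """Express sample(s) as coefficients in the current basis."""
        return (x - self.mean) @ self.components.T

    def reconstruct(self, coeffs):
        """Map basis coefficients back to sample space."""
        return self.mean + coeffs @ self.components
```

Because the model is refit from a small stacked matrix (k old directions plus the new batch) rather than all past data, each update stays cheap enough for a real-time loop, which is the property the paper's adaptive correctives exploit.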

Friday, May 3, 2013