We have just finished leading a three-year collaboration with other European institutions in the EU-funded RE@CT project. The aim of this project was to develop new methods of capturing an actor’s performance as a 3D model, directly from video, including all the subtle aspects of the actor’s appearance and movement. Current ‘motion capture’ technology involves sticking special markers on the performer and surrounding them with cameras that detect just these markers. From the 3D marker positions, the motion capture system can deduce the motion of the actor’s skeleton, and use this to control the motion of a 3D model rendered using computer graphics. The detailed appearance of the actor is lost in this process, and has to be re-created by the artist making the model. The results often won’t look lifelike, as generating a truly lifelike model (including details like folds in clothing and swaying hair) takes a great deal of time and money. This may be OK for big-budget movies, or where the virtual character looks completely different from the actor being captured (e.g. an actor playing an alien), but it is impractical for applications like TV production, particularly where the aim is to reproduce the exact appearance of the actual actor or presenter.
We recently had the opportunity to test some of this technology with our colleagues in BBC Knowledge and Learning, resulting in a public demonstration that has just launched on the BBC’s ‘Taster’ website. This is helping us to explore and evaluate how 3D animations and interactive experiences can be used to supplement more conventional video for applications in learning, such as might feature in an iWonder guide.
We used our multi-camera blue-screen capture studio in London to record a 1.5-minute dance performance by ballerina Caroline Crawley, specially choreographed for us by Jack Thorpe-Baker to include a range of moves and gestures that are commonly used in ballet. Using a blue-screen background makes it easier for the capture system to create a 3D sequence, by helping it work out which parts of each image contain the dancer. We made a video showing some shots from behind the scenes, and took a set of photos. We worked with our project partners to create an animated model of Caroline for the complete dance from the videos captured by the studio’s nine HD cameras. We also edited together a video of the dance from the HD camera recordings, and replaced the blue background with a simple environment containing a wooden floor and some benches, with dramatic lighting.

Having a 3D model of the dancer allowed us to render realistic shadows on the floor, but the real benefit of the model was that it let the viewer switch from the edited video to a ‘free viewpoint’ mode to examine key moments in the dance in more detail. We worked with our editorial colleagues in BBC Knowledge and Learning to add annotations to the modelled dancer explaining key aspects of the dance. We created a web application that integrated the video and 3D model, exactly aligning the initial view of the 3D model with the appropriate camera view at each ‘explore’ point. Note that due to limitations in current browser technology, this won’t work in some browsers.
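To give a feel for why the blue screen helps, here is a minimal sketch (not the actual RE@CT pipeline) of classifying each pixel as dancer or background by chroma dominance: a pixel is treated as background when its blue channel clearly dominates red and green. The thresholds are made-up illustrative values; a production keyer would be far more sophisticated.

```python
# Illustrative sketch only, with assumed thresholds, not the RE@CT method:
# classify pixels as blue-screen background by blue-channel dominance.

def is_background(r, g, b, dominance=1.3, min_blue=80):
    """Return True if the pixel looks like blue-screen background."""
    return b >= min_blue and b > dominance * max(r, g)

def silhouette_mask(image):
    """image: 2D list of (r, g, b) tuples -> 2D list of bools (True = dancer)."""
    return [[not is_background(*px) for px in row] for px_row in [] or image for row in [px_row]]

# Toy 2x3 frame: blue background either side of the performer.
frame = [
    [(20, 30, 200), (180, 150, 140), (25, 35, 210)],  # blue, skin tone, blue
    [(15, 25, 190), (60, 60, 60), (10, 20, 180)],     # blue, dark cloth, blue
]
mask = silhouette_mask(frame)
```

A silhouette mask like this, from each of the nine cameras, is the kind of cue a multi-view reconstruction system can intersect to recover the dancer’s 3D shape.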
We think that this approach of pausing at a key moment in a video to let the viewer explore a 3D model of the scene may have applications in many kinds of content, by allowing the user to examine a ‘freeze frame’ moment from multiple angles, with the option of explanatory overlays or additional graphics. We are already working on a few other applications for this that we hope to be able to share in the near future. Having a 3D model for only a few key moments also limits the amount of extra data that needs to be downloaded: if we had delivered the full dance sequence as a 3D model, it would have taken a very long time to download over a typical domestic broadband connection.
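A back-of-envelope calculation shows why key moments are so much cheaper than a full mesh sequence. All the numbers below are made-up illustrative assumptions (mesh size, per-vertex cost, frame rate), not project figures:

```python
# Illustrative data-budget sketch with assumed numbers, not project figures.

BYTES_PER_VERTEX = 3 * 4 + 3 * 4   # assumed: xyz position + normal, ~24 bytes
VERTICES = 20_000                  # assumed mesh resolution per frame
FRAMES_PER_SECOND = 25
DURATION_S = 90                    # the 1.5-minute dance

# Delivering a fresh mesh every frame of the whole dance...
full_sequence_mb = VERTICES * BYTES_PER_VERTEX * FRAMES_PER_SECOND * DURATION_S / 1e6

# ...versus a handful of static 'explore' key frames.
key_moments_mb = VERTICES * BYTES_PER_VERTEX * 5 / 1e6

print(f"full sequence: ~{full_sequence_mb:.0f} MB, key moments: ~{key_moments_mb:.1f} MB")
```

Even with generous compression, the full sequence is orders of magnitude larger than a few key frames, which is why the hybrid video-plus-explore-points design keeps downloads practical.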
We are also exploring other forms of interactive application where delivering full 3D animated sequences would be useful. One aspect that the RE@CT project looked at was how to capture several sequences of moves that can be edited together seamlessly to create continuous movement, for example building a sequence containing walking, running, jumping and turning from individual captures of each move. We worked with our RE@CT partners to develop a prototype application that lets a user choreograph their own dance by concatenating a selection of captured 3D dance sequences, which are replayed in 3D in the browser. We hope to release this publicly at some point.
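The idea of joining clips seamlessly can be sketched very simply: crossfade the pose over a short overlap at the join so the movement has no visible jump. This is a minimal illustration of the principle, not the RE@CT method, and the poses are simplified to a single joint value per frame:

```python
# Minimal sketch (not the RE@CT method): crossfading two motion clips at the
# join. Each clip is a list of frames; a frame is one joint value here.

def blend_clips(a, b, overlap):
    """Concatenate clip a and clip b, crossfading over `overlap` frames."""
    blended = []
    for i in range(overlap):
        t = (i + 1) / (overlap + 1)  # blend weight ramps towards clip b
        blended.append((1 - t) * a[-overlap + i] + t * b[i])
    return a[:-overlap] + blended + b[overlap:]

walk = [0.0, 0.0, 0.0, 0.0]  # toy 'walk' clip
run = [1.0, 1.0, 1.0, 1.0]   # toy 'run' clip
seq = blend_clips(walk, run, 2)
```

In a real system the blend would operate on full skeletal poses (interpolating joint rotations) and would pick join points where the two clips are already similar, but the crossfade principle is the same.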