Posted by Matthew Shotton
As we continue our research into the implications of object-based broadcasting, our understanding of the requirements for client-side rendering deepens. The HTML5-Video-Compositor served us well as the rendering engine for BBC R&D projects such as Visual Perceptive Media, SqueezeBox, and Forecaster. Through these projects we came to understand that a more general rendering pipeline was needed, one that could overcome some of that library's limitations. The VideoContext provides the same sequencing and playback functionality, while allowing far more complex rendering pipelines to be built and offering better support for timeline-based callbacks.
We built this library taking inspiration from the design of the WebAudio API, while also incorporating some higher-level convenience functions. We hope this gives a sense of familiarity to people picking up the library for the first time who have previous experience with web audio. The VideoContext uses a graph-based rendering pipeline, with video sources, effects, and processing blocks represented as software objects which can be connected, disconnected, created, and removed in real time during playback.
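To illustrate the graph idea, here is a minimal sketch of nodes being wired together and rewired during playback. The names (`Node`, `connect`, `disconnect`) are illustrative only, a conceptual model rather than the VideoContext's actual API:

```javascript
// A minimal sketch of a graph-based pipeline: sources, effects, and a
// destination are nodes that can be connected and disconnected at any time.
// Illustrative names only -- not the VideoContext library's API.
class Node {
  constructor(name) {
    this.name = name;
    this.outputs = new Set();
  }
  connect(target) {
    this.outputs.add(target);
  }
  disconnect(target) {
    this.outputs.delete(target);
  }
}

// Build source -> effect -> destination.
const source = new Node("video-source");
const effect = new Node("greyscale-effect");
const destination = new Node("destination");

source.connect(effect);
effect.connect(destination);

// Later, mid-playback, bypass the effect by rewiring the graph.
source.disconnect(effect);
source.connect(destination);
```

Because the pipeline is just a graph of objects, edits like the bypass above can happen live, without rebuilding the whole rendering setup.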
The browser provides a number of individual technologies that can be used for creating interactive video content. The VideoContext brings together WebGL, HTML5 Video, and a number of new DOM features provided by the HTML5 spec such as “requestAnimationFrame”.
This library takes the same pragmatic approach to timing and synchronization as the HTML5-Video-Compositor. Since the browser makes no guarantees about synchronization between multiple HTML5 video elements, we do the best we can with what it provides. All the major timing functionality is handled in a requestAnimationFrame loop, and the underlying video elements have callbacks listening for stalled states, which cause the whole VideoContext to pause. In practice these techniques make the VideoContext usable in environments with good network speeds (actual performance will depend on processing-graph complexity, network bandwidth, and video resolution).
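The timing approach can be sketched as a single update loop driving a shared playhead, with the whole context pausing whenever any source stalls. This is a simplified model under our own assumptions, not the library's implementation; in the browser the update step would run inside requestAnimationFrame:

```javascript
// Simplified model of the timing strategy: one loop advances a shared
// playhead, and a stalled source (e.g. one that is buffering) pauses the
// whole context. Illustrative sketch only -- not the VideoContext's code.
function createContext(sources) {
  return {
    currentTime: 0,
    paused: false,
    update(dt) {
      // If any underlying source has stalled, pause everything so the
      // sources stay in step with each other.
      if (sources.some((s) => s.stalled)) {
        this.paused = true;
        return;
      }
      this.paused = false;
      this.currentTime += dt;
    },
  };
}

const sources = [{ stalled: false }, { stalled: false }];
const ctx = createContext(sources);

ctx.update(1 / 60); // one frame's worth of time advances the playhead
sources[1].stalled = true;
ctx.update(1 / 60); // a single stalled source pauses the whole context
```

Pausing globally rather than letting elements drift apart is the "best we can with what the browser provides" trade-off: it sacrifices uninterrupted playback to keep the composition coherent.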
At present the library is tested and works on newer builds of both Chrome and Firefox on desktop, and with some issues on Safari. The library isn't fully functional on any mobile platform, in part because a video element must first receive a human interaction before it can be controlled programmatically.
We're using this library internally to provide client-side rendering for our work researching and developing a streamable description for media composition. We're also using it as the rendering engine for a number of demos and experiences we hope to share in the future.
We're making this library available as open source to lower the barrier to entry for everyone who wants to create new kinds of content experiences. We're excited to see what you will make.