We've built an object-based weather forecast and a prototype implementation of the back-end stack needed to deliver these kinds of experiences to a wide array of devices.
Object-based media is essential if we want to harness the full power of an IP-based broadcasting system to provide a consistently excellent experience across all devices. Over the last few years, we have put a great deal of work into ensuring the interfaces, data models and architecture of this new IP-based broadcasting system are fit for purpose, so that we can continue to produce traditional content using this technology while paving the way for genuinely new experiences.
As well as being able to adapt to screen size and device type, object-based media would also be able to adapt to the preferences of the viewer. Larger on-screen graphics for the visually impaired or sign presenters in place of regular presenters to assist the hard of hearing are two obvious examples which could improve the viewing experience for many people, as well as more mainstream features such as tie-ins to personal or other third party data services.
To illustrate some of these concepts, we have built an object-based weather forecast and an example implementation of the back-end stack needed to deliver it to a wide range of client devices. Watch the video at the top of the page to see it in action.
A weather forecast lends itself very well to this type of demonstration: it is a universally understood format and can easily be separated into its constituent components. To construct the experience, we took recordings of our presenters, Dianne and Clive, delivering the forecast in front of a greenscreen. These were then time-aligned with the data stream containing the icons, the animation data driving the WebGL globe and the subtitles, and delivered as a package of related assets for rendering in the playback client. We also built a rendering system for flattening the experience to linear video for delivery to clients unable to run the webapp.
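The package of time-aligned assets might be described by a manifest along these lines. This is a minimal sketch, not our actual data model: the object ids, types and timings are illustrative, but it shows the core idea that each media object carries its own timing so the playback client can align everything at render time.

```javascript
// Hypothetical asset manifest for the object-based forecast package.
// Ids, types and timings are illustrative examples only.
const manifest = {
  duration: 90, // seconds
  objects: [
    { id: "presenter-video",   type: "video",      start: 0,  end: 90 },
    { id: "globe-animation",   type: "webgl-data", start: 0,  end: 90 },
    { id: "temperature-icons", type: "graphics",   start: 12, end: 30 },
    { id: "subtitles",         type: "text",       start: 0,  end: 90 }
  ]
};

// Return the objects that should be on screen at a given playback time.
function activeObjects(manifest, t) {
  return manifest.objects.filter(o => t >= o.start && t < o.end);
}
```

A client rendering frame-by-frame would call something like `activeObjects(manifest, currentTime)` on each update and hand the result to its compositor.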
By recognising the device used to request the experience, a number of decisions can be made to tailor the media served back to the client. For example, a TV that is incapable of executing code, or a low-powered mobile device that can only play a basic video stream, is recognised as such and is served a simple, traditional linear stream. A device requesting the experience through a modern web browser, on the other hand, can be served the full-fat reactive experience with all the enhanced client-side features.
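The server-side decision can be sketched as a simple capability check. The capability names and variant labels here are assumptions for illustration, not a real API; the point is only the branching logic described above.

```javascript
// Sketch: choose which variant of the programme to serve, based on
// capabilities reported by (or inferred about) the requesting device.
// Capability names and variant labels are hypothetical.
function selectVariant(caps) {
  // A device that cannot run the webapp gets a pre-flattened linear stream.
  if (!caps.canExecuteJavaScript || !caps.supportsAdaptiveStreaming) {
    return "flattened-linear-stream";
  }
  // A modern browser gets the full object-based experience.
  return "object-based-webapp";
}
```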
Even though there is quite a lot going on in the background, the experience, as far as the viewer is concerned, is just like watching a regular video. After the initial request, the page served back looks very similar to the standard media player window familiar to anyone who has used BBC iPlayer.
What is different here is that the whole experience is being built from all its different elements 'on the fly', and we can demonstrate some neat tricks by dynamically rearranging this media (represented by the different blocks on the timeline) as it plays into the rendering engine.
In the future, we imagine many of these parameters would be set automatically based on the previously stated preferences of the user, but here explicit control makes for a more effective demonstration.
For example, some of the accessibility features can be demonstrated by changing the styling of the background map, enabling the subtitles (which rearranges the on-screen graphics to avoid overlap) and replacing the regular presenter with a signer.
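Because the programme is just a collection of objects, these accessibility changes amount to editing the composition rather than re-encoding any video. A minimal sketch, with hypothetical object ids and preference names:

```javascript
// Sketch: apply viewer accessibility preferences to the object composition.
// Object ids, preference names and layout values are illustrative.
function applyPreferences(objects, prefs) {
  let result = objects.slice();
  if (prefs.signer) {
    // Swap the regular presenter object for the sign-language presenter.
    result = result.map(o =>
      o.id === "presenter-video" ? { ...o, id: "signer-video" } : o);
  }
  if (prefs.subtitles) {
    // Enabling subtitles also raises the graphics to avoid overlap.
    result = result.map(o =>
      o.type === "graphics" ? { ...o, layout: "raised" } : o);
    result.push({ id: "subtitles", type: "text", layout: "bottom" });
  }
  return result;
}
```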
As another example, the layout of the media can adapt to the screen size of the client device:
Regular TV/desktop display
Mobile landscape (with slightly larger on-screen text)
Mobile portrait with re-positioned subtitles
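The three layouts above can be selected with a simple viewport check in the client. The breakpoint value and template names below are assumptions for the sake of the sketch:

```javascript
// Sketch: pick a layout template from the viewport dimensions.
// The 1024px breakpoint and template names are illustrative.
function chooseLayout(width, height) {
  if (width >= 1024) return "tv-desktop";
  if (width > height) return "mobile-landscape"; // slightly larger on-screen text
  return "mobile-portrait";                      // subtitles re-positioned
}
```

In a real client this would be re-evaluated on resize or orientation change, with the rendering engine re-flowing the active objects into the new template.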
We hope this has given you an idea of some of the benefits object-based broadcasting could bring, even to traditional programme types. Over the next few months, we will build on this system and develop these ideas to allow more complex compositions to be created, particularly in the way these programmes are described and assembled in web browsers. See Matthew Shotton's blog post about our HTML5 video compositor for more detail.