Posted by Jerry Kramskoy

It's been a long time since the last series of blogs on Orchestrated Media, so it's time for a catch-up. Firstly, we've stopped using the term orchestrated media, and instead talk about dual-screen and companion screen. Dual-screen reflects where things stand currently: the companion service can synchronise against the broadcast content using various technologies (see Steve's blog about that). The BBC's launch of dual screen for Antiques Roadshow is imminent.

Looking ahead, we see the next generation of services allowing a wider set of companion services, where the TV, the companion, and the Web are inter-communicating, allowing a web site or a companion app to both monitor and control the TV. This gives TV-awareness on web sites, and web-awareness of TV services. Each of these three domains could be the launch-point for companion screen services, and engage the other two domains as needed. Companion screen pertains to this wider role for the companion device, compared to today.

## Interaction layer APIs or something else?

We strongly believe that something else is needed for engaging companion experiences with the broadcaster's content, where the audience can socialise around the content and interact with it in a variety of ways.

Today, a broadcast channel may have an interactive TV application that runs when that channel is tuned to. Each broadcaster has to provide its own app if it wants interactivity. These days, the interaction layer that hosts the TV app also supports HTTP. This means that the TV app can contact a broadcaster's back-end service, and inform the companion app of the URI for that back end (or the URI can be pre-configured). Thus, while the TV app is running, the companion app can query the back-end service for which programme is being watched, and potentially the time offset within that programme, and then provide companion content as appropriate.
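To make that flow concrete, here is a minimal sketch of the companion app's side of such a query. The payload shape and field names are illustrative assumptions, not any real broadcaster API: we simply imagine the back end returning JSON describing what the TV app reports is on screen.

```python
import json

# Hypothetical response from a broadcaster's back-end service while the
# TV app is running. Endpoint and field names are assumptions for
# illustration only.
SAMPLE_RESPONSE = json.dumps({
    "programmeId": "b0074p4r",   # an illustrative programme identifier
    "channel": "BBC One",
    "offsetSeconds": 754         # current playback position in the programme
})

def parse_now_watching(body):
    """Extract the programme being watched and the time offset within it."""
    data = json.loads(body)
    return data["programmeId"], data["offsetSeconds"]

pid, offset = parse_now_watching(SAMPLE_RESPONSE)
print(pid, offset)
```

In practice the companion app would fetch this body over HTTP from the URI the TV app supplied, then use the offset to choose which companion content to show.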

This can serve the purpose of a broadcaster where there is a companion app dedicated to that programme or channel (ignoring issues around synchronisation latency). We can also imagine that various features within the TV could be exposed through an interactive TV app, with the back-end server allowing the companion app to access them. See the next figure.

information flows via back-end server


Even here though, this model breaks down when the broadcast content is re-distributed over cable or satellite platforms that use different interaction layers. For example, Freeview and Freesat use MHEG-5, whereas Sky and Virgin don't. This fragmentation is less than ideal.

This of course is where audio watermarking has its strength: being embedded in the AV content, no interaction layer is involved when the user participates in the experience, and the companion app can synchronise against the programme as long as the audio is audible.

Ignoring fragmentation, the fly in the ointment with the back-end server approach is that it only works while the TV app itself is running, and when the user changes channel, the outgoing channel's app stops running. This is bad news for various desirable companion screen use-cases.

Consider a companion app for socialising about the various programmes being watched during an evening. This probably involves channel changes. The above model now hits problems: somehow each broadcaster's TV app would have to agree on which back end to contact that the companion app knows about, or the companion app must deal with the additional complexity.

So we have to consider a different approach, one that doesn't rely on the TV app.

This suggests that the companion app needs access to the various TV features without involving the interaction layer or contacting back-end servers.

In other words, TV features need exposing over the home network, and of course we'd argue that this should be achieved in an open manner, using an open API standardised through open standards. See the next figure:

information flows over home network

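To sketch what exposing a TV feature over the home network might look like, here is a toy example in which a "TV" serves one read-only feature, its current viewing state, over HTTP, and a "companion app" in the same process queries it. The `/state` endpoint, its JSON fields, and the hard-coded discovery are all assumptions for illustration; a real open API would be defined by a standards body and include proper discovery.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative TV state; field names are made up for this sketch.
TV_STATE = {"channel": "BBC One", "programmeId": "b0074p4r", "offsetSeconds": 120}

class TvApi(BaseHTTPRequestHandler):
    """A toy home-network API exposing one TV feature: 'what am I showing?'"""
    def do_GET(self):
        if self.path == "/state":
            body = json.dumps(TV_STATE).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence request logging in the sketch
        pass

server = HTTPServer(("127.0.0.1", 0), TvApi)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The companion app's side: query the TV directly, no back end involved.
url = "http://127.0.0.1:%d/state" % server.server_port
state = json.loads(urllib.request.urlopen(url).read())
print(state["channel"], state["offsetSeconds"])
server.shutdown()
```

The point of the sketch is the shape of the interaction: the companion talks to the TV itself over the home network, so it keeps working across channel changes, independent of any broadcaster's TV app.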

Looking further ahead, this same approach can also be used to control the TV, for example to request that content be fetched and played from a particular time offset. From a social TV aspect, a viewer could publish a content-related URI to Twitter, say, and a recipient could use this link to watch the programme on a chosen device from the recommended timepoint. Again, with the interactive TV app approach, as long as apps are bound to channels, this would be an impossible scenario to achieve, yet it is something that can clearly make content consumption a much more seamless experience.
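A content-related URI of that kind only needs to carry two things: which programme, and where to start. Here is a minimal sketch; the host, path, and parameter names (`pid`, `t`) are invented for illustration, not a real scheme.

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Build and parse a shareable content URI carrying a recommended start
# point. All names here are illustrative assumptions.
def make_share_uri(programme_id, offset_seconds):
    query = urlencode({"pid": programme_id, "t": offset_seconds})
    return "https://example.org/watch?" + query

def parse_share_uri(uri):
    q = parse_qs(urlparse(uri).query)
    return q["pid"][0], int(q["t"][0])

uri = make_share_uri("b0074p4r", 754)
pid, offset = parse_share_uri(uri)
print(uri)
```

A recipient's device would resolve such a link, fetch the programme, and ask the TV (over the same home-network API) to start playback at the recommended offset.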

If such an open API existed, then tying together the Web, the TV and the companion services into a unified experience becomes entirely possible, and deeper audience engagement can occur.

Standards bodies are taking an interest in companion screen services, as the industry clearly perceives value to be extracted. To this end, we are contributing to DVB groups looking at companion screen from both commercial-requirements and technical viewpoints. We are also contributing to the DTG around accessibility and the connected home.

From a purely personal view, the sooner we get high-utility companion screen apps deployed alongside connected TV, the better. The experience of trying to engage with an on-demand service using the remote control and the on-TV virtual keyboard proved one step too far for my patience!

This post is part of the Broadcast and Connected Systems section