Helping machines play with programmes
As part of our work on developing BBC Programmes we have been looking at how we can make the data available to development teams outside the BBC. At last week's XTech, Nick and I presented a paper outlining our work to date and some of our future plans.
We have been following the Linked Data approach - thinking of URIs as more than just locations for documents, and instead using them to identify anything from a particular person to a particular programme. These resources in turn have representations, which can be machine-processable (through RDF, Microformats, RDFa, etc.), and those representations can hold links to further web resources, allowing agents to jump from one dataset to another.
To date our work on Programmes has focused on providing persistent URLs to HTML documents for our primary objects: episodes, series and programme brands. However, we have also been looking at how we can make this data available to machines. So what does that look like?
For starters, the HTML is marked up with hCalendar and hCard microformats to help machines identify schedule and cast-and-crew information. Semantic Web with a small 's', if you will. More interesting, though, is our work on alternate data serializations - big 'S' Semantic Web.
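To give a feel for why microformats help machines, here's a minimal Python sketch that pulls hCalendar events out of a schedule page. The class names (vevent, summary, dtstart) are the microformat's; the programme data in the fragment is invented for illustration, and a real consumer would use a proper microformats parser rather than this stripped-down one.

```python
from html.parser import HTMLParser

# Hypothetical fragment of a schedule page marked up with hCalendar.
HTML = """
<div class="vevent">
  <span class="summary">The Archers</span>
  <abbr class="dtstart" title="2008-05-19T19:00:00">7.00pm</abbr>
</div>
"""

class HCalendarParser(HTMLParser):
    """Collects {summary, dtstart} dicts from hCalendar-marked HTML."""
    def __init__(self):
        super().__init__()
        self.events = []
        self._current = None   # event being built
        self._field = None     # microformat class of the open element

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        classes = attrs.get("class", "").split()
        if "vevent" in classes:
            self._current = {}
        elif self._current is not None:
            if "summary" in classes:
                self._field = "summary"
            elif "dtstart" in classes:
                # hCalendar keeps the machine-readable date in @title
                self._current["dtstart"] = attrs.get("title")

    def handle_data(self, data):
        if self._field and self._current is not None:
            self._current[self._field] = data.strip()
            self._field = None

    def handle_endtag(self, tag):
        if tag == "div" and self._current is not None:
            self.events.append(self._current)
            self._current = None

parser = HCalendarParser()
parser.feed(HTML)
print(parser.events)
```

The same page a person reads as a schedule is, thanks to the markup, also a small machine-readable calendar.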
As discussed in our presentation, we have developed an ontology to describe programmes, and we are now working to make the data available in a variety of formats: XML, Atom, RSS 2, JSON, YAML and RDF. Currently, however, we only have XML, YAML and JSON views for schedules, i.e. the following URLs:
To access these, add .xml, .yaml or .json to the end of the URL. For example, the XML serialization of the Radio 1 schedule is:
The JSON serialization for today's Radio 4 schedule is:
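As a sketch of how a client might consume the JSON view: the snippet below builds a format-suffixed URL and picks the episode titles out of a schedule document. The JSON shown here is a cut-down, hypothetical example - the real field names and structure may well differ.

```python
import json

def schedule_url(base, fmt):
    """Append a format suffix (e.g. 'json', 'xml', 'yaml') to a schedule URL."""
    return "%s.%s" % (base, fmt)

# Hypothetical schedule JSON; illustrative structure only.
SCHEDULE_JSON = """
{
  "schedule": {
    "service": "Radio 4",
    "broadcasts": [
      {"start": "2008-05-19T09:00:00", "episode": {"title": "In Our Time"}},
      {"start": "2008-05-19T09:45:00", "episode": {"title": "Book of the Week"}}
    ]
  }
}
"""

schedule = json.loads(SCHEDULE_JSON)["schedule"]
titles = [b["episode"]["title"] for b in schedule["broadcasts"]]
print(titles)
```

In practice you would fetch the document over HTTP from the suffixed URL and feed the response body to json.loads in exactly the same way.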
We've also done some work on the RDF representation. This isn't live yet but can be accessed on a development server at http://bbc-programmes.dyndns.org. We would love to hear what you think about what we've done before we make the service live.
The first message displays the metadata coming over XMPP.
The second message grabs the brand's short synopsis from the BBC Programmes RDF.
The third message grabs the radio network's first air date from DBpedia.
What's also nice about this is the additional data being pulled in from DBpedia. The information about when the service started broadcasting doesn't come from a BBC database - it comes from DBpedia. Because the Programmes Ontology links the Service to its DBpedia resource, we can pull in additional data from an external source.
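This linking step can be sketched in a few lines. The RDF/XML below is hypothetical - the actual Programmes Ontology terms differ - but it shows the pattern: the service description carries a link (here owl:sameAs) to a DBpedia resource, and an agent simply reads that URI and dereferences it to pick up facts like the first air date from DBpedia rather than from us.

```python
import xml.etree.ElementTree as ET

# Hypothetical RDF/XML for a service description; illustrative only.
RDF_XML = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:owl="http://www.w3.org/2002/07/owl#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">
  <rdf:Description rdf:about="http://www.bbc.co.uk/radio4#service">
    <rdfs:label>BBC Radio 4</rdfs:label>
    <owl:sameAs rdf:resource="http://dbpedia.org/resource/BBC_Radio_4"/>
  </rdf:Description>
</rdf:RDF>
"""

NS = {
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "owl": "http://www.w3.org/2002/07/owl#",
}
RDF = NS["rdf"]

root = ET.fromstring(RDF_XML)
desc = root.find("rdf:Description", NS)
# The object of owl:sameAs is the DBpedia URI an agent would fetch next.
same_as = desc.find("owl:sameAs", NS).get("{%s}resource" % RDF)
print(same_as)
```

A real client would use an RDF library rather than raw XML parsing, but the point stands: one triple is enough to let an agent hop from our dataset into DBpedia's.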
Anyway I hope you like what we've done - but in any case all comments are most welcome.
Tom has also written a review of XTech on the BBC Internet blog.