Archives for April 2010
When is a dataset not a dataset? How many of the now 3241 datasets listed as part of data.gov.uk are easy to open up and play with? How many are tables for computers to analyse, instead of PDF reports for people to read?
The Hacks and Hackers Hackday filled a Channel 4 office with journalists and developers on the final Friday in January. Our aim was to tell new stories with open data. Attendees already had form: the BBC's Open Secrets blogger Martin Rosenbaum, and data journalism teams from the Times and the Guardian, among others.
Tom Morris was part of a team that looked into the quality of data.gov.uk. Although data.gov.uk advertises itself as a database of open datasets, many of the entries are actually PDF files. He built a prototype format checker that invites people to go through datasets and record the file format. You can listen to him explaining the checker to me and to the hackday, or reuse the interview under the BBC Backstage License.
On Wednesday February 3rd he put the completed quality checker online; by Thursday the crowd had gone through data.gov.uk and marked up all of the datasets.
Tom posted his initial breakdown to the data.gov.uk community on March 20th:
Sadly, this is over-optimistic. I've manually checked some of the data that has been categorised as JSON and RDF. Most of it is not actually correctly categorised - either people clicked, say, 'RDF' when they meant to click 'PDF', or they have seen an RSS or Atom feed and categorised it as RDF. What this admittedly imperfect dataset is basically saying is that the vast majority of the 'data' on data.gov.uk is not actually machine-readable data but human-readable documents.
Multiple formats - 1211
PDF - 468
Excel - 408
HTML - 252
Nothing there! - 190
Something odd - 85
CSV - 12
RDF - 10
JSON - 9
XML - 5
Word - 4
RTF - 1
OpenOffice - 1
TOTAL - 2656
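Taking Tom's tally at face value, a quick back-of-the-envelope check (a sketch of our own, not part of his analysis) shows just how small the machine-readable share is. The choice of which formats count as "machine-readable" is our assumption, and deliberately generous given Tom's caveat about miscategorised JSON and RDF:

```javascript
// Tom's crowd-sourced tally of data.gov.uk entry formats (see above).
const tally = {
  "HTML": 252, "XML": 5, "Word": 4, "RTF": 1, "OpenOffice": 1,
  "Something odd": 85, "JSON": 9, "Nothing there!": 190, "CSV": 12,
  "Multiple formats": 1211, "PDF": 468, "RDF": 10, "Excel": 408
};

// Formats a program can parse directly; we ignore the ambiguous
// "Multiple formats" bucket, so this is an upper-ish bound on the
// clearly machine-readable entries.
const machineReadable = ["XML", "JSON", "CSV", "RDF", "Excel"];

const total = Object.values(tally).reduce((a, b) => a + b, 0);
const readable = machineReadable.reduce((a, f) => a + tally[f], 0);

console.log(total);                                     // 2656
console.log((100 * readable / total).toFixed(1) + "%"); // 16.7%
```

Even before correcting for mis-clicked categories, barely a sixth of the checked entries are clearly data rather than documents.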
He will be at the Open Knowledge Conference this weekend, where he will speak about Citizendium and may get on to the full analysis, which he told me was the most important part. It will make very interesting reading once it's done.
Backstage is approaching five years old, believe it or not. To celebrate, I have asked social technologist Suw Charman-Anderson of the popular Strange Attractor blog to put together a retrospective. Don't worry, there will be data and mashups, but we also want you all to share your stories and memories of the last five years.
So the first project is image-based: we are looking for your favourite photos and images of Backstage and the stories behind them. The image might be a photo from a Backstage event you really enjoyed, a screenshot of a prototype you developed, or a visualisation of BBC data you put together. We don't mind what type of image it is, as long as it's online and you can tell us a bit about it.
The second project is map-based: we'd like you to tell us what your favourite experiences of Backstage were. Perhaps a prototype you put together, an event you went to, or something else entirely. We'd also like to know where you are based (at whatever level of detail you feel comfortable with) so that we can see how far Backstage reached. When Backstage first launched it was aimed mainly at the UK, but its international uptake was overwhelming, so it would be great to see just how far it really spread.
Both mash-ups are based on Google Docs, so the two forms are embedded in the page after the last link, or you can go straight to the pages directly: Mapping Your BBC Backstage Memories or Images of BBC Backstage. In either case, if you add info to the spreadsheets we take that to mean you're happy for us to reuse your contribution.
Our friends at Rewired State recently held a hackday at which Ben Griffins created a Greasemonkey script which:
- Publishes links to relevant data.gov.uk datasets next to news articles on the BBC website.
- Provides important context for those articles and increased visibility for the datasets.
- Is implemented as a simple Greasemonkey Firefox script connecting to a simple search service built with Google's AJAX Search API.

Not content with that, Ben is already thinking about packaging it as a Firefox toolbar rather than a Greasemonkey plug-in, moving away from reliance on Google's search APIs and, of course, supporting more websites. There's also the potential to add crowd-sourced citations too.
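To give a flavour of the approach, here is a minimal sketch of the idea, not Ben's actual code: take the article headline, turn it into a site-restricted query against Google's AJAX Search API, and inject the results as links. The function name and selector are our own assumptions:

```javascript
// ==UserScript==
// @name     data.gov.uk dataset links (sketch)
// @include  http://news.bbc.co.uk/*
// ==/UserScript==
//
// Sketch only: build a Google AJAX Search API query restricted to
// data.gov.uk from an article headline.
function datasetSearchUrl(headline) {
  const q = encodeURIComponent("site:data.gov.uk " + headline.trim());
  return "https://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=" + q;
}

// In the running script, something like this would fire on page load:
//   const headline = document.querySelector("h1").textContent;
//   fetch(datasetSearchUrl(headline))
//     .then(r => r.json())
//     .then(d => { /* append d.responseData.results as <a> links */ });
```

The site: restriction does the heavy lifting: the general-purpose search index becomes, in effect, a free dataset-matching service.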
There is something amazing about looking at stacks of data over a period of time, and BBC Archiver does exactly that. Some of you might even remember something like it called the BBC home page archive, but James Holden's latest project snapshots the whole page and lets you view the changes in an animated way.
The News only version is here and the homepage is here.
James explains how it works:

The project is running on a C# app I wrote to correctly screen capture the page (harder than you'd have thought) and then, using a local webserver, it FTPs (via PHP) the resulting 3 images (thumb, medium and large) to the live server. The comparison tool (a link at the bottom of each image which is easily missed at the moment) runs on the live server to compare the visual changes, written in PHP/GD. Obviously I haven't spent any time on the front-end site so that would be the next logical step.

In my head I'd see this as being the ultimate tool for archive.org. If you go "way back" using their tool you can see that resources are missing, and indeed as browsers change rapidly the result you see in newer browsers doesn't represent the look and feel the user got at the time, which is an important point if you're trying to look back at the way it was. Loading Netscape.com in Mosaic back in the mid 90's would have been an altogether different experience than in today's Chromes and Firefoxes.

A fantastic prototype, which we hope he can keep running for a long time to come.
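The visual-comparison step James describes is easy to picture. Here is a sketch of the core idea in JavaScript (his real tool is PHP/GD, so none of this is his code): given two same-sized captures as flat RGB arrays, report what fraction of pixels changed, with a tolerance to absorb compression noise:

```javascript
// Sketch of a visual diff between two page captures. Bitmaps are flat
// arrays [r,g,b, r,g,b, ...] of equal length; `tolerance` absorbs JPEG
// noise between two captures of an unchanged page.
function changedFraction(a, b, tolerance = 10) {
  if (a.length !== b.length) throw new Error("bitmaps must match in size");
  let changed = 0;
  const pixels = a.length / 3;
  for (let i = 0; i < a.length; i += 3) {
    // A pixel counts as changed if any channel moved more than `tolerance`.
    if (Math.abs(a[i]     - b[i])     > tolerance ||
        Math.abs(a[i + 1] - b[i + 1]) > tolerance ||
        Math.abs(a[i + 2] - b[i + 2]) > tolerance) {
      changed++;
    }
  }
  return changed / pixels;
}

// Two 2-pixel bitmaps: first pixel identical, second clearly different.
console.log(changedFraction([255, 255, 255, 0, 0, 0],
                            [255, 255, 255, 200, 0, 0])); // 0.5
```

Run over consecutive snapshots, a score like this is enough to decide which frames are worth animating and which days the page barely moved.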