IRFS Weeknotes #114
Week 114 started in full swing on our FI-Content project, with a look at the interaction models for authentication on TV that Theo and Joanne developed last week. The team assessed their feasibility and which ones would meet the deliverable requirements. In parallel, much attention went to the Chrome extension user trial, which ended on Monday, and to the best ways to extract its data. Joanne's been juggling the design of a data analysis methodology with wireframing the initial user experience for the authentication strand. Meanwhile, Barbara's been working with Chris Needham on actions arising from the plenary meeting he attended last week. It's going to be a busy summer on the FI front.
Several members of the team had their pitches accepted for the "build" phase of the Weather & Travel News Connected Studio, which took place on Tuesday and Wednesday at Mozilla's lovely London space. Olivier worked with the agency Kite on mixing Things To Do data with the Weather and relevant Travel News. Chris and I worked on our idea to enable schools to build connected weather stations. We built a prototype on Heroku showing how we'd get people to make simple observations, whilst Pete and Andrew fleshed out interfaces with more complex data and an achievements system.
Andrew and Chris at the Connected Build Studio
The central lab received many guests for the final review of the P2P-Next project. Dominic presented the research outputs from our work package, and Chris Needham presented and demonstrated LIMO.
Olivier and Tristan went to the Guardian's Activate London conference, nominally about "how technology can change the world and make it a better place". There they learnt about beautiful sewer systems and giant space tomatoes, that Wolfram Alpha is built on a grid of Mathematica instances, about a changelog for Polish laws, the Guardian Project's work to harden up citizen video, and how WordPress and Skype manage their work.
Yves had a telecon with the European Broadcasting Union to discuss a new proposed standard around conceptual modelling of broadcast-related data. He also presented some of the theory behind the speaker identification and segmentation work we've done on the World Service archive to the BBC's Machine Learning interest group. He started an automated tagging task, using all text available around programmes within the archive and a custom WikipediaMiner instance, using DBpedialite to match the resulting tags to DBpedia. Finally, he's written some code to make use of Ookaboo images which are now brightening up the Episode and Tag pages within our World Service prototype.
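The tagging flow Yves describes — mining candidate tags from the text around each programme, then resolving them to DBpedia resources — can be sketched roughly as below. This is a minimal illustration, not the actual pipeline: the `mine_tags` and `to_dbpedia_uri` functions are hypothetical stand-ins for the custom WikipediaMiner instance and the DBpedialite lookup.

```python
def mine_tags(programme_text, known_topics):
    """Stand-in for a WikipediaMiner pass: return topics found in the text.

    The real service does statistical disambiguation against Wikipedia;
    here we just do a naive case-insensitive substring match.
    """
    lowered = programme_text.lower()
    return [topic for topic in known_topics if topic.lower() in lowered]


def to_dbpedia_uri(tag):
    """Stand-in for a DBpedialite lookup: map a tag to a DBpedia resource URI."""
    return "http://dbpedia.org/resource/" + tag.replace(" ", "_")


def tag_programme(programme_text, known_topics):
    """Mine tags from programme text, then resolve each to a DBpedia URI."""
    return {tag: to_dbpedia_uri(tag) for tag in mine_tags(programme_text, known_topics)}


tags = tag_programme(
    "A documentary about the Berlin Wall and the Cold War.",
    ["Berlin Wall", "Cold War", "Apollo 11"],
)
# tags now maps each recognised topic to a DBpedia resource URI
```

The useful property this sketch captures is that tagging and entity resolution are separate steps: the miner proposes topics, and the DBpedia matching turns them into stable linked-data identifiers that the archive prototype can key pages on.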
Matt's been moving a massive amount (10TB) of image data from the Snippets data store to share with some partners working on image recognition trials.
Last but not least, Chris Newell's been exploring the feasibility of an experimental "Clip Finder" application, which would provide an easy and personalised way to browse the latest TV and radio clips available from the BBC. He's also been reading about Netflix's approach to recommendations and personalisation.