Prototyping Weeknotes #98
This week's weeknotes are brought to you by the letter 'R' and the sound of the new Magnetic Fields LP, and the highlighted project is 'Roar to Explore'.
Roar to Explore
Vicky S says "This still-work-in-progress project uses voice search as a way for kids to independently explore content and learn about animals by making their sounds. It also allows us to explore interesting new UI approaches which we think aren't currently being examined by products available on the market today.
Where this came from
This interesting use case for CBeebies came from the Future Media UX&D team. It’s a search tool that allows young children to make the sound of an animal to get content about that animal. For instance, to get lions, roar like a lion.
R&D got involved to help develop the thinking.
Why we think this is interesting
Young children find it difficult to use a keyboard and mouse, so they can't easily use a search box, and browsing doesn't scale.
We would like to understand if voice search could be a useful navigation tool for kids, and could help them explore content independently.
There is a lot of research about voice and speech recognition but not involving kids expressing themselves with sounds, which also makes this work interesting.
What we’ve done so far
Feasibility study & demo: R&D explored the technical feasibility. Chris P, Chris L & Yves built an audio classification system using sample data of 30 children making 12 different animal sounds, to see if a computer could be trained to accurately classify the sounds into the 12 classes. They found that:
· It is possible to train a classifier to detect animal noises.
· Real-world performance of the system is considerably worse than in the lab.
· Careful choice of algorithms is necessary.
· Websites do not have access to the host computer's microphone by default.
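The team's actual classifier isn't shown in these notes, but the general shape of such a pipeline can be sketched: frame the audio, extract spectral features, and assign each clip to the nearest class centroid. The sketch below is a minimal illustration under assumed parameters (16 kHz audio, hand-rolled log-spectrum features standing in for real MFCCs, two classes instead of twelve); it is not the system Chris P, Chris L and Yves built.

```python
import numpy as np

def spectral_features(signal, frame=400, hop=160):
    """Frame the signal and average its log-magnitude spectra.
    A crude stand-in for MFCC extraction, for illustration only."""
    frames = [signal[i:i + frame] * np.hanning(frame)
              for i in range(0, len(signal) - frame, hop)]
    spectra = [np.log1p(np.abs(np.fft.rfft(f))) for f in frames]
    return np.mean(spectra, axis=0)

class NearestCentroid:
    """Minimal classifier: one mean feature vector per animal class."""
    def fit(self, X, y):
        self.labels = sorted(set(y))
        self.centroids = {c: np.mean([x for x, label in zip(X, y) if label == c], axis=0)
                          for c in self.labels}
        return self

    def predict(self, x):
        # Pick the class whose centroid is closest in feature space.
        return min(self.labels, key=lambda c: np.linalg.norm(x - self.centroids[c]))
```

A low-pitched tone versus a high-pitched tone (as proxies for, say, a roar versus a squeak) is enough to exercise the pipeline end to end; the real challenge the team found, of course, is that recorded children in real rooms behave far less cleanly than this.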
UX research: As our classifier demo isn’t robust enough to test with kids and the aims of the engineering research are different from the user experience research, we have split UX and engineering into two research tracks. For the UX research we are developing ideas for an experience prototype, which aims to test the concept and gain a better understanding of the behaviours and needs of our audience.
We began with some background research relating to information retrieval and behaviour in young children, and found the following:
· Young children are easily bored, emotionally driven, drawn to accessible information they understand or recognise, and find it hard to switch attention between two places (e.g. looking at the screen and looking at the keyboard).
· They find metaphorical and highly textual interfaces very difficult.
· Formulating a search query is difficult for children, because they have little knowledge to ‘recall’ concepts or terms from their long-term memory.
· Young children tend not to plan out their searches but rather react to the results they receive from the IR system; their behaviour is a conversation. They are easily distracted, and their actions are reactions to the information and interface.
· Generally, their search strategies are not analytical and do not aim precisely at one goal. Instead, they make associations while browsing. This is a trial-and-error strategy.
· They find the home page comforting to start a new journey from – once confidence and familiarity is built with the page.
We did some sketching and thinking about how to develop and evaluate an experience prototype, thinking in particular about three things: initialisation of the task, the type of primary feedback, and aiding further exploration.
We ran a quick pilot study with four- and five-year-olds to test a prototype approach and evaluation method, and to answer some initial questions. Can they make the sound of the animal they want to see from a selection on screen? Do they know (or think) they are controlling the device with their voice? Do they want to explore further after the first go, or does their attention start to wander?
Working with small groups of 2 or 3 children at a time, we showed the children a selection of 4 animals on a TV monitor. Three were recognisable CBeebies characters and one was a generic animal from a live action natural history programme. We asked the children to make the sound of the animal they liked best/wanted to pick.
We learned that kids interpret the animal sounds quite differently. This has an impact on how the system should be designed: it should give control to the user to make their own interpretations of sounds, rather than making them learn the 'correct' generic sound. That way, the kids can train the system to work for them. We also observed that kids touch the screen intuitively – even if the device they are using doesn't afford touch (I was using a hybrid TV/computer monitor). Finally, the prototype needs to be very convincing for this audience. They will not suspend disbelief, and they are curious about how things work.
In my test, I initiated the conversation with the children and gave them instructions. For the next iteration of our experience prototype, we would like to try different ways of initialising the task with children, to see which is most successful: asking the parent to direct the child, using an avatar that initiates a conversation and asks the child to mimic it, or using a prop that affords sound input, such as a microphone.
The aim of our research is to understand the following:
· How can we encourage children to use their voice as a controller?
· Do children understand cause and effect (that their input has a direct effect on the results)?
· Where is voice good, or better than other forms of input such as touch or gesture?
Technical considerations & recommendations
Further engineering work that would need to be done beyond the initial feasibility study:
· Single user classifier
· Improve acoustic model
· Study the impact of recording equipment on the results"
In other project news:
This week I have been primarily working on debugging the social bookmarks system (i.e. Twitter harvesting and /programmes correlation) and making it sustainable. Whilst looking at throughput numbers, I've noticed that there are 100,000 tweets per day for BBC national TV and radio stations, and 75,000 of those correlate back to specific times in specific programmes.
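The harvesting and correlation code itself isn't shown in these notes; as a hedged sketch of the correlation step only, a tweet about a service can be mapped to whichever programme was on air at the tweet's timestamp, giving both a programme and an offset into the broadcast. The schedule entries and programme ids below are made up for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical broadcast schedule: (service, start, end, programme id).
SCHEDULE = [
    ("bbcone", datetime(2012, 3, 2, 20, 0), datetime(2012, 3, 2, 21, 0), "b00x1234"),
    ("bbcone", datetime(2012, 3, 2, 21, 0), datetime(2012, 3, 2, 22, 0), "b00x5678"),
]

def correlate(service, tweeted_at, schedule):
    """Map a tweet about a service to the programme on air at that moment,
    returning the programme id and the offset into the broadcast, or None
    if the tweet falls outside the schedule window."""
    for svc, start, end, pid in schedule:
        if svc == service and start <= tweeted_at < end:
            return pid, tweeted_at - start
    return None
```

Bucketing a day's worth of tweets through a function like this is what produces numbers such as "75,000 of 100,000 tweets correlate back to specific times in specific programmes".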
Attended an HTML5 event at Google on Thursday, learning about WebRTC (http://en.wikipedia.org/wiki/WebRTC), WebGL (http://en.wikipedia.org/wiki/WebGL), and Web Intents (http://en.wikipedia.org/wiki/Web_Intents).
Prepared a presentation for the FI-Content plenary meeting that we are hosting.
Web Audio standards: the W3C working group has been very active this week, talking about expanding its scope to look at standardising MIDI in the browser, and exchanging thoughts and feedback about the specs. Meanwhile, our project to start a BBC-centric prototype which will test and stretch the APIs got the green light, and we (ChrisL, Matt and I) will be starting work in earnest next week.
This week I've been continuing to analyse the data from the NoTube 'Social Web and TV' survey, and using the initial findings to structure a user research workshop which will explore various themes around NoTube's Beancounter user profiling service (http://notube.tv/category/beancounter/) in more depth. The survey is still open if you'd like to be part of our research: http://svy.mk/zGjgfA
This week Andrew's been continuing work on the FI-Content dashboard, including trying to diagnose a problem that was causing it to crash Safari. It turned out to be due to the BBC network, which made him sad. At Google's HTML5 day, Web Intents (http://webintents.org/) looked very promising as a mechanism for decoupling common web actions, e.g. "share this page", from the web service that performs them. If it takes off, it's another step towards a more flexible web of loosely connected 'apps'. Interesting link: "The Web Is a Customer Service Medium" http://www.ftrain.com/wwic.html
A thought-provoking essay that dismisses the notion of the web as a meta-medium mimicking TV, radio and print, and asks "what is the question that the web as a medium is answering?"
Last Wednesday through Friday, I was helping oversee the Fusion Trainee Lab 2012 at BBC Academy: a brief was set on Wednesday and presented back on Friday afternoon to the commissioners. As a mentor, I encouraged the teams to focus their thinking, gave them tips on developing clear solutions and showed them ways to communicate their propositions simply and clearly.
FI Content: This week I have been guiding the development of the Dashboard, helping clarify the script for the lab study sessions next week and designing TV interface mockups to show the participants.
We ended our third work package this week, so we spent a bit of time writing deliverables and preparing for the review meeting, which went very well. I have been working this week on a quick prototype to expose all the data we have generated within ABC-IP so far, which will hopefully become the basis for the UX work we are doing right now. The prototype is now mostly done, using Rails on top of a triple store accessed via SPARQL, and using RedStore for tests. I packaged all I needed for that prototype in my PPA (https://launchpad.net/~yves-raimond/+archive/ppa).
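The prototype itself is in Rails, but the SPARQL plumbing is language-independent, so here is a minimal Python illustration of the two halves of such a client: building an endpoint request URL, and flattening the standard SPARQL JSON results format into plain dictionaries. The endpoint URL, query and bindings shown are hypothetical, not actual ABC-IP data.

```python
import json
from urllib.parse import urlencode

def build_query_url(endpoint, query):
    """Build a GET URL for a SPARQL endpoint, requesting JSON results."""
    params = {"query": query, "format": "application/sparql-results+json"}
    return endpoint + "?" + urlencode(params)

def bindings(results_json):
    """Flatten SPARQL JSON results into a list of {variable: value} dicts."""
    doc = json.loads(results_json)
    return [{var: cell["value"] for var, cell in row.items()}
            for row in doc["results"]["bindings"]]
```

In the real prototype the query travels to a triple store such as RedStore; here the parsing half can be exercised against a canned results payload.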
Meeting with I&A people to talk them through the work we've done so far. Lots of review deadlines this week, so I spent some time reviewing papers for various workshops/conferences. Lots of very high quality papers in there - I hope they get through! Giving a Prolog crash-course to the engineering team today - should be fun!
This week Chris has been continuing work on the automatic segmentation investigation. The approach he is taking is to determine regions of similarity within a piece of audio using the C99 algorithm (http://dl.acm.org/citation.cfm?id=974309) and then evaluate the performance of the algorithm using the WindowDiff metric (http://www.mitpressjournals.org/doi/abs/10.1162/089120102317341756).
Initial results are not encouraging with the C99 algorithm positing too many segments when compared to the ground truth data. He's been looking at collapsing neighbouring segments if they are shorter than a certain threshold to see if this improves the performance.
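For reference, WindowDiff slides a window of width k over the reference and hypothesis boundary sequences and reports the fraction of windows in which the two disagree on the number of boundaries; lower is better, and over-segmentation of the kind Chris is seeing pushes the score up. Below is a minimal sketch of the metric together with the "collapse neighbouring segments shorter than a threshold" post-processing described above; function names, the boundary-string encoding and the thresholds are illustrative, not Chris's actual code.

```python
def windowdiff(ref, hyp, k):
    """WindowDiff (Pevzner & Hearst, 2002): fraction of width-k windows
    in which reference and hypothesis disagree on the number of boundaries.
    ref and hyp are equal-length strings of '0'/'1' boundary indicators."""
    assert len(ref) == len(hyp)
    n = len(ref)
    errors = sum(ref[i:i + k].count("1") != hyp[i:i + k].count("1")
                 for i in range(n - k + 1))
    return errors / (n - k + 1)

def merge_short_segments(boundaries, min_len):
    """Drop boundary positions that would create a segment shorter than
    min_len, collapsing it into its neighbour (the post-processing idea
    for an over-segmenting C99 output)."""
    kept, last = [], 0
    for b in boundaries:
        if b - last >= min_len:
            kept.append(b)
            last = b
    return kept
```

k is conventionally set to half the average reference segment length, so the metric penalises both missed and spurious boundaries within roughly one segment's reach.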
Next week we are carrying out user evaluations on attitudes to data privacy with 12 participants, 6 in our London lab, and 6 in our Salford lab. This week we have been preparing the final structure of the sessions, and putting together the materials that the participants will use.