Posted by Kate Towsey
In October 2017, we started working on a project called Better Radio Experiences. The aim of the project is to explore unusual and novel directions for radio and audio-related research in BBC R&D. We wanted the project to be grounded in user research and involve multiple R&D teams. This blog post shares what we’ve learned and accomplished in the past six months.
We’re a small interdisciplinary team. As part of the project, we’ve been exploring how to include lightweight user research in a design project that is very future-focused. So, instead of discovering user needs and then trying to satisfy them, we’ve explored user needs and real-world scenarios, then tried to make innovative leaps from those.
We’ve aimed to make things that push perceptions of radio and sound in interesting directions, but which are still connected to current user experience, albeit sometimes loosely. We’ll use these things (lightweight prototypes) to connect with younger audiences and build more things based on their ideas.
Defining our research participants and plans
Our aim was to leapfrog off research insights, not necessarily build directly to them, so we kept our research plan focused and simple, and explored ‘the fringe’ rather than the norm.
We recruited ten young adults aged 18-21 (an under-served audience) involved in creative arts, particularly music or sound production. We intentionally wanted to engage with an edge-case demographic: young, creatively minded audio enthusiasts. We wanted to learn about their experiences, frustrations and innovations with sound and radio, and challenge our own perceptions so that we could more openly explore ideas for the future.
Our participants took part in a seven-day diary study, followed by one-on-one interviews and a one-day co-creation workshop. We were lucky enough to work on this with three designers who work at the intersection of robotics and puppeteering: David McGoran from Rusty Squid, Richard Sewell and Emma Powell, who helped us develop ideas and give them physical form. As a result of the insights gathered and the workshop, we’ve created five light prototypes to communicate potential, emergent ideas for future sound experiences.
New directions: lightweight prototypes
Many of our research insights mirrored the insights of larger scale research projects; this was validating (and interesting, considering our sample group was so small and biased). More importantly, though, the research provided countless ideas for things to make - things we might not have considered otherwise.
Over several months, we worked on digital prototypes using our platform for IP sound experiences, a Node application that uses Web technologies to provide the infrastructure to quickly build sound applications, which runs on a laptop or Raspberry Pi. Then we challenged the digital ideas further by exploring how they might take on a physical form. To do this, the team got together for a one-day maker workshop in London. This workshop was intentionally playful and experimental.
Here’s a bit about those prototypes.
Our research participants spoke about being frustrated with their phones and Bluetooth speakers not being able to respond to the changing noise of their context: blow drying my hair/not blow drying my hair; lots of people talking/people not talking; a pool party with my friends etc.
The Enloudinator changes volume in response to environmental sound. We used code developed by Matt Paradis and others in the Audio team to make this work. The code uses the Web Audio API and getUserMedia, all within the browser, to change the volume in response to background sound levels. During the maker workshop we used cheap paper fans and a plastic lid to explore visualising changes in sound level.
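The BBC's actual code isn't reproduced here, but the core of the idea can be sketched in the browser: sample background noise with getUserMedia, estimate its loudness, and map that onto a playback gain. The `levelToGain` curve and its floor/ceiling/boost values below are invented for illustration.

```javascript
// Sketch of the Enloudinator idea (not the BBC's actual code):
// raise playback gain as the room gets louder.

// Pure helper: map a background RMS level (0..1) to a playback gain,
// clamped between a floor and a ceiling. Values are illustrative.
function levelToGain(rms, { floor = 0.4, ceiling = 1.0, boost = 2.0 } = {}) {
  const gain = floor + rms * boost;
  return Math.min(ceiling, Math.max(floor, gain));
}

// Browser wiring (needs a user gesture and microphone permission):
async function startEnloudinator(audioElement) {
  const ctx = new AudioContext();
  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });

  // Analyse the microphone signal...
  const analyser = ctx.createAnalyser();
  ctx.createMediaStreamSource(mic).connect(analyser);

  // ...and route the programme audio through a gain node.
  const gainNode = ctx.createGain();
  ctx.createMediaElementSource(audioElement).connect(gainNode).connect(ctx.destination);

  const buf = new Float32Array(analyser.fftSize);
  setInterval(() => {
    analyser.getFloatTimeDomainData(buf);
    const rms = Math.sqrt(buf.reduce((s, x) => s + x * x, 0) / buf.length);
    gainNode.gain.value = levelToGain(rms);
  }, 200);
}
```

Keeping the level-to-gain mapping as a pure function makes it easy to tune (or to swap in a smoothed curve) without touching the audio graph.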
Our research, along with previous larger research studies, indicated that emotions and lyrics are an important feature for young people listening to music. Our participants used music to help them process emotions, reminisce, and either “chill” or “motivate” themselves. Lyrics help them connect to the music emotionally.
Emoji Parade builds on that (albeit more playfully) and associates emojis with the music, displaying them on a small screen in a box. Eventually, we’d like to make Emoji Parade into a small wearable that, perhaps, you could wear around your neck.
Emoji Parade was also inspired by an idea from Mike Armstrong about subtitles for radio and, in making it, we’ve used our Kaldi transcription tool, with help from Matt Haynes.
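At its simplest, the Emoji Parade idea can be sketched as a lookup from transcribed lyric words to emojis. The word-to-emoji table below is invented for the example; the real prototype drives this from the Kaldi transcription output.

```javascript
// Illustrative sketch of Emoji Parade: scan a transcript line for
// known words and emit matching emojis for display.
// The mapping table is invented for this example.
const EMOJI_MAP = {
  love: "❤️", cry: "😢", fire: "🔥", dance: "💃",
  happy: "😄", night: "🌙", money: "💰",
};

function emojisForLyric(line) {
  return line
    .toLowerCase()
    .split(/[^a-z]+/)      // crude tokenisation into words
    .map((word) => EMOJI_MAP[word])
    .filter(Boolean);      // drop words with no emoji
}
```

For example, `emojisForLyric("We dance all night")` yields a dancer and a moon for the small screen to parade.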
Our young adults tended to listen to radio only when their context forced it. Streaming services, and services like YouTube and Soundcloud mean that they’re used to having personalised music and playlists on tap, and (if they’ve paid for the service) ad- and chat-free. Skipping is an important part of their interactions with audio, whether they’re streaming or playing their own playlists.
In response, we built the Everything Avoider. We made the physical prototype using a spongy globe, a spring, and some cardboard. You can hit the globe to tell the application that you want to avoid something. Listen out for the voice that announces what it thinks you want to avoid.
Object-based media is an important area of research for BBC R&D. One of the most interesting things about conceptualising media as atomic objects is the different levels of abstraction that you can apply this to. An ‘object’ could be a segment containing a particular person's voice (COMMA can identify these); a song (the BBC's playout system keeps track of these); a segment of a programme; or an entire programme, and so on. Avoiding gives you the opportunity to skip a particular voice or all voices, or a particular song or all songs by that artist.
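One way to model this avoidance over object-based media is to tag each segment with metadata at several levels of abstraction and filter playback against a growing avoid list. The tag names below are invented for illustration, not from the actual prototype.

```javascript
// Sketch of object-based avoidance: a hit on the Avoider adds the
// current segment's tags to an avoid list; future segments matching
// any avoided tag are skipped. Tag vocabulary is hypothetical.
const avoided = new Set();

function avoid(segment) {
  // Avoid at every level of abstraction the segment exposes:
  // a voice, a song, an artist, a whole programme...
  for (const tag of segment.tags) avoided.add(tag);
}

function shouldPlay(segment) {
  return !segment.tags.some((tag) => avoided.has(tag));
}
```

Because segments carry tags at multiple granularities, hitting the globe on one presenter's voice can skip that voice wherever it reappears, while unrelated segments keep playing.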
We discovered that young adults often follow not just the music but also the life of the artists they’re interested in: fans engage with an artist’s life in-the-moment as they post on various social media channels. Young adults often discover music via friends, recommendation engines, or by listening to songs artists have posted or reposted (of other artists) on services such as YouTube or Soundcloud.
We explored producing an automatically generated radio show that would scour their feeds to bring them an audio version of the latest news and music posted by their favourite artists. We called the project ‘Peeps’, because this work was inspired by hip-hop artists like Lil Peep, Lil Vert, Lil Pump etc.
We’ve not finished the physical manifestation of Peeps as yet; the plan is to 'save' or ‘fill a bucket full’ of Peeps audio and then play it back as a show on-demand when the ‘bucket is full’. For now, here’s an audio sample.
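The bucket behaviour described above can be sketched as a small accumulator: collect clips as artists post them, and release the lot as a show once their total duration crosses a target length. The target duration and clip shape here are assumptions for the example.

```javascript
// Sketch of the Peeps 'bucket': add clips until the target show
// length is reached, then drain them as an on-demand show.
// Clip shape ({ seconds }) and target length are hypothetical.
function makeBucket(targetSeconds = 600) {
  const clips = [];
  return {
    // Returns true once the bucket is full enough to play as a show.
    add(clip) {
      clips.push(clip);
      return clips.reduce((s, c) => s + c.seconds, 0) >= targetSeconds;
    },
    // Empties the bucket, returning the show in posting order.
    drain() {
      return clips.splice(0, clips.length);
    },
  };
}
```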
Lastly, we made a simplified audio version of CAKE, BBC R&D's object-based Cook Along Kitchen Experience (there is now a follow-up, the wonderful make-along Origami experience). We used the audio - kindly provided by Jasmine Cox - to explore an interesting idea inspired by our research finding that young people want to listen to their own music.
Since CAKE is a piece of timed audio about making a recipe, we can play it to you when you need to do an action ("take the fish out of the pan"), and play your own music the rest of the time. If CAKE were broadcast, listening simultaneously with others would become an intermittent experience. You can think of it as like the RDS travel news 'TA flag' that breaks through a radio show to tell you time-sensitive traffic information, but with your own music as the default rather than radio.
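The break-through behaviour can be sketched as a simple scheduler: given a list of timed recipe cues, pick which source is audible at any moment, with your own music as the default. The cue times and texts below are invented for the example.

```javascript
// Sketch of the CAKE break-through idea: timed cues interrupt the
// listener's own music. Cue list is illustrative, not from CAKE.
const cues = [
  { start: 120, end: 135, text: "take the fish out of the pan" },
  { start: 300, end: 312, text: "stir the sauce" },
];

// Which source should be audible at time t (seconds)?
function activeSource(t) {
  const cue = cues.find((c) => t >= c.start && t < c.end);
  return cue ? { source: "cake", text: cue.text } : { source: "music" };
}
```

Run against a playback clock, this yields exactly the intermittent experience described: music by default, with the cook-along audio breaking through only while a cue is live.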
The effect is very striking. You can imagine a ‘Crafting Hour’ implemented like this; or intermittent news-only radio; or a version of our Peeps prototype with real-time social media announcements as audio, akin to product designers Buckley-Williams' Bulletin but for a shared audience of Peeps-enthusiasts.
The next stage of the project is a hackday with members of BBC R&D, further exploring how ideas and technologies being developed within R&D might be adapted to the needs and experiences of young adults. We'll be using our platform for IP audio experiences to quickly develop radio-apps that can run on a laptop, or just as easily on a Raspberry Pi to make a physical device.
This week @BBCRD we built some fun cardboard radio interfaces to trial many ideas quickly. This is my team's Moodulator: A single dial to tune into any mood within your music collection. pic.twitter.com/BlrW3QCsVL— Kristian Hentschel (@kristianthorin) May 5, 2018
Making physical objects means that the radio apps can have physical interactions and reactions, for example touch, gesture, voice, lights, and movement. We use this to exaggerate and solidify features, making the ideas clearer and more appealing.
We plan to use the results of our work and the hackday to do evaluation and co-creation sessions with young people from UTC College in Manchester and Knowle West Media Centre in Bristol, then feed back the results of these sessions to our colleagues in R&D and further afield.
What an awesome day working with @BBCRD. Our students had the chance to demo eight state of the art prototypes and give their feedback on each one. This feedback will be used to develop new radio experiences for young people. #focusgroup #creativity #feedback pic.twitter.com/IKr1rNl05m— UTC@MediaCityUK (@UTCMediaCityUK) May 9, 2018
This post is part of the Internet Research and Future Services section