Posted by Kate Towsey on
I’ve just finished a two-year run working as a user researcher for BBC R&D. This was my first stint working in an R&D environment, and the experience gave me much to think about. But first, a little necessary background.
Both prior to and during my time at BBC R&D, I spent three years working with user researchers at the UK’s Government Digital Service (GDS). GDS focuses on delivering digital government services that are highly usable by broad audiences, but not necessarily innovative. Their mission is to deliver services that work right now for thousands of people. I learnt most of what I know about user research at GDS; it was a masterclass in agile design research.
The user research methodology and experience I brought to BBC R&D was inflexibly GDS in style. I did a Discovery and learned about users and their needs; I tested a prototype that had already been built with a group of users; and, as a result, we learned more about our users and what worked and what didn’t. I tried to squeeze the developers, designers and data scientists I was working with into two-week cycles of design iterations and user research. This works very well at GDS, but it didn’t work so well at BBC R&D.
I’ve always tended towards being puritanical, and my first reaction was that we simply weren’t doing ‘proper agile’. We were doing two-week sprints and reviews etc., but it wasn’t being driven by user needs and user research. In my mind, user research was the heartbeat of the sprint. It was the thing that drove what was on the Trello board and what should be achieved. I was frustrated. How could I do good user research if I couldn’t deliver regular iterations to users and therefore deliver regular insights to the team? Why were so many of the user needs being overlooked (even if appreciated), while development and design time was being dedicated to things that users didn’t necessarily need?
When I arrived at BBC R&D, the team were experimenting with how to use algorithms to automatically extract editorial metadata about web content with the aim of making it easier to find the right content for the right audience. (I do realise that I’ve just described the holy grail of the web.) We were working with several partners across the BBC who might find the technology useful. One of those partners was a group of researchers and journalists whose job it is to monitor large amounts of news content about particular people, places and things. If we could make it easier for them to see only content that was relevant to them, we would be doing a good thing.
As already mentioned, we had a prototype and I used it to learn more about our users and what worked and what didn’t. The prototype wasn’t built to satisfy users’ needs, however; it had been built long before user research came on board, as a means of demonstrating the experimental features we were working on. Still, it somewhat matched the needs of the journalists and researchers, and we used it as a starting point. This worked well.
Users needed what our core technology could offer, but we learned through research that the prototype didn’t deliver it in a way that suited them. To be more specific, our users were professional scanners and needed a more compact UI, and they needed to be able to bookmark, search for items, and filter items by time, amongst other things. This is where my service-driven user research style butted heads with R&D.
"In R&D there are any number of rabbit holes one could go down. Everything can be challenged and changed. And that’s not something you can do in two-week sprints."
Designing and implementing a more compact UI, bookmarks and a recognisable search mechanism isn’t necessarily going to advance the technology. Let me expand on that, because it’s not entirely true. The needs and activities around bookmarking could be broken down and reinvented, and the need for a more compact UI might inform a completely different way of presenting news feeds. However, as I have learned, in R&D there are any number of rabbit holes one could go down. Everything can be challenged and changed. And that’s not something you can do in two-week sprints. Besides, if an R&D team takes on every user need as a challenge to be experimental, it’ll be chasing its tail till the Singularity becomes an actual thing.
There’s a more practical point too. Things like a compact UI and bookmarking need design, development and research time, not to mention project management time. As almost everyone reading this will know, you don’t just do a bit of design and delivery and all is well. Making something that works takes a lot of time and resources. If a small R&D team gets involved in developing a product for a group of users, it may easily get sidetracked and no longer have time or brain-space for being experimental - for being R&D. It’s also unlikely that a funding body will applaud an R&D team that’s used funds to build things that the world already knows well.
It took me a while to wake up to all of this. I had ‘methodology blinkers’ on. I was so focused on trying to make the project fit into the research methodology I knew, I couldn’t see what we were trying to do.
Once you realise something, it seems obvious. R&D’s KPI is not about delivering something useful or delightful to a group of users; it’s about developing existing technologies and inventing new ones that might solve perceived future challenges, and possibly problems of the here-and-now. Even more, R&D teams, in my experience, have much more open space to work in. Although you’re looking to achieve new insights and create interesting and exciting things, you don’t necessarily need to know exactly who those things will help in the future, or how. You’re in an experimental and future-thinking space, as opposed to a delivery, service-driven one.
Once I understood this, I eased off on my quest to make something that works for users right now, and started to rethink my approach to user research in the context of R&D. I’ve not yet consolidated my thoughts - I need more experience working within an R&D environment and many more conversations with future-thinking researchers and designers - but there are a few things that are worth noting down.
...do research with users who are on the extreme ends of your demographic - the die-hard fans and fanatics...
You’re either going to develop a future-thinking service in complete isolation from users, or you’re going to include users in the process, which is what we opted to do. I think user research is a very good thing to include in an R&D team, but the approach needs to be adjusted to support experimentation and exploration rather than product or service delivery. Having spoken to a few experienced future-thinking user researchers, these pointers seem good:

- Do research with users who are on the extreme ends of your demographic - the die-hard fans and fanatics, and those who are only vaguely interested in, or even dislike, the thing you’re researching.
- Talk to stakeholders who think about and plan for the organisation’s future, and find out what they’re envisioning and why.
- Learn about your current users’ workflow, pain points and delights, just as you would if you were working on delivering a service.
- Realise that innovation doesn’t happen in two weeks.
- As the researcher, join the team in thinking outside the box and creating, not just researching and reporting on user insights.
- Keep referring back to and building on what you know about potential end users; even if it isn’t acted on right away, it is useful.
Having said all that, I did get my wish and we did dedicate design and development time to looking after primary user needs. The result was interesting.
During user research sessions, which happened every two to three months, I’d invariably come back to the team with minimal research insights on algorithmic developments, and lots of insights on the basics - less whitespace and bookmarks, please!
When I showed the users a prototype that featured a more compact UI and a bookmarks feature, they fully engaged with the prototype’s more experimental features for the first time. As a result, we learned much more about the things we were developing and most interested in, as opposed to repeatedly hearing about “too much whitespace”.
I don’t have any grand conclusions to make as yet - you can only draw conclusions when you’ve proved a hypothesis either right or wrong, and I’ve not had that opportunity yet. I do know, however, that were I to work in an R&D environment again, which I’d like to, I’d approach the research with a different and more open mindset. I’d watch for opportunities where R&D shares borders with Product - I’m interested in the relationship between the two - because ultimately it’s satisfying for everyone if the technology you’re making is used. And I’d certainly keep a sharp eye out for those times when non-experimental features are worth investing in, because they may help you learn about the things you’re making that are truly R&D.