
Posted by Kat Sommers

This is the first in a series of posts on the genesis of, and ideas behind, our project on Editorial Algorithms. We begin by recalling how our R&D team observed and learned from editorial expertise around the BBC, and started wondering whether some of the knowledge needed to curate content could be translated into automation and technology.

Editors at the BBC - whether they make radio programmes, TV shows or websites - make daily decisions about the kinds of content they want to include. Those decisions take a lot of things into account: the audience first and foremost, the timeliness (why this story? why now?), the quality of the idea or content, and the BBC’s own guidelines.

The web is a vital tool in their research, but it is also vast, overwhelming and noisy, making it difficult to find interesting content.

There are search engines, of course, but they order results principally by relevance to the search query. Facebook surfaces content that is already well on its way to being consumed and shared (or well past it), while Twitter can sometimes resemble an echo chamber, showing only what the people you already follow are sharing. And then there are hundreds of search tools that offer different views of the web, ranked by most shared, liked, bookmarked or searched for. None of them offers editors much help in finding good content that will surprise and inform their audiences, so research can be laborious and time-intensive.

THE CHALLENGE

We wanted to find out if we could automate those editorial decisions, or at least make it easier for BBC editors to find ideas from the vast wealth of content on the web.

When we asked them how exactly they made those decisions, we found them hard to pin down. Some decisions are made according to a list of guidelines, and some because of a list of known prospects, but most are made because editors simply know good content when they see it.

Ask most editors how they choose the content they do, and they’ll answer something along these lines:

  • “I just do”
  • “I choose that day’s lead story”
  • “I look for what’s good”

It was our job to break down how they know. Or, to put it another way, it was our job to define what “good” means.

WHAT DOES ‘GOOD’ MEAN?

When asked what makes a story ‘good’ or interesting or a lead, editors often struggle to answer - not because they don’t know, but because it’s hard to put into words. They just know. Years of experience and a deep understanding of their audience, area of expertise and format mean that kind of decision happens in the blink of an eye. So if it’s hard to explain, how on earth were we going to break it down so a computer could understand?

We began by choosing a collection of articles representing the ‘best of the web’ every day for ten weeks, and sending it to a small group of users via a prototype app on their mobile phones. Their feedback and some further desk research gave us our first insight into what ‘good’ means.

First there’s the content itself, which should adhere to the following rules:

  • It should be accurate, trustworthy and reliable (we are the BBC, after all)
  • It should be balanced, which means it should remain objective or provide at least two views of a subject
  • It should not be illegal or offensive, e.g. it should not use particularly violent or offensive language
  • It should appeal to the target audience
  • It should be relatively topical and recent
  • It should be novel and offer the reader something they’ve not seen before
  • It should use good images, especially its lead image
  • It should be mobile-friendly
  • Advertising experiences shouldn’t dominate
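
To make these rules concrete, here is a minimal sketch (in Python) of how they might be expressed as a checklist over per-article signals. The field names and the recency threshold are our own illustration, not the project’s actual code:

```python
# Illustrative sketch only: field names and the 7-day recency threshold are
# assumptions made to show the shape of the checklist.
from dataclasses import dataclass

@dataclass
class ArticleSignals:
    trustworthy_source: bool    # accurate, trustworthy, reliable publisher
    balanced: bool              # objective, or offers more than one view
    offensive_or_illegal: bool  # e.g. particularly violent or offensive language
    appeals_to_audience: bool   # appeals to the target audience
    days_old: int               # topicality and recency
    seen_before: bool           # novelty: has the audience already seen this?
    good_lead_image: bool       # uses good images, especially the lead image
    mobile_friendly: bool
    ad_dominated: bool          # advertising experiences dominate the page

def passes_content_rules(a: ArticleSignals, max_age_days: int = 7) -> bool:
    """True only if an article meets every content-level rule above."""
    return (
        a.trustworthy_source
        and a.balanced
        and not a.offensive_or_illegal
        and a.appeals_to_audience
        and a.days_old <= max_age_days
        and not a.seen_before
        and a.good_lead_image
        and a.mobile_friendly
        and not a.ad_dominated
    )
```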

And then there was its context. We noticed some articles needed further explanation in their new context - the headline, for instance, needed rewriting, or the image didn’t look right. That’s because content can’t be taken in isolation; any curator will tell you the gaps between paintings are as important as the paintings themselves.

WHAT MAKES A GOOD MIX OF CONTENT?

To work this out we talked to editors inside and outside the BBC, including TV producers and radio and news editors, to add to our understanding of what makes a good ‘edition’, whether that’s a radio programme, TV show, magazine or selection of online content.

For an edition to be considered good it should:

  • Be from varied sources (we should not favour specific publishers)
  • Offer a good mix of topics
  • Be ordered sensitively. For instance, a harrowing article about FGM should not be followed by an article about the high street’s top ten best lipsticks.
  • Be broadly UK-based (as opposed to US- or Australia-based). This is simply because our users were UK-based, not because there’s anything inherently good about British content. One article from an American point of view was fine, but more than two or three and the edition started to feel alienating.
  • Not be too serious. Users told us that too many serious articles made an edition seem worthy and boring, so the daily service quickly felt like a chore. A magazine editor backed this up when she explained that for every serious article she commissions, she includes 8 or 9 light or funny ones. That ratio might seem off, but think of your favourite magazine: chances are most of it is light reading - the flip, flip, flip of pages about the latest consumer products or quirky news - with only one or two long, serious articles that require concentration.
  • Have a unifying theme. Users also reported a lack of focus in some editions: a little randomness was good, but not too much. The theme could be a topic (a special collection of articles about ‘mental health’, say), a target audience (people under 25, or men who like cars) or a task (keeping up to date).
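
As a rough sketch of how those edition-level rules might translate into checks a program could run - assuming each article carries simple labels for source, topic, region and tone; the labels and thresholds are our own illustration, not figures from the project:

```python
# Illustrative sketch: the labels ("source", "topic", "region", "tone") and
# thresholds are assumptions made for the example.
from collections import Counter

def edition_looks_good(articles: list[dict]) -> bool:
    sources = Counter(a["source"] for a in articles)
    topics = Counter(a["topic"] for a in articles)
    non_uk = sum(1 for a in articles if a["region"] != "UK")
    serious = sum(1 for a in articles if a["tone"] == "serious")

    varied_sources = max(sources.values()) <= 2          # no single publisher dominates
    good_topic_mix = len(topics) >= len(articles) // 2   # a reasonable spread of topics
    broadly_uk = non_uk <= 3                              # a couple of non-UK views is fine
    not_too_serious = serious <= 2                        # at most a couple of serious pieces per ten items

    return varied_sources and good_topic_mix and broadly_uk and not_too_serious

def ordered_sensitively(articles: list[dict]) -> bool:
    """Avoid following a harrowing piece directly with a very light one."""
    for prev, nxt in zip(articles, articles[1:]):
        if prev["tone"] == "harrowing" and nxt["tone"] == "light":
            return False
    return True
```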

A human editor keeps all these things in mind when putting together a ten-item edition of online content. To work out how we might automate it, however, we needed a template that would determine the pace, tone and reading experience of an edition.

Magazines seemed like the closest thing to a collection of online content, so we emptied the shelves of a few newsagents and got to work identifying what - if any - structure magazines shared.

It quickly became obvious they shared a similar mix. First there was the table of contents, followed by engaging, ‘flick-through’ content - the kind readers skim rather than read closely, often about products or quirky news, with lots of formatting and images. Towards the centre were more in-depth pieces, and each magazine had one or two ‘hard-hitting’ pieces that required extra concentration, followed by some lighter pieces and, finally, the end.

Based on this research we drew up the following template:

  • 1. Talking point: the first item should be topical, significant and the kind of story that gets people talking. This is what editors refer to as a ‘lead’. It sets the tone for the rest of the edition.
  • 2-4. Short and funny: the next three items were deliberately quick to consume, extremely visual and skimmable - gifs, short video clips, memes. This is what the internet does best.
  • 5. Sit back and watch: labelled video content that demands attention and time, to watch now or save for later. Labelling video content as such is important on mobile devices; not everyone has the time, headphones or bandwidth to watch it right now.
  • 6. Long read: a long-form article that demands attention and time.
  • 7-8. For you: two articles based on personal preferences. (We imagined that users could choose certain settings, as in the Guardian app, or that the app would learn over time what topics or types of content the user liked, as with Google Now.)
  • 9. Wildcard: here we had a chance to forget the ‘mix’ and capture the reader’s attention with something serious, attention-grabbing and off-centre. Whereas ‘For you’ would take account of the user’s personal preferences, ‘Wildcard’ would do the opposite.
  • 10. Final word: an opinion-based piece - a gem that’s almost a ‘reward’ for reading the edition. Most magazines, whether they are dedicated to computers or fashion, end with a piece like this, often an opinion column with an illustrated byline.
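
Expressed as data, the template might look something like the sketch below - a hypothetical structure of our own, in which each slot names the recipe it expects and a simple picker fills the slots from a pool of candidate articles:

```python
# Hypothetical representation of the ten-slot template; the recipe names
# follow the list above, but the structure itself is our illustration.
EDITION_TEMPLATE = [
    "talking_point",        # 1
    "short_and_funny",      # 2
    "short_and_funny",      # 3
    "short_and_funny",      # 4
    "sit_back_and_watch",   # 5
    "long_read",            # 6
    "for_you",              # 7
    "for_you",              # 8
    "wildcard",             # 9
    "final_word",           # 10
]

def fill_edition(candidates: list[dict], matches_recipe) -> list[dict]:
    """Greedy sketch: pick the first unused candidate that fits each slot."""
    edition, used = [], set()
    for recipe in EDITION_TEMPLATE:
        for article in candidates:
            if id(article) not in used and matches_recipe(article, recipe):
                edition.append(article)
                used.add(id(article))
                break
    return edition
```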

After this first phase we had a much better understanding of what ‘good’ means, and how we might choose a selection that represents a good mix of content. The next step was to work out how to turn that into something a computer could understand.

CAN WE AUTOMATE THE DISCOVERY OF ‘GOOD’ CONTENT ON THE WEB?

We called each of the types of content outlined above (‘long read’, ‘short and funny’) a ‘recipe’. For a computer to be able to make sense of them, we needed to break each one down into its constituent parameters. For instance, a ‘long read’ was something that was long, but also of a high reading level.

We quickly realised that the tone of an article was as important as its subject. For something to be ‘short and funny’, for example, we needed to distinguish between articles that were serious and those that were the opposite (after much discussion we agreed to call the latter not ‘funny’, which is too subjective, but simply ‘light’). We had to be careful to ensure this parameter related to tone rather than subject, however, as it was easy to assume that an article dealing with a serious topic must have a serious tone. What we found was that an article could be deadly serious about a light subject, say the latest eyeshadow or Kanye West, or it could treat a serious subject like rape or a terrorist attack very lightly.

All in all we identified 32 parameters, including categories, tone, sentiment, readability, mobile-friendliness and the ability to identify the people, places and topics mentioned.
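
To illustrate how recipes and parameters fit together - using invented parameter names and thresholds, and only a handful of the 32 parameters - a recipe can be thought of as a set of constraints that an article’s extracted parameters must satisfy:

```python
# Illustrative only: the parameter names ("word_count", "reading_level",
# "tone", "has_video", "duration_mins") and thresholds are assumptions.
RECIPES = {
    "long_read": lambda p: p["word_count"] > 1500 and p["reading_level"] >= 12,
    "short_and_funny": lambda p: p["word_count"] < 300 and p["tone"] == "light",
    "sit_back_and_watch": lambda p: p["has_video"] and p["duration_mins"] >= 5,
}

def matches_recipe(article_params: dict, recipe: str) -> bool:
    """Check whether an article's parameters satisfy a recipe's constraints.

    Note that tone is kept separate from topic: a light-toned piece about a
    serious subject still counts as 'light' here.
    """
    return RECIPES[recipe](article_params)
```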

In the next article in this blog series, we look back at how we audited existing metadata on the open web, and ended up deciding to roll our own algorithms to help us determine all those parameters.
