Project Origin

Jatin Aythora

Chief Architect

Covid-19, polarised protests, the upcoming US Election - there’s never been a better climate for disinformation to thrive in, and for it to cause real-world physical and societal harm.

Tackling it is a priority for the BBC but we know that we cannot do it alone. Solutions - be they technical, operational or educational - rely on a collaborative approach between publishers, technology companies, social media platforms and others.

With this in mind the BBC has joined forces with the Canadian Broadcasting Corporation/Radio Canada, Microsoft and The New York Times to focus on one aspect of the disinformation challenge - content source verification. We’ve called this work Project Origin. 

Have you ever seen or shared a post on social media, and then found out the story wasn’t quite what it seemed?

If you have, you are not alone. Most large social media platforms have features such as verified pages or accounts, but outside of these there are countless re-posts of content that was originally published by another person or organisation.

In some cases, this content is simply re-uploaded and shared. In others, a re-upload is accompanied by some new context, for example a comment on the picture or video. Others modify the content - for humour, for brevity, and in some cases with malicious intent.

How can you avoid being tripped up by content that may have been altered?

The goal of Project Origin is to apply clear signals to videos and pictures from publishers so you know the content is coming from where it says it is and has not been manipulated.

This can be broken down into three main areas.

  1. Give the content item an indicator of authenticity
  2. Find a way to allow the content to take this indicator of authenticity with it on its journey around the internet
  3. Safely store the authenticity information in a way that will allow it to be checked

How will we do this?

Firstly, each digital image, video or audio file can be reduced to a unique fingerprint: a short string of characters known as a hash, generated by a Secure Hash Algorithm (SHA). The fingerprint is so specific that there is effectively zero probability of two different pieces of content sharing the same hash.
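
To make that concrete, here is a minimal sketch in Python (illustrative only, not part of Project Origin's actual tooling) showing how a SHA-256 fingerprint of a media file can be computed. The filename is a hypothetical example.

```python
# Minimal sketch: computing a SHA-256 fingerprint of a media file
# using Python's standard hashlib module. The filename is hypothetical.
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hash of a file as a hex string."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large video files need not fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            sha.update(chunk)
    return sha.hexdigest()

print(fingerprint("news_clip.mp4"))
```

Changing even a single byte of the file produces a completely different hash, which is what makes the fingerprint useful for detecting manipulation.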

To know who generated the content hash, we need another tool - a key.

Keys are commonly used on the internet, and they allow a publisher to digitally sign a record that is linked to a piece of content. Think of it as a digital watermark that cannot be removed.
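
As an illustration of the idea (Project Origin's actual signing scheme may differ), here is a hedged sketch using the third-party Python `cryptography` package: the publisher signs a content hash with a private key, and anyone holding the matching public key can check the signature.

```python
# Sketch of digital signing with the `cryptography` package
# (pip install cryptography). The key type and workflow here are
# assumptions for illustration, not Project Origin's specification.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher keeps the private key secret and shares the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The content hash from the previous step (hypothetical value).
content_hash = bytes.fromhex(
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
)

# Signing binds the publisher's identity to this specific hash.
signature = private_key.sign(content_hash)

# Verification succeeds only if neither the hash nor the signature has
# been tampered with; otherwise verify() raises InvalidSignature.
public_key.verify(signature, content_hash)
print("signature verified")
```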

Putting these pieces together, we get a machine-readable way of representing data about content that allows the creator to tie, or ‘bind’, the specific content to that data, and to store it safely for future retrieval by another publisher.
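
Here is a hypothetical sketch of what such a binding record might look like, continuing the examples above: the hash, the publisher's name and a timestamp are packed into a small manifest, and the whole manifest is signed so that no part of it can be altered undetected. The field names and structure are illustrative assumptions, not a real Project Origin format.

```python
# Hypothetical manifest binding a content hash to data about it.
# Field names and structure are illustrative assumptions only.
import json

def make_manifest(content_hash: str, publisher: str,
                  timestamp: str, private_key) -> dict:
    record = {
        "content_hash": content_hash,
        "publisher": publisher,
        "published_at": timestamp,
    }
    # Serialise deterministically so signer and verifier
    # operate on byte-identical data.
    payload = json.dumps(record, sort_keys=True).encode()
    return {"record": record, "signature": private_key.sign(payload).hex()}

def verify_manifest(manifest: dict, public_key) -> None:
    # Recompute the exact bytes that were signed and check the signature;
    # raises InvalidSignature if any part of the record was altered.
    payload = json.dumps(manifest["record"], sort_keys=True).encode()
    public_key.verify(bytes.fromhex(manifest["signature"]), payload)
```

A platform receiving the content later could recompute the file's hash, retrieve the stored manifest, and verify both: if either check fails, the content is not what the publisher originally released.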

What does this mean for the future?  

If this technological approach is widely adopted by publishers and media platforms, it can provide an automated signal warning of manipulated or fake media. It has the potential to help publishers and technology companies deliver greater assurances to their users and audiences that the content they’re consuming is authentic.

Tackling disinformation is a huge task and we’re very much aware that others are doing excellent work in this space, as well as in the wider disinformation sphere. However, we believe that the work we are doing can add unique value. 

The Trusted News Initiative (TNI) is an industry collaboration of major news and global tech organisations that are working together to stop the spread of disinformation where it poses risk of real-world harm.

The TNI has agreed to engage with the Project Origin prototype which will be available later this year.

We look forward to sharing more with you about Project Origin as it develops. In the meantime, you can find out more here.
