Ahmed Razek, Sinead O'Brien
Technology, Strategy & Architecture
Propaganda, deception and the suppression of free speech have been enduring issues for every society, but in recent years terms like ‘fake news’ and ‘misinformation’ have appeared in public discourse with alarming regularity. So, what is happening to make this a live issue for a news organisation like the BBC?
One significant factor is that a whole range of technologies categorised as Artificial Intelligence and Machine Learning have unleashed a potent range of disruptive capabilities on a previously unimaginable scale, making it possible to create profoundly misleading content including fake audio and video. At the same time the growth of social media means it is now easy to distribute deceptive content to a worldwide audience.
The problems created by online misinformation are not trivial, and the threats to society are genuine. Take the recent emergence of the anti-vaxxer movement: false information about the dangers of life-saving vaccines, targeted at a newly receptive and sizeable audience across social network platforms, has led some parents to put their children at medical risk. Though the dissemination of this material is not illegal, it is undoubtedly harmful.
Influencing or subverting democratic norms isn’t just about being able to manipulate people; it’s also big business. There is a lot of profit to be made by telling people what to think, and social media has become the cheapest way to accomplish this.
Social media and video hosting services are playing a significant role in circulating misinformation, both on public channels like Twitter and over encrypted messaging services like WhatsApp. There have been worldwide calls for media and technology companies to take more responsibility for the content hosted on their platforms.
In the UK, a recent report on misinformation by the Commons Select Committee suggested that a new category of tech company be formulated, “which tightens tech companies’ liabilities, and which is not necessarily either a ‘platform’ or a ‘publisher’”. At the same time, the UK Government plans to consult on its ‘Online Harms’ White Paper, a joint proposal from the DCMS and the Home Office. A public consultation on the plans is currently running for 12 weeks, until July 1st 2019.
Germany recently implemented the Network Enforcement Act, which forces technology companies to remove hate speech, fake news and illegal material or risk a heavy fine. Notwithstanding freedom of speech concerns, it is not clear that the law is working as intended, despite placing a heavy burden on the platforms.
Lawmakers are clearly dissatisfied with the status quo, but it remains unclear what new types of responsibility will be placed on online services as a result. Conjuring up workable law to control what appears online is hard, and any regulation is unlikely to be universally acceptable.
Outside of regulation, there is a growing consensus around the need for media literacy campaigns. It is vital that we teach people of all ages to be critical consumers and sharers of information, especially in the online world. However, it is unclear when wider society will reap the benefits of such programmes, and the health of democracy cannot wait for a younger, more media-aware generation to grow to maturity.
Problems arising from the spread of misinformation are not confined to these online spaces. Last year, mob lynchings across India were fuelled by disinformation spreading across encrypted messaging apps. The tension between privacy and data security means that harmful content can spread like wildfire without anyone being held accountable. Since then, WhatsApp has restricted the forwarding of messages to a maximum of five recipients.
Removing or reducing the impact of content that contains verifiably false assertions is difficult but tractable. Traditionally, the role of debunking deceptive claims has fallen to competent journalists. Given the mammoth scale of the problem, algorithmic interventions are needed. However, outsourcing the ‘half-truth problem’ solely to algorithms has so far proven ineffective, in part because cases of misinformation are often not clear cut and rely on careful interpretation.
Given these difficulties, the case for public service organisations like the BBC to take a leading role in the fight against online misinformation is a strong one. Widespread online misinformation strikes at the heart of our public purpose to provide accurate and impartial news. However, the size of the challenge is unprecedented. Our online information ecosystem is polluted.
For its part, the BBC is committed to being part of the push back against the wave of misinformation, distraction and deceit that characterises parts of the online world. Over the coming months, the BBC, alongside other organisations, will be looking at a whole raft of practical actions that might be taken to address misinformation across the media landscape. These interventions will sit alongside our continuing editorial coverage and initiatives like the ‘Beyond Fake News’ project.
Our approach will be cross-disciplinary, connecting researchers, designers, academics, policymakers and technologists with journalists. The impact of misinformation reaches far and wide. This conversation is not just about journalism; it is about preserving the information that underpins society, and it encompasses policy, technology, humanitarian organisations and public trust.
Neither the BBC nor its partners will entirely solve the problem of misinformation, online or offline, but we are doing our part to ensure that trustworthy information derived from competent, honest and reliable sources continues to flow freely across society, giving audiences around the world a space where they can find news reports they can rely on.