Olly Murs (Getty/BBC Three)

How extremists and terror groups hijacked social media


“Everyone get out of @selfridges now gun shots!! I’m inside.”

Olly Murs is tweeting from Selfridges department store in London. Piles of chinos and artfully distressed cardigans are hurriedly discarded as men and women run screaming, fearful of a Westgate-style mall attack.

“Really not sure what’s happened! I’m in the back office... but people screaming and running towards exits!”

Murs continues tweeting from the safety of an office as the store empties.

“Evacuating store now!!! **** heart is pounding.”

Later, he checks his phone for replies. There is one that catches his eye: Piers Morgan.

“Stop tweeting mate. Nothing happened.”

There was no terror attack in Oxford Street on Black Friday (24 November) 2017. That's because there was no attacker. A few hours later, when Oxford Street was no longer in lockdown, police would describe an ‘altercation’ on the tube. This relatively minor incident had set in motion an enormous chain of events – a well-oiled Rube Goldberg machine that amplified the hysteria on the ground.

It was the latest evolution of the relationship between social media and terrorism: a terror attack without any terrorism.

It’s this relationship that the veteran war reporter David Patrikarakos explores in his new book, War In 140 Characters.

“Social media platforms are geared towards outrage and they’re geared towards hysteria,” he says to BBC Three. “If someone tweets something then that has the capacity to go viral and cause panic in a way it previously couldn’t.”

Social media changed the way wars are fought (BBC Three)

The monochrome flag co-opted by groups including so-called Islamic State

We are supposed to be much savvier about social media than we used to be. We can spot fake news, and we are wise to sock puppets and Russian bots.

“People are getting more savvy,” agrees David. “But things are going to get worse before they get better.”

David's pessimism is partly inspired by the story arc of the so-called Islamic State – a terror group he emphatically believes "could not have existed" without the platform of social media.

In War In 140 Characters, he gives an example of how IS’s social media presence helped them on the battlefield, during the now-notorious Fall of Mosul in 2014.

He explains that the Iraqi army was bigger and far better equipped – but its soldiers fled the city when they saw IS’s brutality plastered all over social media, in the form of severed heads and execution videos.

The group also promoted life under the nascent ‘caliphate’, posting footage of its members edited like music videos. It even shared pictures of fighters cuddling kittens – perhaps because, as former US Assistant Attorney General John P Carlin put it, “kittens sell”.

In this way, it recruited men, women and even children from all over the world – including the Bethnal Green Academy girls – through Facebook and Twitter.

IS quickly realised, however, that its most violent material was the most effective at spreading terror and building a rogue state. This arguably began with the execution of journalist James Foley in 2014.

James Foley (Getty)

The execution was a grotesque PR stunt, deliberately staged for maximum virality, from the rehearsed lines of the masked executioner, ‘Jihadi John’, to the victim’s orange jumpsuit. It was shared widely on Twitter and made front-page news all over the world.

David believes that the publicity generated by the incident spurred IS to commit escalating violence.

“Islamic State is in, or was in, the news-making business. So they started coming up with increasingly horrific ways [of committing murder]. They go from beheading, to immolating, to submerging in water.”

At one point, these videos were unavoidable on Twitter. As a result, “There is a great inuring,” says David. People become accustomed to violence, and it no longer has the same effect on them. A recent video of the execution of Syrian pilot Azzam Eid, uploaded by an IS propaganda channel, went largely unreported by the media, for example. So did IS’s crude and badly photoshopped Christmas ‘postcards’, calling for supporters to strike London and behead Santa.

Islamic State is losing ground online and off, but Black Friday showed that our knee-jerk response to the merest hint of an ‘incident’ can still provoke panic and terror.

One of social media’s strengths is that it can unify people around the world through shared ideals. But this includes those with extremist views.

For a while, Twitter promoted itself as a sanctuary for free speech, and in time it became a rallying point for extremists. This all too frequently led to anonymous, bigoted online attacks on others.

This became a big enough issue during the 2017 general election campaign that Theresa May ordered a parliamentary inquiry into abuse aimed at politicians. MPs of all parties shared their stories of online attacks in a parliamentary debate.

Social media also gave those on the far left and far right a place to co-ordinate IRL activity. This summer’s white nationalist rally in Charlottesville, Virginia, was organised using a Facebook event.

In the UK, English Defence League (EDL) protests are frequently met with Anti-Fascist Network (AFN) counter-protesters – often resulting in violent clashes and arrests on both sides.

Both groups organise via regional Facebook groups. The AFN encourages groups of friends to set up their own anti-fascist groups, and specifically suggests Facebook and Twitter as powerful ways of doing so. However, its website adds that Facebook profiles have to be “tightly controlled due to security reasons” and that Twitter users should not “divulge too much information online” – a nod to the risks of posting personal information.

Protesters in Charlottesville, Virginia (Getty)

The threat of online abuse is, for some, always there. Back in 2015, one outcry over abuse on social media became too big for Twitter to ignore. People were appalled at the racist harassment received by actor Leslie Jones after she took on a lead role in the Ghostbusters reboot. Twitter changed its rules to actively crack down on abuse and what it describes as “hateful conduct”.

Some of its new policies are yet to come into force – a rule banning hateful imagery and display names doesn’t come into force until 18 December 2017 – but the company insists it is doing more than ever before to actively wipe out extremist propaganda.

However, beyond Twitter and Facebook there remains a factory of loyal meme-crafters working away on websites like Reddit and 4Chan. They trade in offensive propaganda that sometimes aims to push the boundaries without overtly crossing them.

In October 2017, regulars on 4chan printed and distributed posters in several states across America that carried the simple message: “It’s OK to be white.” The posters generated plenty of publicity.

David thinks social media companies can’t (and perhaps won’t) deal with this on their own.

“Self policing doesn’t work. It hasn’t worked,” he says. “Social media companies have to be regulated. There is too much stuff out there that shouldn’t be.”

It is surprising to hear a journalist calling for censorship of any kind. But freedom of speech should not be at the expense of freedom from abuse, he says.

“There’s a difference between free speech - which is legal - and hate speech, which is incitement to violence, and is illegal," he says. "You go across social media sites and hate speech is everywhere. I mean, really – it’s everywhere.”

The uncomfortable role of social media in bringing extremist groups and terror threats to prominence, grooming recruits, and disseminating terror, is clearer than ever.

Scrolling through Twitter on Black Friday triggered a terror response – proving how conditioned we are to such threats. And so long as the triggers continue to circulate, our fingers will continue to twitch.