Anti-bullying campaigners call for a ban on chatbot app SimSimi
A chatbot app which has been linked to cyber bullying has been taken down in Ireland.
Now there are calls by anti-bullying campaigners to ban the app in the UK too.
The app, called SimSimi, can be "taught" responses by any user when certain names or words are typed in.
It means that users - many of whom are school pupils - are getting abusive replies when they type their own names into the app.
"I think they have a duty of care to suspend the service for all young people," says Liam Hackett of anti-bullying campaign Ditch the Label.
"Their content is predominantly negative, abusive, sexually graphic or violent."
He adds: "Essentially, you can go on there and put whatever kind of content you want and there's no repercussions for you."
"I've been a mental health activist for four years, and I've never seen anything as bad as SimSimi," says Jamie Harrington, 18, who was part of the campaign in Ireland to take down the app.
"We got 5,000 signatures on a petition, we spoke to Facebook, Google and Twitter's online safety teams and they agreed that SimSimi needed to be taken down.
"It was taken down after five days of campaigning."
SimSimi is currently at number six on the free chart in Apple's App Store.
Anti-bullying campaigners are worried that it's being used for cyber bullying.
There are similarities between SimSimi and Ask.fm, a website where anonymous comments could be left about other people.
That site was linked to the suicides of a number of teens a few years ago.
However, SimSimi could be even more dangerous because of its use of artificial intelligence, says campaigner Jamie.
"[SimSimi] is gathering information on young people through their Facebook, Twitter and social media, as well as people inputting answers," he says.
Users can search their own names or username, and the app will show them what others have said about them.
The "teach" feature means that a user can enter some text and tell SimSimi what it should reply with when someone else types that text in.
Some people will wonder why users don't just delete the app so they don't have to read abuse directed at them.
But it's not as simple as that, says Jamie Harrington.
"We live in a society at the moment where people are worried about what they hear about themselves, and nobody wants anybody else talking about them," he says.
"It has an emotional impact as well," adds Liam Hackett.
"Particularly if you are being bullied offline as well, the impact can be devastating," he says.
Liam's advice would be to not download the app in the first place.
"It can create things like paranoia, because it isn't a direct person who is actually saying it to you," he says.
"It is a robot on their behalf, which has purely user-generated content - and I am guessing it's pretty much impossible to find out who has posted that about you."
The app has actually been around since 2002, and is popular in South Korea and India.
It was banned in Thailand in 2012 after users taught it to insult the political leaders of the country.
ISMaker, the South Korean company behind the app, has not responded to requests for comment.