The UK's major internet service providers (ISPs) are to introduce new measures to tackle online extremism, Downing Street has said.
The ISPs had "committed" to strengthening their filters and adding a "public reporting button" to flag terrorism-related material.
But the ISPs told the BBC that no specific agreement had been made.
Campaigners called for transparency over what would be blocked.
Prime Minister David Cameron said technology companies had a "social responsibility" to deal with jihadists.
Mr Cameron told the Australian Parliament in Canberra that he was putting technology companies under pressure to deal with jihadist material.
"In the UK we are pushing them to do more, including strengthening filters, improving reporting mechanisms and being more proactive in taking down this harmful material," he said.
"We are making progress but there is further to go. This is their social responsibility - and we expect them to live up to it."
The proposed measures are believed to have stemmed from a meeting held last month to discuss ways in which technology firms could help tackle online extremism.
In a briefing note, No 10 said the ISPs had subsequently committed to filtering out extremist and terrorist material, and hosting a button that members of the public could use to report content.
It would work in a similar fashion to the reporting button that allows the public to flag instances of child sexual exploitation on the internet.
However, the BBC understands that while the ISPs agreed in principle to do more to prevent extremism, they have not actually committed to the measures outlined by No 10.
"We have had productive dialogue with government about addressing the issue of extremist content online and we are working through the technical details," a spokeswoman for BT said.
A spokesman for Sky said: "We're exploring ways in which we can help our customers report extremist content online, including hosting links on our website."
All major UK ISPs have had to offer all customers the option to filter out certain types of content - such as pornography or gambling - at a network level.
The same filtering technology would be used to block content deemed to be extremist in nature.
However, this plan presents logistical problems, as extremist groups such as Isis typically use platforms like YouTube and Twitter that are popular for entirely legal purposes.
The Open Rights Group, which opposes internet filtering, said using the technology to block extremism would be ineffective, and called for more openness.
"We need transparency whenever political content is blocked even when we are talking about websites that espouse extremist views," Jim Killock, the group's director, told the BBC.
"We need the government to be clear about what sites they are blocking, why they are blocking them and whether there will be redress for site owners who believe that their website has been blocked incorrectly.
"Given the low uptake of filters, it is difficult to see how effective the government's approach will be when it comes to preventing young people from seeing material they have deemed inappropriate.
"Anyone with an interest in extremist views can surely find ways of circumventing child-friendly filters."
Concern over the proliferation of extremist content online led GCHQ director Robert Hannigan to warn that technology companies were allowing their platforms to become "the command and control networks of choice" for groups such as so-called Islamic State.
To help deal with the problem, the Met Police set up a dedicated Counter Terrorism Internet Referral Unit (CTIRU), tasked with trying to remove terrorism-related material.
Since its inception in 2010, CTIRU has removed more than 55,000 pieces of online content, including 34,000 pieces in the past year.
Follow Dave Lee on Twitter @DaveLeeBBC