The world's biggest internet companies have written to UK government ministers to outline how they believe harmful online activity should be regulated.
The digital giants, including Facebook, Google and Twitter, say the difference between illegal and harmful content must be formally recognised.
They also say planned legislation should make a distinction between public and private communication.
The letter has been co-ordinated by the trade body, the Internet Association.
It has been sent to the culture, health and home secretaries.
The letter will be seen as a pre-emptive move in the coming negotiation over new rules to govern the internet. The government is due to publish a delayed White Paper on "online harms" in the coming weeks.
It follows the recent outcry over the death of Molly Russell, the 14-year-old girl who died after seeing pictures of self-harm on Instagram, which is owned by Facebook.
Following extensive coverage of the case, Instagram said it would remove all graphic images of self-harm on the platform.
In the letter, the companies outlined six principles they said regulation must conform to. They said it must:
- "Be targeted at specific harms, using a risk-based approach
- "Provide flexibility to adapt to changing technologies, different services and evolving societal expectations
- "Maintain the intermediary liability protections that enable the internet to deliver significant benefits for consumers, society and the economy
- "Be technically possible to implement in practice
- "Provide clarity and certainty for consumers, citizens and internet companies
- "Recognise the distinction between public and private communication"
Many leading figures in the UK technology sector fear a lack of expertise in government, and hardening public sentiment against the excesses of the internet, will push the Online Harms paper in a more radical direction.
That paper has been in the works for many months, and this late intervention, which follows extensive contact between the technology sector and government, shows the industry believes it can still bend that paper toward its will.
Three of the key areas of debate are the definition of online harm, the lack of liability for third-party content, and the difference between public and private communication.
The companies insist that government should "recognise the distinction between clearly illegal content and content which is harmful, but not illegal".
The forthcoming government paper will need to state explicitly what online harm is.
If the companies regard the government's definition of harm as too broad, their insistence on a distinction between illegal and harmful content may be overtaken by a fresh set of disputes.
The companies also defend the principle that platforms such as YouTube "permit users to post and share information without fear that those platforms will be held liable for third-party content".
This is the latest instalment in the long discussion over whether some of these companies are platforms or publishers - the latter category coming with legal and moral duties that the former category escapes.
It is obvious that a new category of company will have to be created to accommodate the special characteristics of technology platforms that distribute, and create, vast quantities of media.
Another area which will be of particular interest to the Home Office is the insistence that "care should be taken to avoid regulation encroaching into the surveillance of private communications".
After the Westminster Bridge attack of 2017, there were plenty of calls for there to be no safe spaces for terrorists online.
The then Home Secretary, Amber Rudd, said at the time: "We need to make sure our intelligence services have the ability to get into situations like encrypted WhatsApp."
But interfering with encrypted communication, of the kind that WhatsApp (also owned by Facebook) enables, is dangerous: weakening the encryption at one end potentially allows all manner of criminals, foreign agents and bad actors to exploit that vulnerability.
This area will likely remain a point of contention between technology companies and law enforcement agencies.
What happens next?
Regulation of the internet is fraught with difficulty. That's why it hasn't advanced as quickly as some politicians and members of the public wanted.
Innovation is fast and unpredictable; regulation is slow and consensual.
Technology companies are often global in reach, whereas parliaments are generally national. Getting the buy-in of many countries takes time.
And then there is the volume of content. YouTube is not a broadcaster, like the BBC for instance, in large part because of how much of it there is: hundreds of hours of video uploaded every minute.
On the issue of principle, this letter makes clear that the companies - many of which originated in the libertarian spirit of California - are opposed to censorship. Granting them more power to pre-judge what should go online in the first place may pull the West's internet - open, free, with all the associated costs - toward the more closed version that exists in places like China.
These issues begin to explain why the Online Harms paper has taken so long. The UK government will be looking at examples around the world to see what can be learned - from Europe's data rules GDPR (which apply in Britain) to Germany's use of hate speech law to fine companies for leaving harmful material online for too long.
In writing to cabinet ministers now, the world's leading technology companies have indicated their awareness that the government is yet to fully make up its own mind.