Ofcom has warned social media companies they will be punished if they fail to take significant extra steps to address the problem of children pretending to be adults online.
A newly released survey by the UK media regulator indicates that 22% of eight- to 17-year-olds lie that they are 18 or over on social media apps.
This is despite the Online Safety Act (OSA) requiring platforms to beef up age verification, a responsibility that will come into force in 2025.
Ofcom told the BBC its “alarming” findings showed tech firms had a lot of work to do to meet that new legal standard – and said they would face enforcement action if they failed to do so.
It said children being able to pass for adults increased their risk of being exposed to harmful content.
“Platforms need to do much, much more to know the age of their children online,” Ian Macrae, Ofcom’s director of market intelligence, told the BBC.
He added that 2025 would be a “huge year” in which there should be a “real step change in online safety.”
He said Ofcom would “take action” if firms did not comply with the OSA, pointing out that the legislation allows companies to be fined up to 10% of their global revenue.
‘So easy to lie’
A number of tech firms have recently announced measures to make social media safer for young people, such as Instagram launching “teen accounts.”
However, when BBC News spoke to a group of teenagers at Rosshall Academy in Glasgow, all of them said they used adult ages for their social media accounts.
“It’s just so easy to lie about your age,” said Myley, 15.
“I put in my actual birthday – like day and month – but when it gets to the year, I’ll just scroll ten years back,” she added.
“There’s no verification, they don’t ask for ID, they don’t ask for anything,” added another pupil, Haniya, who is also 15.
BBC News was also unchallenged when it set up accounts, using newly created email addresses, on a number of major platforms.
In each case, an age over 18 was entered without any proof being requested.
Ofcom says this will have to change in the coming months.
“Self-declaration of a child’s age is clearly completely insufficient,” said Mr Macrae.
Age assurance
There is deep public concern about children being exposed to harmful content online, driven in part by the high-profile deaths of teenagers Molly Russell and Brianna Ghey.
It led the last government to pass the OSA, which, from July 2025, will require social media platforms to implement what Ofcom calls “highly effective age assurance.”
Ofcom has not specified what technology should be used to strengthen the verification process, but said it was testing several systems in its own laboratories and would have “more to say” in the new year.
The Molly Rose Foundation – set up in Molly Russell’s memory – described the figures as “incredibly shocking”, saying they showed how easy it was to get around current age checks online.
“This means that many children will not be protected from harmful suicide and self-harm content when regulation comes in because tech companies are failing to enforce their own rules,” said chief executive Andy Burrows.
The BBC approached the most popular platforms for children and young people in the UK for their responses.
“Every day we remove thousands of suspected underage accounts,” TikTok said in a statement.
“We’re exploring how new machine learning technology can enhance these efforts and co-leading an initiative to develop industry-wide age assurance approaches that prioritise safety and respect young people’s rights,” it added.
Both Snapchat and Meta – owner of WhatsApp, Instagram and Facebook – declined to make statements.
X, formerly Twitter, did not reply to the BBC’s request for comment.
The government has previously come under pressure to strengthen the Online Safety Act, with some saying it does not go far enough.
On Thursday, the Australian parliament passed a law banning social media for under-16s – a move the UK’s technology secretary, Peter Kyle, has previously said he is open to emulating.