If a child texts about enjoying “peri peri” or “coleslaw”, parents may be unnerved to discover they might not be talking about a family meal out.
An internet safety service that has monitored the online interactions of more than 50,000 children has discovered that girls as young as 10 are using code words drawn from the Nando’s restaurant menu to obscure explicit sexual conversations.
SafeToNet has screened more than 65m texts sent since November and found that girls aged 10, rather than teenage boys as it had expected, use the most explicit and potentially harmful sexual language.
“We weren’t expecting to see that,” said Richard Pursey, the founder and chief executive of the service, which monitors popular messaging apps including WhatsApp and Facebook Messenger as well as Instagram and Snapchat. “We thought it would be more likely to be boys than girls and in the 12 to 13 age group.”
As well as overtly graphic terms, they use “peri peri” to mean a well-endowed male and “coleslaw” to mean a bit on the side, he said.
The SafeToNet app looks for language indicating sexual talk, abuse, aggression and thoughts about suicide and self-harm. It applies a threat level to each, and 10-year-old girls were the most prominent in category 3 for sexual references, the level covering the most explicit and harmful language.
In December, it emerged that more than 6,000 children under 14 had been investigated by police for sexting offences over the previous three years, including more than 300 of primary school age.
Pursey said: “We don’t think it is as sinister as it seems. We think it is a rite of passage and is related to that rather than actual sexual activity.” He said the high incidence of sexual language appeared to coincide with girls texting in large groups of other girls.
SafeToNet also found that while girls in general use more sexually explicit language than boys, boys are more abusive and aggressive, and children fear bullying the most on a Sunday evening.
The analysis provides a window into the often hidden online lives of eight- to 16-year-olds. Half of 10-year-olds have a smartphone and ownership doubles between the ages of nine and 10, according to the regulator, Ofcom.
Parents’ concerns about how social media may trigger self-harm have risen since the death in 2017 of Molly Russell, who killed herself aged 14 after viewing posts about self-harm and suicide on Instagram. Almost half of parents of children aged five to 15 are concerned about their child seeing content that could encourage them to harm themselves, the regulator found.
The app screens children’s texts and warns them when they are engaged in risky online behaviour, sometimes blocking their device from sending a text. It provides parents with a report about the level of risky language their children are using but does not reveal what they wrote.
As worrying as the findings may be for parents, there was a glimmer of hope: when children spend more time with their families and screen time drops, so does some risky behaviour. “Saturdays are very busy for families and we can tell that on Saturdays the aggression drops,” Pursey said.
SafeToNet employs a team of linguists and psychologists specialising in online behaviour to program the algorithm that screens texts. It uses artificial intelligence to contextualise what users are typing, so it flags phrases only when they are being used in a way that indicates potentially harmful behaviour.
If someone wrote “Raheem Sterling killed it last night against Real Madrid”, there would be no warning, but if someone wrote “Go kill yourself” the screen would flash red and it would not allow the user to send the message.
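The article does not publish SafeToNet's model or rules; as a rough illustration of what context-dependent screening means, the toy sketch below flags “kill” only when the phrasing is directed at a person, so idiomatic football chat passes untouched. The patterns are invented for this example.

```python
import re

# Toy illustration of context-sensitive flagging; these patterns are
# invented and do not come from SafeToNet.
DIRECTED_KILL = [
    r"\bkill yourself\b",   # second-person, directed at the recipient
    r"\bkill you\b",
]

def is_harmful(text: str) -> bool:
    """Flag 'kill' only when the phrasing is directed at a person."""
    lowered = text.lower()
    # Idiomatic uses such as "killed it last night" never match the
    # directed patterns, so they pass without a warning.
    return any(re.search(pattern, lowered) for pattern in DIRECTED_KILL)

if __name__ == "__main__":
    print(is_harmful("Raheem Sterling killed it last night against Real Madrid"))  # False
    print(is_harmful("Go kill yourself"))                                          # True
```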
The system notices patterns that could indicate risk. Rapid exchanges of short texts can indicate bullying or sexual dialogue. It also picks up on “leeting”, the tactic of adjusting spellings so “hate” becomes “h8” and “awesome” becomes “4W3S0M3”.
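A minimal sketch of how leet spellings might be normalised before screening, assuming simple substitution rules; SafeToNet's own detection logic is not public, and the token list and character map here are illustrative only.

```python
# Phonetic shorthand that a plain character map cannot recover.
TOKEN_MAP = {
    "h8": "hate",
    "gr8": "great",
}

# Common single-character leet substitutions.
CHAR_MAP = str.maketrans({
    "4": "a", "3": "e", "0": "o", "1": "i", "5": "s", "7": "t",
})

def normalise(word: str) -> str:
    """Rewrite a leet-spelled word into plain text before screening."""
    lowered = word.lower()
    if lowered in TOKEN_MAP:
        return TOKEN_MAP[lowered]
    return lowered.translate(CHAR_MAP)

if __name__ == "__main__":
    print(normalise("h8"))        # -> hate
    print(normalise("4W3S0M3"))   # -> awesome
```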
A message that calls someone an “idiot” could flash amber to warn the sender that it might not be wise to send it. Worse language may trigger a red light and block its dispatch. “It is trying to educate the child in real time,” Pursey said.
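The amber/red tiering might look something like the sketch below; the article gives only “idiot” as an amber-level example, so the word lists and the block decision are made up for illustration.

```python
# Invented severity lists; SafeToNet's actual lists and thresholds are
# not published.
AMBER_TERMS = {"idiot"}               # warn, but let the child decide
RED_TERMS = {"go kill yourself"}      # block the message outright

def screen(text: str) -> tuple[str, bool]:
    """Return (colour, may_send) for a draft message."""
    lowered = text.lower()
    if any(term in lowered for term in RED_TERMS):
        return "red", False     # dispatch blocked
    if any(term in lowered for term in AMBER_TERMS):
        return "amber", True    # real-time warning, sending still allowed
    return "green", True

if __name__ == "__main__":
    print(screen("You are such an idiot"))   # ('amber', True)
    print(screen("Go kill yourself"))        # ('red', False)
```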
The app is installed on children’s phones with their knowledge. It works by overlaying its own keyboard on whatever social media apps children are using in order to monitor what they are writing. Using an algorithm, it feeds back to the user in real time if what they are typing is considered risky, using colour coding. Parents do not get to find out what their children are writing, but are instead provided with a risk score.
SafeToNet says the app focuses on three areas: sexual language and patterns of behaviour such as sexting; abuse and aggression, notably bullying; and issues of low self-esteem, notably dark thoughts, anxiety and stress.
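A minimal sketch of the parent-facing report described above: per-category tallies only, with the message text itself discarded. The category names follow the article's three areas; the report format is assumed for illustration.

```python
from collections import Counter

CATEGORIES = ("sexual", "abuse_aggression", "self_esteem")

def build_parent_report(flags: list[str]) -> dict[str, int]:
    """Aggregate category flags into counts; what the child wrote stays private."""
    counts = Counter(flag for flag in flags if flag in CATEGORIES)
    return {category: counts.get(category, 0) for category in CATEGORIES}

if __name__ == "__main__":
    # Each entry is the category of one flagged message; the text is never stored.
    week_of_flags = ["abuse_aggression", "sexual", "abuse_aggression"]
    print(build_parent_report(week_of_flags))
    # {'sexual': 1, 'abuse_aggression': 2, 'self_esteem': 0}
```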
Those aged 10 and 11 account for about 35% of the children on the platform.