Social media companies have been slow to grasp the thorny issue of kids using their platforms. YouTube’s latest announcement of new measures suggests a much-needed change in approach, and will hopefully result in concrete actions with other platforms following suit, says Rebecka Allén
It’s no surprise to anyone that kids use social media a lot. That’s despite many social media platforms requiring users to be 13 or over to access their services.
By the age of 12, half of children in the UK have social media profiles, according to Ofcom’s 2018 research. In France, a poll showed that roughly half of 11-12 year-olds have a Facebook account.
When it comes to viewing platforms, YouTube is number one. In a Pew Research Center poll last year, more than 80 percent of US parents with children younger than 12 said they let their children watch YouTube. According to the latest Ofcom data, YouTube is clearly becoming the viewing platform of choice, with rising popularity particularly among 8-11s in the UK. PwC estimates that the platform will account for 25% of kids’ digital ad spend by 2021.
Given the legal requirements, Facebook and Google have been dancing around this issue for years. Data protection regulations such as the Children’s Online Privacy Protection Act in the US (COPPA) and the EU’s General Data Protection Regulation (GDPR) have made it hard for them to formally acknowledge that children use their platforms – ridiculous though this sounds. But the whole situation is increasingly unsustainable.
A welcome change in tone came on September 4 when YouTube announced new data practices for children’s content. The announcement followed allegations from the US Federal Trade Commission (FTC) and the New York Attorney General that the platform had breached COPPA rules by collecting children’s personal information without parental consent for ad-targeting purposes. The allegations were settled with a $170 million fine, accompanied by additional measures which YouTube must now take.
This fine might well be the largest COPPA fine to date, topping the one handed to TikTok last February, but implementing the measures will likely be far more costly to YouTube’s parent company, Google, which makes well over $100bn per annum from ad revenues. YouTube has agreed to completely deactivate data collection and targeted ads around children’s content, whether the user is an adult or a child.
The question on advertisers’ lips is whether YouTube will finally offer a clear classification of kids’ content so that platforms, content owners and advertisers can unequivocally ensure compliance.
Up until now, the classification “Content Suitable for Families” has represented a loose indication (albeit not a public acknowledgement) of child-targeted content on YouTube. And there has been no requirement to get parental consent for data capture around this content classification. We look forward to seeing this fundamental problem resolved as YouTube rolls out its new policy over the next four months.
Of course, the issue of children’s data is just one of many challenges facing social media platforms. Online ad fraud, terrorist content, fake followers and data breaches have all grabbed the headlines of late. But we’ve witnessed many times how the sensitive (and potentially highly political) issue of children can spark flames which capture the mainstream public consciousness. Children’s protection must be the absolute number one priority for all platforms and responsible brand owners.
As the COPPA saga continues, other platforms – including Facebook – will need to fundamentally review the way that they welcome children and collect their data. It’s time for the whole online advertising ecosystem to put its responsibilities ahead of revenues.