TikTok, the popular video sharing app, has been banned in Italy after the tragic death of a young girl who copied a dangerous trend she saw on the platform. MEF CEO Dario Betti examines how the app’s failure to verify and enforce its own rules around user age reflects a difficult challenge for the industry.

The Italian Authority for the Protection of Personal Data blocked TikTok in January 2021 after the death of a 10-year-old girl emulating a suffocation post found on the network. In December 2020, the authority had already criticized the social network for a series of violations, including neglecting to monitor the minimum age of users (only users older than 13 should be allowed according to TikTok’s own terms and conditions).

The block was due to last until the 15th of February 2021, but after the company announced a combination of age verification solutions to be put in place, TikTok will be accessible to Italian users again from the 9th of February. The Authority will review the efficacy of the solutions.

TikTok – looking for age gating solutions

First, the social network will implement a self-declaration system. In this scenario, users will input their birthday and under-13s will be stopped from registering a profile. This seems an exceptionally low barrier, as minors could still lie about their age. However, TikTok has also committed to introducing an AI verification system that will estimate users’ ages by analyzing their communication patterns and posts.
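As a minimal sketch of what such a self-declaration gate amounts to (the function names and cut-off logic below are illustrative assumptions, not TikTok’s actual implementation), a date-of-birth check might look like this:

```python
from datetime import date
from typing import Optional

MINIMUM_AGE = 13  # the minimum age set in TikTok's own terms and conditions

def age_on(reference: date, birth_date: date) -> int:
    """Full years elapsed between birth_date and the reference date."""
    years = reference.year - birth_date.year
    # Subtract a year if this year's birthday has not yet passed.
    if (reference.month, reference.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

def may_register(birth_date: date, today: Optional[date] = None) -> bool:
    """Self-declaration gate: refuse registration to under-13s."""
    today = today or date.today()
    return age_on(today, birth_date) >= MINIMUM_AGE

# A declared 2012 birth date would be refused in February 2021...
print(may_register(date(2012, 5, 1), today=date(2021, 2, 9)))   # False
# ...but nothing stops a minor from simply typing an earlier year.
print(may_register(date(2000, 5, 1), today=date(2021, 2, 9)))   # True
```

The weakness is visible in the last line: the gate can only check what the user claims, which is why the promised AI-based cross-check of behaviour matters.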

There are no details on the solution yet, but the Irish data protection authority (the European branch of ByteDance, the owner of TikTok, is legally based in Ireland) will now have to review the privacy implications of such an AI solution (members of MEF can join the AI Framework Roundtable for more details).

TikTok is not alone

TikTok is hardly the only messaging service to fail scrutiny of age verification. The problem cuts across a large number of apps and platforms. In fact, the Italian Authority opened a further investigation into Facebook and Instagram ten days after the TikTok case.

By then the Italian media had reported that the girl who died also had profiles on Instagram and Facebook. The two social networks have 15 days to explain to the Authority how a minor might have had access to their services.

Why do apps include age limitations?

The question of age blocking or age verification is key for many services. In certain countries there are digital services accessible only to adults by law; these could include tobacco, alcohol, gambling, or adult services. However, specific requirements are also in place for many services that are accessed by minors, and messaging or social platforms would generally be part of these. In fact, most messaging apps have clear age limits in their own terms and conditions.

These have often been used as legal protection for the messaging app rather than as genuinely enforced measures to protect consumers. By stating in the terms and conditions that the services are only available to users over a certain age, the service providers can avoid the stricter requirements that can be imposed in certain territories, such as content moderation or content vetting.

Another case of missing digital identity

The core issue of the Internet remains its lack of reliable identity services. This is an issue often addressed by the MEF Personal Data and Identity Workgroup.

There is some good news: solutions are in place, even if they are sometimes not effective or not widely available. A willing service provider will still find an answer in most markets, and in some, they might even find a robust vetting system at scale with seamless customer flows. Let’s be clear: these markets are the exception.

The availability of a commonly accepted and practical digital identity solution is still the missing piece for the global digital economy. It is in the interest of the industry and the regulators to have a reliable identity service in place to protect minors and to support digital innovation.

The age gating solutions available

Age gating solutions are in place, but they are often based on an honor system (the user confirms they are an adult); these are hardly gating services, especially for young kids. Credit card ownership is also used to determine age, but in many countries many adults do not own a credit card, which blocks them from access. More successful examples include federated identity and facial verification.

Some governments are providing digital identity solutions, but fewer open their services to non-governmental services. Federated identities are becoming more common. In some instances, federated age verification services have been used by the porn industry – AgeID is the solution used by the Canadian company MindGeek in countries with age limitations. Here users get a login and password combination to prove their identity after a check by an affiliated partner (either an online check or face to face).
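As a hedged sketch of how a relying service might check such a federated assertion (AgeID’s actual protocol is not described here, so the payload fields, the shared-secret HMAC and the function names are assumptions; production schemes would more likely use public-key signed tokens such as JWTs):

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical shared secret between the age-verification provider and the
# relying service. Real federated schemes generally rely on public-key
# signatures rather than a shared secret.
PROVIDER_SECRET = b"demo-shared-secret"

def verify_age_assertion(assertion_json: str, signature_hex: str) -> bool:
    """Accept the user only if the assertion was issued by the trusted
    provider, has not expired, and states that the age check passed."""
    expected = hmac.new(PROVIDER_SECRET, assertion_json.encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature_hex):
        return False  # tampered with, or not issued by the provider
    claims = json.loads(assertion_json)
    expires = datetime.fromisoformat(claims["expires"])
    return claims.get("over_18") is True and expires > datetime.now(timezone.utc)

# The relying service never sees the user's document or identity details;
# it only trusts the provider's statement that the check (online or face
# to face) was passed.
assertion = json.dumps({"over_18": True, "expires": "2030-01-01T00:00:00+00:00"})
signature = hmac.new(PROVIDER_SECRET, assertion.encode(), hashlib.sha256).hexdigest()
print(verify_age_assertion(assertion, signature))  # True
```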

Biometrics solutions promise to mix the physical and digital world: facial recognition is used to match the image of a user with a physical document. iProov’s genuine presence and liveness solutions enable face verification against government-trusted documentation on smartphones and computers. The advantage of iProov technology, which is used by the US Department of Homeland Security and the UK National Health Service, is that it verifies the identity of the person (are they the right age?) with the assurance that the person attempting to gain access is genuinely that person (and not a child using a photo of an adult to gain access, for example).
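To make that two-part requirement concrete (is the person the right age, and is it genuinely them?), here is a structural sketch only; the stubbed functions are hypothetical placeholders and do not represent iProov’s or any vendor’s API:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional, Tuple

@dataclass
class VerificationResult:
    is_live: bool      # genuine presence: a real person, not a photo or a replay
    faces_match: bool  # the live face matches the photo on the identity document
    meets_age: bool    # the date of birth on the document clears the age limit

    @property
    def verified(self) -> bool:
        # Access is granted only when all three checks pass; this is what
        # stops a child from holding up a photo of an adult.
        return self.is_live and self.faces_match and self.meets_age

# --- hypothetical placeholders, not a real vendor API ----------------------

def passes_liveness_challenge(selfie_frames) -> bool:
    """A real system analyses the capture for genuine, live presence."""
    raise NotImplementedError

def read_identity_document(document_image) -> Tuple[object, date]:
    """Returns (document_photo, birth_date) extracted from a trusted document."""
    raise NotImplementedError

def face_similarity(live_face, document_photo) -> float:
    """Similarity score between the live face and the document photo."""
    raise NotImplementedError

# ----------------------------------------------------------------------------

def verify_user(selfie_frames, document_image, minimum_age: int,
                today: Optional[date] = None) -> VerificationResult:
    today = today or date.today()
    document_photo, birth_date = read_identity_document(document_image)
    # bool counts as 0/1: knock off a year if the birthday has not yet passed
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day))
    return VerificationResult(
        is_live=passes_liveness_challenge(selfie_frames),
        faces_match=face_similarity(selfie_frames[0], document_photo) > 0.9,
        meets_age=age >= minimum_age,
    )
```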

Dario Betti

MEF CEO
