The Federal Trade Commission (FTC) has launched an inquiry into AI companion chatbots over child safety, privacy, and monetization risks. Regulators want to know how companies design, monitor, and profit from these tools, and whether safeguards protect minors. Industry views are split between those who stress the risks and those who see innovation potential. MEF CEO Dario Betti explores why ecosystem players must align on trust, compliance, and safeguards.
On September 11, 2025, the Federal Trade Commission (FTC) opened an inquiry into AI-powered chatbots designed to act as companions. These tools, which use generative AI to simulate conversational intimacy, are increasingly promoted as friends, coaches, or confidants.

The concern from the U.S. regulator is that these systems may exert too much influence on vulnerable users, particularly children and teens, who are more likely to place trust in machine-generated interactions presented as human-like.
The Commission has issued information requests to seven major companies: Alphabet, Character Technologies, Instagram, Meta Platforms, OpenAI, Snap, and X.AI. The inquiry is not an enforcement action but a fact-finding study focused on industry practices. Essentially, the Commission wants to understand how these companies design, monitor, and monetize their products, and what safeguards they have implemented to protect younger users.
The scope of the inquiry is wide-ranging. The FTC has asked for details on how user conversations are processed, how characters are created and approved, what methods companies use to test for possible harms, and how they monitor for risks both before and after deployment. Regulators also want clarity on whether the companies disclose risks to users and parents, how they restrict or manage access by minors, and to what extent they comply with the Children’s Online Privacy Protection Act (COPPA). Another focus is monetization. By asking companies to explain how they profit from user engagement, the Commission is signaling concern that business incentives may be driving longer and potentially more manipulative interactions with young audiences.
The industry response so far illustrates the tension between protection and innovation. Advocacy groups concerned with children’s rights and online safety have largely welcomed the inquiry. Many see companion chatbots as posing greater risks than traditional social media, since they invite more personal disclosures and can simulate an emotional bond that may not be in the user’s interest. Privacy experts argue that the data collected through these systems is especially sensitive, reflecting life struggles, anxieties, or insecurities, and therefore requires stronger safeguards against misuse.
At the same time, some industry voices are worried that regulatory pressure might slow an area of development with potential social value. Companion chatbots are being explored not only for entertainment but also for education, eldercare, and mental health support. From their perspective, the risk is that heavy regulation could discourage new entrants and leave only the largest firms with the resources to comply. Market analysts, however, suggest that regulation could ultimately increase trust in AI services, improving adoption where strong protections are demonstrated.
For the broader mobile ecosystem, this inquiry carries implications that extend beyond the companies directly named. Companion AIs are distributed through the same infrastructure of app stores, messaging channels, and devices that underpin other parts of the digital economy. This means the expectations around accountability will eventually touch operators, integrators, vendors, and developers. These actors may not design chatbots themselves, but they play a central role in enabling access and monetization.
Trust and compliance, therefore, will increasingly function as differentiators. Mobile operators and messaging providers who can demonstrate robust measures to support safe deployment will be better positioned as partners for AI developers and regulators alike. The way ecosystem players handle billing, parental controls, or content classification could become a reference point when regulators assess whether services are responsibly structured.
The inquiry’s attention to monetization models also signals a shift. If regulators find that engagement-driven revenue creates risks for children, they may impose constraints on such models. That outcome could have knock-on effects for mobile monetization practices more generally, particularly for companies that rely heavily on behavioral data or prolonged engagement as primary revenue drivers.
There are also parallels with existing messaging regulation. Digital ecosystems have already had to grapple with issues like grey route traffic, fraud prevention, and the need to establish clear trust frameworks. Those precedents may provide useful lessons for AI companion models. If regulators decide that companion characters or certain conversation contexts are higher risk, operators and vendors may need to build systems that flag or restrict such traffic, similar to how mobile players have adapted to fraud detection requirements.
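To make that analogy concrete, here is a minimal sketch of the kind of rule-based risk filter an operator or vendor might place in front of companion-chat traffic. It is illustrative only: the risk categories, phrase lists, and function names are hypothetical assumptions rather than a description of any named company’s system, and a production deployment would rely on trained classifiers and policy review, not a static wordlist.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    FLAG = "flag_for_review"
    RESTRICT = "restrict"

# Hypothetical risk categories and trigger phrases; a real system would use
# trained classifiers and regulator-informed policy, not a static wordlist.
HIGH_RISK_TOPICS = {
    "self_harm": {"hurt myself", "end it all"},
    "personal_data": {"home address", "school name"},
}

@dataclass
class Message:
    text: str
    user_is_minor: bool  # assumed to come from an upstream age-assurance step

def assess(message: Message) -> Action:
    """Decide how to route a single companion-chat message."""
    lowered = message.text.lower()
    hits = [
        topic
        for topic, phrases in HIGH_RISK_TOPICS.items()
        if any(phrase in lowered for phrase in phrases)
    ]
    if not hits:
        return Action.ALLOW
    # Stricter handling for minors mirrors the inquiry's COPPA focus.
    return Action.RESTRICT if message.user_is_minor else Action.FLAG

if __name__ == "__main__":
    msg = Message(text="What's your home address?", user_is_minor=True)
    print(assess(msg).value)  # restrict
```

The shape of the pipeline, classifying each message and then routing by risk and by whether the user is a minor, is the part that carries over from fraud detection; the classification technique itself would differ.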
There is also a wider responsibility to frame this discussion constructively. Companion AIs have the potential to deliver positive social value, but only if developed with foresight and adequate safeguards. Finding the balance between innovation and protection is not simply the job of regulators; it requires active engagement from across the ecosystem. Industry bodies like MEF can play a role in convening stakeholders, promoting principles such as consent-by-design, transparency, and authentication, and ensuring that mobile operators, messaging providers, and developers align around common standards.
The FTC inquiry should be seen as part of a longer process rather than a one-time intervention. Its findings will shape both the conversation within the U.S. and the broader trajectory of how companion AI is developed and governed. For the mobile ecosystem, the lesson is that responsibility does not begin or end with the chatbot developers themselves. It is shared across the value chain. By helping to define trust standards and ensuring compliance in distribution, monetization, and safeguarding, the mobile industry can show leadership in building a sustainable environment for these new technologies.
In that sense, the development of AI companions is not just about software. It is about the systems, partnerships, and rules that allow innovation to deliver benefits without undermining user trust. The FTC’s inquiry is a reminder that ecosystem players must think proactively about their role in that balance.