- The FTC has ordered seven major tech companies to share details on AI chatbot safety and monetization within 45 days.
- The probe addresses the risks AI chatbots pose to children and teens, particularly exposure to inappropriate content.
- The companies must explain how they handle user data by age group and what safeguards protect minors from harmful interactions.
- Recent testing logged 669 harmful AI interactions with children in just 50 hours.
- Child-safety concerns have already prompted lawsuits and calls for stricter regulation from advocacy groups and state attorneys general.
The Federal Trade Commission (FTC) has issued orders to seven technology companies demanding detailed information on their artificial intelligence (AI) chatbot safety measures and monetization methods. The recipients are OpenAI, Alphabet, Meta, xAI, Snap, Character Technologies, and Instagram; each has 45 days to respond.
The FTC’s move comes after reports of AI chatbots exposing minors to inappropriate interactions. According to the commission, companies must provide monthly data on user engagement, revenue, and safety incidents, separated by age groups: children under 13, teens 13–17, minors under 18, young adults 18–24, and users 25 and older.
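Note that the bands overlap: "minors under 18" spans the first two groups, so the same user can appear in more than one cohort. Below is a minimal sketch of how a company might bucket users for this kind of reporting; all names are hypothetical and not taken from the FTC order.

```python
from datetime import date

# Hypothetical mapping from a user's age to the FTC order's reporting
# bands. The bands overlap by design: "minors under 18" spans the first
# two groups, so one user can land in several cohorts at once.
AGE_BANDS = [
    ("children_under_13", lambda age: age < 13),
    ("teens_13_17", lambda age: 13 <= age <= 17),
    ("minors_under_18", lambda age: age < 18),
    ("young_adults_18_24", lambda age: 18 <= age <= 24),
    ("adults_25_plus", lambda age: age >= 25),
]

def reporting_bands(birthdate: date, today: date) -> list[str]:
    """Return every reporting band the user belongs to as of `today`."""
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return [name for name, test in AGE_BANDS if test(age)]

# An 11-year-old falls into two cohorts, so per-band incident counts
# would double-count this user unless the report deduplicates.
print(reporting_bands(date(2014, 6, 1), today=date(2025, 9, 12)))
# -> ['children_under_13', 'minors_under_18']
```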
Safety advocates have raised concerns following research showing 669 harmful chatbot interactions with minors in just 50 hours of testing, including bots proposing dangerous activities to users as young as 12. The FTC wants companies to reveal how they prevent such issues, track user inputs and outputs, and create or approve AI characters.
In a statement, FTC Chairman Andrew Ferguson said, “Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy.” The order also requires companies to disclose how they enforce age-based restrictions and monitor for negative effects.
Experts say technical safeguards, such as filtering inappropriate chatbot responses and training models on values-aligned data, can help. Taranjeet Singh, Head of AI at SearchUnify, stated, “As the context grows, the AI becomes prone to not following instructions and slipping into grey areas where they otherwise shouldn’t.”
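Here is a minimal sketch of the response-filtering idea, assuming a simple regex blocklist with a stricter rule set for minors. Every identifier is hypothetical, and production systems generally rely on trained safety classifiers rather than pattern lists.

```python
import re

# Minimal sketch of response-side filtering with stricter rules for
# minors. Real deployments typically use trained safety classifiers,
# not regex blocklists; every name here is hypothetical.
GENERAL_BLOCKLIST = [
    re.compile(r"\b(build|make)\s+a\s+weapon\b", re.I),
    re.compile(r"\bself[- ]harm\b", re.I),
]
MINOR_BLOCKLIST = GENERAL_BLOCKLIST + [
    re.compile(r"\b(romantic|sexual)\b", re.I),
]

SAFE_FALLBACK = (
    "I can't help with that. If something is bothering you, please "
    "talk to a trusted adult."
)

def filter_response(draft: str, user_is_minor: bool) -> tuple[str, bool]:
    """Return (reply, was_blocked); swap in a safe fallback on a match."""
    rules = MINOR_BLOCKLIST if user_is_minor else GENERAL_BLOCKLIST
    if any(rule.search(draft) for rule in rules):
        return SAFE_FALLBACK, True
    return draft, False

reply, blocked = filter_response("Let's keep this romantic...", user_is_minor=True)
print(blocked)  # -> True
```

Returning a blocked flag alongside the reply also gives the system a natural hook for logging the kind of safety-incident counts the FTC order asks companies to report.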
A recent lawsuit against Character.AI drew attention after a 14-year-old died by suicide following an obsessive relationship with an AI bot. The company responded by updating its detection, response, and user intervention methods.
Last month, the National Association of Attorneys General asked 13 AI companies for stronger protections for children. The group wrote that “exposing children to sexualized content is indefensible” and “conduct that would be unlawful—or even criminal—if done by humans is not excusable simply because it is done by a machine.”
The FTC said that the data will help monitor how AI companions operate and protect children online. Decrypt requested comments from all seven companies listed in the FTC order.