The U.S. Federal Trade Commission announced Thursday it has opened an investigation into AI-powered chatbots that serve as digital companions, citing potential risks to minors as these platforms grow in popularity and sophistication.
The consumer protection agency issued formal orders to seven major technology companies, including Alphabet, Meta, OpenAI and Snap, demanding detailed information about how they monitor and mitigate harmful effects from chatbots designed to simulate human relationships.
The orders also went to Character.AI and Elon Musk's xAI Corp, among other companies operating consumer-facing AI chatbots.
"Protecting kids online is a top priority for" the FTC, Chairman Andrew Ferguson said, while noting the need to preserve American leadership in artificial intelligence development.
The inquiry focuses on chatbots that use generative AI technology to mimic human conversation and emotional responses, often positioning themselves as friends or confidants to users. Federal regulators worry that children and teenagers may be particularly susceptible to forming attachments to these AI systems.
Under its broad investigative authority, the FTC will examine how companies profit from user engagement, create chatbot personalities and assess potential psychological harm. The agency also seeks information on what measures firms have implemented to restrict minors' access and to comply with the Children's Online Privacy Protection Act.
The probe comes amid growing concerns about AI chatbots' psychological impact on vulnerable users. Last month, the parents of Adam Raine, a 16-year-old who died by suicide in April, filed a lawsuit against OpenAI, accusing ChatGPT of giving their son detailed instructions on how to take his own life.
Following the lawsuit, OpenAI announced it was developing corrective measures for its chatbot. The San Francisco-based company acknowledged that its safeguards can degrade during prolonged conversations, in which ChatGPT may stop consistently directing users who voice suicidal thoughts to mental health services.
The commission voted unanimously to issue the orders under Section 6(b) of the FTC Act, a study authority that does not target specific law enforcement action but could inform future regulatory measures.
The investigation will also examine how platforms handle personal information gleaned from user conversations and enforce age restrictions as chatbots grow increasingly capable of simulating meaningful relationships with users.