“Meta Introduces Tools to Monitor Youth Chatbot Interactions”

Concerns are rising over young people's interactions with AI chatbots, prompting Meta to introduce new tools that let parents monitor their children's chatbot conversations; some provinces are even considering banning chatbot use by youth. Parents who use Meta's Teen Accounts supervision feature on Facebook, Instagram and Messenger can see the topics their children have discussed with the AI chatbot over the past week, including health and well-being. Meta is also developing alerts to notify parents if their teenagers try to discuss suicide or self-harm with the chatbot.

Meanwhile, provincial governments are moving to restrict the use of AI chatbots. Manitoba recently announced plans to prohibit youth from using AI chatbots and social media. B.C.'s Attorney General Niki Sharma said that unless the federal government imposes protections for youth on AI chatbots and social media, the provincial government may act on its own.

In a bid to hold AI companies accountable, families of victims of a mass shooting in Tumbler Ridge, B.C., filed a lawsuit against OpenAI, alleging negligence for failing to report disturbing content the shooter shared with ChatGPT. A separate lawsuit alleged that ChatGPT use contributed to a teenager's suicide.

Research is uncovering potential mental health risks associated with heavy use of AI chatbots, particularly among younger users. Psychiatrist Darja Djordjevic cautioned against relying on chatbots for mental health support, citing safety concerns. While chatbots may respond appropriately to brief mental health prompts, prolonged conversations can pose risks because the systems are designed primarily for engagement, not support.

Young people are increasingly turning to AI for companionship, with a significant share using it for emotional support and mental health conversations. Because young brains are still developing, researchers say it is crucial that chatbots clearly communicate their limitations. They also emphasize the need to identify risk factors in chatbot interactions, including prolonged conversations and attributing sentience to the chatbot.

Psychiatrist John Torous highlighted patterns of user behaviour linked to severe harms such as suicide, and advised parents to watch their children's chatbot use for problematic patterns. Practical steps include resetting the chatbot's memory and setting time limits on app use. Torous stressed that the intersection of chatbots and mental health must be studied continuously as new models emerge.
