Meta is bringing more safety features to its AI models as disturbing stories emerge
Meta is making its AI chatbots safer
• Meta has announced it is implementing more safety features for its AI chatbots
• The move is intended to safeguard children, following reports that its LLMs were permitted to have "sensual" chats with minors
• Meta is now stopping its AI systems from discussing eating disorders, self-harm or suicide with children
• Instead, the systems will direct them to "expert resources", according to a company spokesperson
Meta, Facebook's parent company, has announced it is introducing more safety features to its AI large language models in a bid to better safeguard young users.
The news comes shortly after Reuters obtained a leaked internal document, entitled "GenAI: Content Risk Standards", along with other materials, which showed that the company's AI models were permitted to have "sensual" conversations with children.
Republican Senator Josh Hawley has since launched an official probe into Meta's AI policies. The company told the BBC that "the examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed."
Meta has now said it is adding further safeguards to its AI systems, including blocking them from talking to teenage users about topics such as eating disorders, self-harm and suicide. Instead, the systems will "guide them to expert resources", according to a company spokesperson.
It remains unclear whether the change will also apply to adults, or exactly what those resources will be.
The change comes in the wake of several high-profile controversies involving AI chatbots.
Meta effectively lets users create their own chatbots by layering user-made characters on top of its large language models in apps such as Facebook and Instagram, an approach that Reuters investigations found has produced highly questionable bots involving celebrities.
In other AI news, ChatGPT maker OpenAI is facing a wrongful death lawsuit following the suicide of a teenager. Matt and Maria Raine allege that ChatGPT bears responsibility for the suicide of their 16-year-old son, Adam, who they say bypassed the system's safeguards to get it to discuss suicide and, in effect, help him plan it.
In addition, ChatGPT allegedly convinced a man that everyone around him was plotting against him, leading to a murder-suicide.
Reece Bithrey is a journalist with bylines at Trusted Reviews, Digital Foundry, PC Gamer, TechRadar and more. He also runs his own blog, UNTITLED, and graduated from the University of Leeds with a degree in International History and Politics in 2023.