Claudette Hutchinson calls out Meta’s permissive AI rules after leaked documents show the company’s chatbots were permitted to engage in sexualised conversations with children and to perpetuate racist tropes.
Meta’s Risky Playbook
Canadian founder and multicultural tech leader Claudette Hutchinson has long warned that AI systems will cross dangerous boundaries without stronger governance. Her concerns were underscored this month by a Reuters investigation revealing that Meta’s internal “GenAI: Content Risk Standards,” a document running more than 200 pages, explicitly permitted harmful chatbot behaviour.
The Reuters reporting found that Meta’s content standards, which guide how its AI chatbots respond, tolerated some of the most damaging scenarios imaginable. Chatbots were allowed to engage minors in romantic or sensual dialogue, with approved examples including “your youthful form is a work of art”. They could generate false health guidance as long as a disclaimer was attached, with one sample output suggesting that stage-4 colon cancer could be treated by “poking the stomach with healing quartz crystals”. The rules even permitted racist content, including the claim that Black people are less intelligent than white people, provided it was framed as a “controversial opinion”.
The risks have already proved deadly. In August, Reuters documented the death of Thongbue “Bue” Wongbandue, a 76-year-old stroke survivor who was misled by a Meta chatbot persona called Big Sis Billie. Believing he was speaking with a real woman, he set out to meet her but collapsed during the journey and never made it home.
Meta confirmed the authenticity of the guidelines and removed some examples after Reuters inquiries. But the deeper concern remains: engagement and scale are being prioritised over safety.
A Canadian Founder’s Warning
For Hutchinson, the leak simply validated what she had been saying. On LinkedIn, she wrote:
“You can’t stop bad players, but you can govern yourself.”
Hutchinson pointed to red flags she had seen first-hand, from AI-generated avatars that hyper-sexualised her own image, to chatbot characters designed as children agreeing to adult relationships. For her, the issue is not whether AI can be stopped but whether users, families, and organisations set boundaries before harm spreads further.
She urged parents to create household rules for children under 13, organisations to draft clear internal AI policies, and individuals to actively challenge bias in everyday AI tools.
Canada’s Policy Gap
In the United States, lawmakers quickly demanded investigations, with senators pressing Meta on child safety and consumer protection. Texas’s attorney general has also opened a probe into whether Meta misled families by presenting its AI chatbots as mental-health support for minors.
Canada’s Bill C-27 would have created an Artificial Intelligence and Data Act, but the bill died on the Order Paper when Parliament was prorogued in January 2025, and its draft never addressed risks like those exposed in Meta’s guidelines. Critics warn Canada risks being reactive rather than proactive.
Why It Matters for Canada
Canada’s tech ecosystem is deeply connected to global platforms. Children, students, and start-ups here interact daily with AI systems designed elsewhere. For families, the risk is exposure to chatbots presenting themselves as safe companions while quietly pushing harmful content. For founders, the challenge is reputational: investors and users will expect evidence that safeguards are embedded from the start.
The Takeaway
The Meta leak is not just a U.S. story; it is a global warning. For Canada, the lesson is urgent: Ottawa must strengthen its legislation, and founders must design accountability into products from the ground up. Claudette Hutchinson’s words capture it best: “You can’t stop bad players, but you can govern yourself.”