The Concern Is Real — and Growing
Canada is increasingly worried about what happens when teenagers spend hours talking to AI chatbots. From venting about school stress to navigating heartbreak, young people are turning to tools like Meta AI and ChatGPT for emotional support — and researchers, parents, and legislators are starting to ask whether that's a good idea.
The conversation is no longer hypothetical. Manitoba has proposed banning AI chatbot use for youth, making it one of the first Canadian provinces to consider such a restriction. The proposal reflects a growing unease that these tools — designed for broad audiences — may not be equipped to handle the emotional complexity and vulnerability of teenage users.
Meta's Parental Controls: A Partial Answer
Arriving at roughly the same moment as Manitoba's proposal, Meta is rolling out a new feature that allows parents to monitor the topics their children discuss with Meta AI. The tool is part of a broader push by tech companies to appear proactive about youth safety without fundamentally restricting access to their platforms.
But critics argue that parental monitoring tools only go so far. Not every teenager has engaged parents watching their digital activity. And oversight tools don't address the core question: whether AI chatbots, by their very design, create unhealthy emotional dependencies in young people.
What the Research Says — and Doesn't
The honest answer is that researchers are still catching up; the technology has scaled faster than the science. What early evidence does suggest is that there are real mental health risks, particularly for teens already struggling with anxiety, depression, or loneliness.
AI chatbots are endlessly available, endlessly patient, and never push back the way a real friend or counsellor might. For some teens, that consistency feels comforting. For others, it may reinforce avoidance of real human connection — a concern mental health professionals are starting to flag more loudly.
There's also the question of what the bots say. Unlike a trained therapist, AI models can give inconsistent, unqualified responses to sensitive topics like self-harm, eating disorders, or suicidal ideation. Even well-intentioned responses can miss the mark when the stakes are high.
What Should Actually Happen?
Experts are calling for a multi-pronged approach: more independent research funded by governments rather than tech companies, clearer age-appropriate design standards, mandatory mental health guardrails baked into AI products, and better digital literacy education in schools.
Manitoba's proposed ban is blunt, and whether it would survive legal challenge — or actually be enforceable — remains to be seen. But it signals that some Canadian legislators are no longer content to wait for Silicon Valley to self-regulate.
For parents navigating this right now, the guidance is nuanced: AI chatbots aren't inherently dangerous, but they're not substitutes for real relationships or professional mental health support. Knowing what your teen is using — and talking openly about it — matters more than any platform-level parental control.
The bigger question Canada is sitting with: when a technology moves this fast, who is actually responsible for protecting the kids caught in its path?
Source: CBC News / CBC Business
