
N.L. Premier Vows to Tighten AI Policy After 6-Fingered Woman Photo Slip

Newfoundland and Labrador is under scrutiny after an AI-altered image made it onto an official government Facebook page. Premier Tony Wakeham says it's time to enforce stricter rules around how artificial intelligence is used in public communications.

ottown · 3 min read

A Telling Slip-Up

When a public service announcement from the Newfoundland and Labrador government landed on Facebook recently, most people probably gave it a passing scroll — until someone noticed the woman in the image had six fingers on one hand.

It's the kind of detail that's easy to miss at first glance, but impossible to unsee once you spot it. And it's become the latest example of what happens when AI-generated imagery gets used without a careful second look.

Premier Calls for Tighter Oversight

Newfoundland and Labrador Premier Tony Wakeham didn't mince words after the image surfaced. He acknowledged the government needs to "tighten up" its approach to AI use and pledged to make sure existing policies are actually being enforced.

The incident highlights a growing tension in public sector communications: AI image tools are fast, cheap, and increasingly convincing — but they're also prone to the kind of anatomical glitches (extra fingers, warped hands, uncanny backgrounds) that can undermine credibility fast.

"The technology moves quickly, and clearly our oversight hasn't kept pace," is the message Wakeham is implicitly sending, even if the conversation is uncomfortable.

Why This Keeps Happening

AI image generators like DALL-E, Midjourney, and Stable Diffusion have made it trivially easy for anyone — including government communications teams — to produce polished-looking visuals in minutes. The problem is that these tools still routinely struggle with hands, fingers, and fine anatomical details.

For a government department working under deadline pressure and budget constraints, the temptation to grab a quick AI image instead of licensing stock photography or commissioning original work is understandable. But the trade-off in public trust can be significant.

When a government PSA — meant to reassure or inform the public — turns out to contain a quietly surreal AI artifact, it raises questions that go beyond the image itself: Who approved this? Does anyone check? What else is being generated and posted without proper review?

A Teachable Moment for Public Institutions Across Canada

The N.L. incident isn't an isolated one. Public institutions, news outlets, and brands across Canada and beyond have faced similar embarrassments as AI-generated content enters the mainstream. The common thread is usually the same: insufficient review before publication.

Government communications teams are now being pushed to develop clear frameworks for AI use — not just banning or allowing it wholesale, but specifying when it's appropriate, what review steps are required, and who holds accountability when something slips through.

Premier Wakeham's response signals that the conversation is moving beyond "should we use AI?" to the more practical "how do we use it responsibly?" — a shift that other provincial governments will likely be watching closely.

What Good AI Policy Looks Like

Experts in digital communications suggest a few baseline practices: mandatory human review of all AI-generated images before publication, clear internal tagging of AI-assisted content, and regular staff training on how to spot common AI artifacts.

Six fingers on a PSA might seem like a small thing. But in an era when public trust in institutions is already fragile, getting the basics right matters more than ever.

Source: CBC News
