YouTube Opens AI Deepfake Scanner to Everyone
YouTube has announced it is expanding its AI-powered likeness detection program to all users over the age of 18 — meaning virtually anyone on the platform can now use it to hunt for deepfakes of themselves.
The tool works through a selfie-style facial scan. Users submit a photo of their face, and YouTube's AI then combs through the platform looking for videos that appear to feature a lookalike. If the system finds a potential match, YouTube alerts the user, who then has the option to request that the content be removed.
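YouTube hasn't published how its matching works under the hood, but a common approach to likeness detection is to compare face embeddings against a reference image. The sketch below is a hypothetical illustration of that general idea only; the function names (embed_face, is_likeness_match), the placeholder embedding, and the similarity threshold are all assumptions for illustration, not YouTube's actual system.

```python
# Hypothetical sketch of a likeness-matching step: compare a reference selfie
# against a face detected in a video frame using embeddings and a similarity
# threshold. All names and values here are illustrative assumptions.
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Stand-in for a face-embedding model that maps a face crop to a
    fixed-length, unit-normalised vector."""
    vec = np.resize(image.astype(np.float32).ravel(), 128)  # placeholder features
    return vec / (np.linalg.norm(vec) + 1e-9)

def is_likeness_match(reference_selfie: np.ndarray,
                      candidate_face: np.ndarray,
                      threshold: float = 0.85) -> bool:
    """Flag a candidate face as a potential likeness when the cosine similarity
    of its embedding to the reference embedding exceeds the threshold."""
    similarity = float(np.dot(embed_face(reference_selfie),
                              embed_face(candidate_face)))
    return similarity >= threshold
```

In a real system the placeholder embedding would be a trained face-recognition model, and the threshold would be tuned to balance missed matches against false alarms, which is where user review of flagged videos comes in.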
From Creators to the General Public
YouTube didn't roll this out to everyone overnight. The company began testing the feature with content creators — people most at risk of having their likeness used without consent — before gradually widening access to government officials, politicians, and journalists. Now, that circle has expanded to include all adult users.
The move reflects growing pressure on major platforms to take non-consensual AI-generated imagery (NCAI) seriously. Deepfake technology, which can convincingly swap or synthesize faces in videos, has become increasingly accessible, raising concerns about harassment, misinformation, and identity-based abuse.
How Big Is the Problem, Really?
YouTube has noted that the number of removal requests generated through the program has been "very small," suggesting either that the tool isn't finding many matches or that most users who receive alerts choose not to act on them. Still, the company's decision to offer the tool broadly signals a recognition that the threat is real enough to warrant platform-wide protection.
For context, the rise of generative AI tools has made it easier than ever to create convincing fake videos featuring real people. While much of the public discourse has focused on celebrity deepfakes or political misinformation, everyday users can also be targeted — particularly in cases involving harassment or revenge-based content.
What Happens When a Match Is Found?
When YouTube's system flags a potential likeness match, users receive a notification. From there, it's up to the individual to decide whether to submit a formal removal request. YouTube then reviews the flagged content under its existing policies around manipulated media and synthetic content.
The process is designed to give users agency without requiring them to manually search the platform themselves, something that would be nearly impossible at YouTube's scale, where more than 500 hours of video are uploaded every minute.
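To make the reported flow concrete, here is a minimal, hypothetical sketch of the sequence described above: the platform notifies the user, the user decides whether to file a removal request, and only then does the flagged video go to policy review. The Status values, the LikenessCase type, and handle_user_decision are assumptions for illustration, not anything YouTube exposes.

```python
# Minimal sketch of the reported user-driven review flow. Types and names are
# illustrative assumptions, not YouTube's API.
from dataclasses import dataclass
from enum import Enum, auto

class Status(Enum):
    NOTIFIED = auto()           # user alerted to a potential likeness match
    REMOVAL_REQUESTED = auto()  # user opted to file a removal request
    UNDER_REVIEW = auto()       # platform reviews under manipulated-media policies
    NO_ACTION = auto()          # user chose not to act on the alert

@dataclass
class LikenessCase:
    video_id: str
    status: Status = Status.NOTIFIED

def handle_user_decision(case: LikenessCase, request_removal: bool) -> LikenessCase:
    """Advance a flagged case based on the user's choice; removal is initiated
    by the user, not automatically by the platform."""
    if request_removal:
        case.status = Status.REMOVAL_REQUESTED
        case.status = Status.UNDER_REVIEW  # hand off to policy review
    else:
        case.status = Status.NO_ACTION
    return case
```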
A Broader Industry Push
YouTube's expansion comes as regulators in multiple jurisdictions — including the European Union and several U.S. states — are pushing for stronger rules around synthetic media. Canada, too, has been examining how existing privacy and defamation laws apply to AI-generated content, with calls for federal legislation growing louder.
For now, YouTube's approach is voluntary and reactive: users have to opt in to be scanned, and removal only happens after a request is made. Critics argue that a more proactive approach — automatically flagging or labelling synthetic content at upload — would offer stronger protection. But for millions of users who never thought they'd need to worry about deepfakes, having a tool at all is a start.
Source: The Verge
