AI Watch: YouTube expands likeness detection tool to curb AI-impersonation
FILE/REUTERS
YouTube on Tuesday announced that it has expanded its likeness detection tool to protect more people's identities from AI impersonation.
“As AI-generated content evolves, the individuals at the center of these conversations need reliable tools to protect their identities,” YouTube said in a statement on Tuesday.
Launched in September 2025, the likeness detection tool will now apply to government officials, journalists, and political candidates.
“We’re starting with this cohort to ensure the tool meets their unique needs, with plans to significantly expand access over the coming months.”
Previously, the tool was available only to creators in the YouTube Partner Program.
“It looks for a participant's likeness in AI-generated content, and if a match is found—like a deepfake of their face—the individual can review the content and request removal if it violates our privacy guidelines,” YouTube explained.
However, the platform says the tool is not intended to curtail free expression such as satire, so content of that kind will be evaluated when removal requests are made.
To prevent misuse of the tool, YouTube restricts removal requests to government officials, journalists, political candidates, and creators in the Partner Program.
Participants must enroll in the likeness detection tool; their enrollment data is then used to verify their identity before action is taken on AI impersonations of them.
The AI-impersonation problem
AI impersonations have become common, especially on social media platforms, where deepfakes of prominent people are shared for various reasons. While some create AI impersonations for satirical purposes, others use them for scams, cyberbullying, or disinformation.
In January 2026, xAI’s chatbot Grok faced backlash for generating non-consensual, explicit deepfake images that were shared on X.
The platform faces lawsuits for allowing the chatbot to be used to create explicit content, including depictions of minors.
Recently, writing assistant Grammarly has also found itself in trouble for cloning established authors.
Grammarly faces a lawsuit over its AI-powered 'Expert Review' feature, which is alleged to have impersonated authors and academics without their permission.
Unveiled this week, the feature uses AI agents that let users generate text revisions mimicking subject-matter experts.
Grammarly promoted the feature as applying “rigorous academic or professional standards” to text revisions.