Deepfake ads are flooding India and brands are not ready
90% of Indians have seen fake AI celebrity endorsements. Deepfake ad scams cost victims ₹34,500 on average. India's new regulations and what brands should do now.
Somewhere right now, a convincing video of Shah Rukh Khan is promoting a cryptocurrency scheme he has never heard of. An AI-generated Alia Bhatt is endorsing a skincare brand she has no connection to. And a deepfake Elon Musk, speaking fluent Hindi, is promising guaranteed stock returns to viewers who click a very specific link.
This is not a hypothetical scenario. It is happening at industrial scale across India’s digital advertising ecosystem. And the numbers are staggering.
The scale of the problem
According to a McAfee survey, approximately 90% of Indians have encountered fake or AI-generated celebrity endorsements online. That is not a niche problem. That is a systemic failure of the digital advertising environment that nearly every internet user in the country navigates daily.
The financial damage is real and measurable. Victims of deepfake celebrity endorsement scams lose an average of ₹34,500 per incident, according to Business Standard’s reporting. Scale that number across the hundreds of millions of Indian internet users who encounter these scams, and you are looking at one of the largest consumer fraud categories in the country.
The growth trajectory is equally alarming. Deepfake content globally has increased by 900% in recent years, according to Rediff. In India specifically, 65% of organisations experienced deepfake-driven attacks in 2025, with 55% suffering reputational damage from AI-generated misinformation. The tools that create these deepfakes are becoming cheaper, faster, and more accessible. A convincing audio deepfake can now be generated from as little as three seconds of source audio.
And here is the detail that should alarm every brand manager in India: these scams are not amateur productions. They run as paid advertisements on legitimate platforms — Facebook, Instagram, YouTube — using the same ad infrastructure that legitimate brands use. The scammer’s deepfake ad sits in the same feed, uses the same targeting tools, and competes in the same auction as your real brand campaign.
Why this is a brand safety crisis, not just a consumer protection issue
It is tempting to treat deepfake ads as purely a law enforcement problem. Scammers scam; regulators regulate. But for brands, the implications go far deeper than that.
Brand contamination. When a deepfake uses a celebrity who has a legitimate endorsement deal with your brand, the scam damages the credibility of every real ad featuring that celebrity. If consumers have seen three fake Shah Rukh Khan endorsements this week, they are less likely to trust the real one. The trust erosion we documented in AI-powered advertising extends directly here — consumers increasingly cannot distinguish real ads from AI-generated fakes, and 69% already feel manipulated by undisclosed AI in advertising.
Platform liability gap. The platforms where these deepfake ads run — Meta’s Facebook and Instagram, Google’s YouTube — are the same platforms where brands spend billions on legitimate advertising. But the platforms’ AI-powered ad review systems are demonstrably failing to catch deepfake ads before they reach consumers. The irony is bitter: the same AI that powers automated ad creation through Advantage+ and AI Max apparently cannot reliably detect when an ad is a deepfake scam.
Legal exposure for brands. Here is a scenario most brand lawyers have not fully considered. A deepfake ad uses your brand logo alongside a fake celebrity endorsement. A consumer is defrauded. Under India’s consumer protection framework, the consumer may pursue action against the brand whose logo appeared in the ad — even if the brand had nothing to do with it. The reputational and legal cost of proving non-involvement can be substantial.
The Advertising Standards Council of India (ASCI) has specifically flagged deepfake advertisements featuring prominent individuals endorsing products without consent. But ASCI’s enforcement mechanisms are voluntary and reactive. By the time a complaint is investigated, the deepfake ad has often already run its course, reached millions, and caused its damage.
India’s new deepfake regulations: What actually changed
India’s regulatory response in 2026 represents a significant escalation, though it remains to be seen whether enforcement will match the ambition.
Mandatory labelling. Platforms are now required to clearly label AI-generated or manipulated content using visible tags, disclaimers, or embedded metadata. This applies to both user-generated content and paid advertising. The intent is transparency — consumers should know when they are watching AI-generated material.
Three-hour takedown mandate. Social media companies must remove unlawful deepfake material within three hours of receiving official directives. This is a dramatic reduction from previous timelines and reflects the government’s recognition that deepfake content goes viral within minutes, not days. Whether platforms can operationally deliver on this commitment at scale remains an open question.
Platform safe harbour at risk. Failure to comply with the updated norms can result in platforms losing their legal protections under India’s IT laws. This is the most significant lever — it transforms platform accountability from a reputational concern into an existential legal risk.
DPDP Act enforcement. India’s Digital Personal Data Protection Act, which we covered in the context of AI advertising automation, directly applies to deepfakes. Using someone’s likeness without consent constitutes unauthorised processing of personal data, punishable with fines up to ₹250 crore. The Act empowers individuals with rights to data correction, erasure, and consent withdrawal — all directly relevant to deepfake victims.
The regulatory trifecta — IT Act amendments, DPDP Act, and Bharatiya Nyaya Sanhita (BNS) 2023 — creates a framework where deepfakes used for defamation, impersonation, or commercial deception are clearly illegal. The gap is not in the law. It is in the enforcement.
How this compares to global approaches
India’s approach focuses primarily on regulating the aftermath — takedowns, penalties, and liability. Other jurisdictions are tackling the problem from the creation side as well.
The EU AI Act, which comes into full effect in August 2026, categorises certain AI advertising applications as “high-risk” and mandates that advertisers clearly identify AI-generated content. The No Fakes Act in the US specifically protects individuals from unauthorised digital replicas. New York has already enacted legislation requiring “conspicuous disclosure” for synthetic performers in advertisements.
The UK’s Advertising Standards Authority has gone further on enforcement, deploying an AI-powered Active Ad Monitoring System to proactively review 40 million advertisements in 2026. India has no equivalent proactive monitoring system.
This enforcement gap matters for Indian advertisers. If you are running campaigns in both India and the EU, your compliance requirements already diverge. A creative that is legally acceptable in India — where labelling enforcement is still developing — may violate the EU AI Act’s transparency obligations. Brands operating across markets need a single, consistent policy that meets the highest standard.
What brands should do right now
1. Implement continuous deepfake monitoring
Deploy or contract AI-powered brand monitoring tools that specifically scan for unauthorised use of your brand assets — logos, product images, celebrity ambassadors, and brand colour schemes — across major ad platforms. The cost of monitoring is a fraction of the cost of a successful deepfake campaign running unopposed for days or weeks.
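The tooling varies by vendor, but the core matching step in most of these systems is a perceptual-hash comparison: reduce an image to a compact fingerprint, then flag candidates whose fingerprint sits close to a registered brand asset. The sketch below is illustrative only — it is not any vendor’s implementation — and uses a simple average hash over a small grayscale pixel grid (in practice the grid would come from a resized 8x8 version of the image):

```python
from typing import List

def average_hash(pixels: List[List[int]]) -> int:
    """Compute a simple average hash ("aHash") over a grayscale
    pixel grid (values 0-255). Each pixel contributes one bit:
    1 if it is at or above the grid's mean brightness, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def looks_like_brand_asset(candidate: List[List[int]],
                           registered_hashes: List[int],
                           threshold: int = 5) -> bool:
    """Flag a candidate image if its hash lands within `threshold`
    bits of any registered brand-asset hash. Small alterations
    (recolouring, mild noise) barely move the hash, which is why
    this family of techniques works for logo detection."""
    h = average_hash(candidate)
    return any(hamming(h, r) <= threshold for r in registered_hashes)

# Example: a registered logo grid and a lightly altered copy of it.
logo = [[200, 200, 10, 10], [200, 200, 10, 10],
        [10, 10, 200, 200], [10, 10, 200, 200]]
altered = [[190, 205, 15, 5], [198, 199, 12, 8],
           [11, 9, 201, 195], [14, 6, 210, 190]]
registered = [average_hash(logo)]
print(looks_like_brand_asset(altered, registered))  # True
```

A production system would run this kind of check continuously against creatives scraped from ad libraries and feeds, escalating matches to a human reviewer before filing a takedown.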
2. Register your brand assets with platform protection programmes
Both Meta and Google offer brand protection tools that allow rights holders to flag and expedite removal of infringing content. These tools are underutilised by Indian brands. Given the three-hour takedown mandate, having pre-registered assets gives you a significant advantage in response time.
3. Brief your celebrity ambassadors and their legal teams
If your brand uses celebrity endorsements, ensure the celebrity’s team is aware of deepfake risks and has a protocol for identifying and reporting fake endorsements. A coordinated response — brand, celebrity, and platform — is far more effective than isolated complaints.
4. Audit your own AI-generated creative for compliance
As AI automation takes over more of the creative process, your own ads may increasingly contain AI-generated elements. Under the new regulations, these require clear labelling. Audit your creative workflows to ensure every AI-generated asset — image, video, copy — is properly disclosed.
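What that audit looks like depends on your asset pipeline, but the check itself can be mechanical. A minimal sketch, assuming a hypothetical asset manifest in which each creative records whether AI was used and what disclosure label it carries:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Creative:
    asset_id: str
    kind: str                  # "image", "video", or "copy"
    ai_generated: bool
    disclosure_label: str = ""  # e.g. "AI-generated content"

def audit_disclosures(creatives: List[Creative]) -> List[str]:
    """Return the asset IDs of AI-generated creatives that are
    missing a disclosure label and therefore need review before
    the campaign goes live."""
    return [c.asset_id for c in creatives
            if c.ai_generated and not c.disclosure_label.strip()]

campaign = [
    Creative("hero-video-01", "video", ai_generated=True,
             disclosure_label="AI-generated content"),
    Creative("banner-07", "image", ai_generated=True),   # missing label
    Creative("tagline-03", "copy", ai_generated=False),
]
print(audit_disclosures(campaign))  # ['banner-07']
```

Wiring a gate like this into the creative approval workflow turns labelling compliance from a periodic legal review into a routine pre-flight check.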
5. Build first-party audience trust as a competitive moat
In an environment where consumers cannot trust the ads they see in their feeds, brands with strong first-party relationships — email subscribers, app users, loyalty programme members — have a structural advantage. These audiences already trust the brand. They are less susceptible to deepfake scams, and they do not depend on platform-mediated discovery.
This connects directly to the first-party data strategies we covered earlier. In a deepfake-polluted advertising environment, the brands that own their audience relationships are the brands that survive.
What this really means
The deepfake advertising crisis in India is a symptom of a broader problem: the digital advertising infrastructure was built for speed, scale, and automation — but not for trust. The same AI that enables automated ad creation, conversational commerce, and AI-powered search also makes it trivially easy to create convincing fake advertisements at scale.
India’s regulations are a start. But regulation alone will not solve the problem. Platforms need to invest in detection systems that match the sophistication of the deepfake tools. Brands need to invest in monitoring and trust-building. And consumers need to be educated about what they are seeing.
The 90% exposure rate tells us something uncomfortable: deepfake advertising is not a fringe problem happening on the edges of the internet. It is the mainstream advertising experience for nearly every Indian internet user. And until the platforms that profit from ad spend take responsibility for ensuring those ads are real, the burden falls on brands to protect themselves — and their customers.