Black box ads: Google and Meta want your budget, not your input

Google AI Max and Meta Advantage+ strip advertiser control over targeting, creative, and bidding. What the black box advertising shift means for marketers in 2026.

Muskan Verma
·9 min read
Here is the pitch both Google and Meta are making to advertisers in 2026: give us a URL, a budget, and a business goal. We will handle everything else — the creative, the targeting, the bidding, the placements, the optimisation. Just trust the AI.

On paper, this sounds like efficiency. In practice, it is something else entirely. Both platforms are simultaneously the marketplace, the auctioneer, and the scorekeeper. They are asking brands to hand over the steering wheel while also keeping score on whether the trip went well.

That is not efficiency. That is a conflict of interest dressed as innovation. And I think it is worth taking a hard look at what advertisers are actually giving up.

What Google and Meta are actually building

Let me be specific about what is happening, because the scale of this shift is easy to understate.

Google AI Max, rolled out through 2025 and now aggressively pushed across Search campaigns in 2026, does three things that fundamentally change how Search advertising works. First, it expands ad reach beyond the keywords advertisers specified — using broad match and what Google calls “keywordless technology” to find queries the advertiser never chose to target. Second, it dynamically rewrites ad copy — headlines, descriptions, everything — based on what Google’s AI thinks will perform best. Third, it selects the landing page on your own website that it believes matches the user’s intent, potentially overriding the page you designed for that campaign.

Early data is promising: Google reports 14% more conversions at similar cost per acquisition. Campaigns using exact and phrase match keywords saw gains up to 27%.

But here is what those numbers conceal. Advertisers no longer know precisely which queries triggered their ads, which version of their copy was shown, or which page users landed on. The performance might be better. You just cannot explain why — or replicate it independently.

Meta Advantage+ is heading somewhere even more radical. By the end of 2026, Meta aims for a fully automated advertising system where an advertiser provides a product image or URL and a budget. That is it. Meta’s AI generates the creative, identifies the audience, allocates the budget across Facebook and Instagram, and optimises in real time. The “Andromeda” retrieval engine processes trillions of ad signals to deliver what Meta says are more personalised ads with better return on spend.

The problem, again, is not performance. It is visibility. In 2026, marketers using Advantage+ report significantly reduced transparency into how their campaigns actually work. Meta’s AI processes so many signals simultaneously that it is functionally impossible for a human to understand why a campaign succeeded or failed. You get a result. You do not get an explanation.

Why “trust the AI” is not good enough

I have been covering the intersection of AI and advertising for months now, and a pattern keeps emerging: the platforms frame automation as a gift to advertisers, but the incentive structure tells a different story.

Google and Meta are advertising businesses. Their revenue comes from ad spend. When their AI systems make targeting, bidding, and placement decisions on your behalf, those decisions are optimised against their models — models trained on data you cannot see, using logic you cannot audit, pursuing objectives that may not perfectly align with yours.

Consider a few specifics:

  • Budget allocation opacity. When Google’s Performance Max or Meta’s Advantage+ distributes your budget across channels (Search, Display, YouTube, Facebook, Instagram, Audience Network), you receive limited breakdowns of which channels consumed what share. Some advertisers have reported finding their spend directed to low-quality placements across Google’s Display Network or Meta’s open web audience networks, where bot traffic and brand safety concerns are well documented.
  • Creative drift. When the AI generates and tests ad creative on your behalf, it optimises for engagement and conversion metrics. It does not optimise for brand consistency, tone of voice, or messaging alignment. A dynamically generated ad that converts well but misrepresents your brand positioning is a win for the platform’s algorithm and a problem for your brand.
  • Measurement circularity. The platform that serves the ad also measures its performance. This is the structural conflict that the Google ad tech monopoly ruling attempted to address. When the same entity controls delivery and measurement, independent verification becomes difficult. You are trusting the scorekeeper to grade their own exam.

The number that should concern every advertiser: according to a consumer survey by Colling Media, 69% of consumers say they feel manipulated when brands use AI in advertising without explicit disclosure. The EU AI Act’s Article 50 transparency obligations, which come into force in August 2026, will require advertisers to clearly identify AI-generated content. But if you do not know which version of your ad was AI-generated — because the platform created it autonomously — how do you comply?

The India problem: Accountability without control

This tension is particularly acute for Indian advertisers, and I have not seen any global outlet connect these dots.

India’s digital advertising market is expected to reach ₹69,856 crore by 2026, according to the dentsu-e4m Digital Report. Programmatic advertising — the automated buying and selling of ad inventory — will account for 44% of that total, growing at a CAGR of 21.24%. Digital media will contribute 61% of India’s total ad spend.
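As a quick sanity check on what those projections imply, here is a small arithmetic sketch. The 2026 figures and CAGR come from the dentsu-e4m numbers quoted above; the back-calculated base-year value is purely illustrative.

```python
# Sanity-check the programmatic projection quoted above.
# Inputs: total digital ad market ~ Rs 69,856 crore by 2026,
# programmatic share 44%, programmatic CAGR 21.24%.

total_2026 = 69_856          # Rs crore, dentsu-e4m projection
programmatic_share = 0.44
cagr = 0.2124

programmatic_2026 = total_2026 * programmatic_share
print(f"Programmatic 2026: Rs {programmatic_2026:,.0f} crore")

# Back out an implied value three years earlier (hypothetical, for illustration):
implied_2023 = programmatic_2026 / (1 + cagr) ** 3
print(f"Implied 2023 base: Rs {implied_2023:,.0f} crore")
```

In other words, a 21% CAGR roughly means programmatic spend nearly doubling over three years, which is the scale of dependency at stake.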

In other words, Indian advertisers are deeply dependent on the very platforms that are now stripping their control.

Here is where it gets worse. India’s Digital Personal Data Protection Act (DPDP Act) makes data fiduciaries — which includes advertisers, not just platforms — legally liable for how personal data is collected, processed, and used. Fines reach up to ₹250 crore per violation. The Act requires explicit, informed consent for data collection. It prohibits behavioural targeting of children under 18 entirely. And it demands that businesses restrict data collection to only what is necessary for the stated purpose.

The contradiction is obvious. Google and Meta are building systems where advertisers cannot see what data is being used for targeting, cannot control what audiences the AI selects, and cannot audit whether the targeting complies with local privacy law. But under the DPDP Act, the advertiser — not the platform — bears the legal liability.

Indian agencies and brands are caught in a vice. The platforms demand blind trust. The regulators demand full accountability. These two demands are incompatible, and almost nobody in the Indian advertising industry is talking about it, even though we covered the broader first-party data compliance challenge earlier this month.

Five things advertisers should do before H2 2026

The platforms are not going to reverse course. Full automation is where the money is — for them. But that does not mean advertisers should accept the terms uncritically. Here is the practical framework.

1. Build platform-independent measurement now

Stop relying on Google and Meta to tell you whether Google and Meta campaigns worked. Invest in independent attribution solutions — marketing mix modelling (MMM), incrementality testing, and third-party analytics — that give you a view of campaign performance that the platforms do not control.

This is not optional. It is the single most important investment an advertiser can make in 2026.
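One concrete form of platform-independent measurement is incrementality testing: compare conversions in a group exposed to the campaign against a holdout group the campaign never reached. A minimal sketch of the lift calculation, with all figures hypothetical:

```python
# Minimal incrementality (lift) calculation: compare conversion rates
# in an exposed group vs a holdout group the campaign never reached.
# All numbers below are hypothetical illustrations.

def incremental_lift(exposed_conv, exposed_n, holdout_conv, holdout_n):
    """Return (incremental conversions, relative lift) attributable to exposure."""
    exposed_rate = exposed_conv / exposed_n
    holdout_rate = holdout_conv / holdout_n
    # Conversions the campaign plausibly caused, beyond the organic baseline:
    incremental = (exposed_rate - holdout_rate) * exposed_n
    lift = (exposed_rate - holdout_rate) / holdout_rate
    return incremental, lift

# Hypothetical test: 100k exposed users, 50k users in holdout.
inc, lift = incremental_lift(2_300, 100_000, 900, 50_000)
print(f"Incremental conversions: {inc:.0f}, relative lift: {lift:.1%}")
```

The point of the holdout is that the platform does not control it: the baseline conversion rate comes from users the platform never touched, so the lift figure cannot be inflated by the platform's own attribution model.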

2. Demand audit rights in your platform contracts

If a platform’s AI is making creative, targeting, and bidding decisions on your behalf, you should have the contractual right to audit those decisions. This includes access to search query reports, placement reports, audience composition data, and creative version histories.

If your current contracts do not include these provisions, renegotiate. If the platform refuses, that tells you something about how much they trust their own AI to perform under scrutiny.

3. Maintain a manual campaign baseline

Do not migrate 100% of your budget to fully automated campaigns. Keep a portion of your spend in manually controlled campaigns — specific keywords, defined audiences, approved creative — so you have a performance baseline against which to measure the AI’s claims.

If AI Max or Advantage+ genuinely outperforms your manual campaigns, the data will prove it. If the platform cannot beat a human-managed campaign under controlled conditions, you have your answer.
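Comparing the two arms fairly means checking whether the automated campaign's conversion-rate advantage is larger than noise. One rough way to do that is a two-proportion z-test; the numbers below are hypothetical, and the choice of test and threshold are my own assumptions, not platform guidance.

```python
# Two-proportion z-test: is the automated arm's conversion rate
# significantly better than the manual baseline's? Hypothetical data.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-score for the difference between two conversion rates (pooled SE)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

def one_sided_p(z):
    """P(Z > z) for a standard normal, via the error function."""
    return 0.5 * (1 - erf(z / sqrt(2)))

# Hypothetical: automated arm vs manual baseline, 20k impressions each.
z = two_proportion_z(conv_a=540, n_a=20_000, conv_b=480, n_b=20_000)
p = one_sided_p(z)
print(f"z = {z:.2f}, one-sided p = {p:.3f}")
# If p exceeds your significance threshold, the "uplift" may just be noise.
```

A controlled split like this is exactly what the manual baseline buys you: without it, the only evidence that the AI outperformed is the platform's own reporting.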

4. Prepare for DPDP and EU AI Act compliance

For Indian advertisers: commission a data protection impact assessment (DPIA) that specifically examines your use of automated advertising platforms. Document what data you share with each platform, what consent mechanisms are in place, and what audit trail exists for AI-driven targeting decisions.

For advertisers operating in the EU: the AI Act’s Article 50 transparency requirements take effect in August 2026. You need a process for identifying and disclosing AI-generated creative within your campaigns. If your platform partner generates the creative autonomously, ensure your contract specifies who bears the disclosure obligation.
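One lightweight way to maintain the audit trail that both the DPIA and Article 50 demand is a per-creative disclosure register: for every ad variant, record whether it was AI-generated and who owes the disclosure. A sketch follows; the field names and structure are my own assumptions, not a regulatory template.

```python
# Sketch of a per-creative disclosure register: record, for every ad
# variant, whether it was AI-generated and who owes the disclosure.
# Field names here are illustrative assumptions, not a regulatory schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class CreativeRecord:
    creative_id: str
    platform: str            # e.g. "google_ai_max", "meta_advantage_plus"
    ai_generated: bool       # was this variant machine-generated?
    generated_by: str        # "advertiser" or "platform"
    disclosure_required: bool
    disclosure_owner: str    # who the contract says must disclose

records = [
    CreativeRecord("hdl-001", "google_ai_max", True, "platform", True, "advertiser"),
    CreativeRecord("img-014", "meta_advantage_plus", False, "advertiser", False, "advertiser"),
]

# Flag every AI-generated variant still awaiting disclosure sign-off:
pending = [r.creative_id for r in records if r.ai_generated and r.disclosure_required]
print(json.dumps([asdict(r) for r in records], indent=2))
print("Pending disclosure:", pending)
```

Even a register this simple forces the key question into the open: if the platform generated the creative autonomously and you cannot enumerate the variants, you cannot fill in the rows, and that gap is itself the compliance finding.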

5. Diversify your platform mix

The duopoly is weakening. ChatGPT’s advertising platform is live with early data showing 1.5x conversion rates. Retail media networks are growing at 25%+ annually. Connected TV ad spend in India alone is projected to reach ₹8,000 crore in 2026.

Test aggressively. The advertisers who built diversified media mixes before Platform X became a problem (or before TikTok was banned in certain markets) were the ones who survived the disruption. The same logic applies here. Platforms that demand your trust while removing your controls are platforms that should receive a smaller share of your budget, not a larger one.

What this really means

The automation trend is not inherently bad. AI-driven advertising can genuinely reduce waste, improve targeting, and lower costs. The problem is not the technology. It is the governance.

When platforms control the creative, the targeting, the bidding, the placement, AND the measurement — while asking advertisers to simply “trust the AI” — the structure benefits the platform at the potential expense of the advertiser. This is especially dangerous in markets like India, where regulatory frameworks are placing accountability squarely on the advertiser while the platforms operate as black boxes.

The advertisers who thrive in the second half of 2026 will not be the ones who blindly adopted every automation feature. They will be the ones who adopted automation selectively, maintained independent measurement, demanded transparency, and kept enough manual control to know when the AI was genuinely performing — and when it was just spending their money.

Black box advertising is not a feature. It is a risk. Treat it accordingly.
