Most pieces about AI in advertising open by setting up two opposing camps. The framing is tidy. It is also wrong in ways that matter for any account spending real money. The camps do exist. One of them expects the technology to retire experienced PPC managers within a planning cycle or two. The other has spent the last two years refusing to use the technology at all, citing brand-safety concerns at the more thoughtful end of the argument and habit at the less thoughtful end.

Both positions miss what the work actually looks like now.

Watch what the platforms have shipped in the last twelve months. Watch which AI-led campaigns ended up in the trade press for the wrong reasons. A reasonably consistent pattern shows up across both. Accounts run by experienced PPC managers who use AI for research and analysis have done better than accounts run on full automation. They have also done better than accounts whose managers refused to use AI at all. Same humans, in other words, but with different tooling habits.

Platforms optimise toward the objective you have declared, and they do it efficiently. They are blind to the business context behind that objective. The system can tell you that conversion rates dropped 20% last month and rank that drop against expected variance. It cannot tell you whether the drop matters because you ran a price promotion, half the sales team was at a conference, or a competitor cut prices the week before. The judgement call lives outside the data the platform sees.

When AI Advertising Goes Wrong: Why McDonald's and Coca-Cola's Failures Matter

Two examples from 2023 keep getting pulled into the conversation, partly because they happened to two of the most recognisable brands in the world. McDonald's Netherlands ran an AI-generated Christmas campaign1 as part of its holiday push. Criticism started before the campaign had been live for a day. The visuals read as uncanny. The messaging read as tone-deaf. The brand pulled the video within the week, which is unusual for a flagship spot.

Coca-Cola's AI-made holiday spot drew the same kind of fire a few weeks later. The word that travelled was 'soulless'. Consumers used it. Creative professionals used it. fortune.com2 later reported a different problem with the same campaign: the spot had attributed a non-existent book to J.G. Ballard. That detail, more than the 'soulless' complaints, is what carried the story around the industry.

Neither incident was a niche creative misstep. Both brands had to pivot messaging and distribution in the middle of their most expensive media period of the year, which is the part that gets under-rated when the story is retold. The systems that produced the content had been steered toward output volume and engagement metrics. The brand-value and cultural-nuance checks a human team would normally apply did not get their turn.

What this exposes is the failure mode of 'set and forget' automation at the point where context matters most. Models execute well inside the parameters they have been given. They are not equipped to substitute for human judgement about whether something is commercially safe or on-brand, and the gap shows up most visibly in flagship campaigns where the cost of being wrong is highest.

Both incidents point to the same operating model. Automation moves execution faster. Humans hold responsibility for alignment, brand safety, and escalation, and that responsibility is not delegable. The same principle applies cleanly to PPC campaign management, which is where the rest of this piece goes.

Beyond Smart Bidding: The Complete AI PPC Management Toolkit

Coverage of AI in PPC tends to stop at platform automation: Smart Bidding, Performance Max, the things Google and Microsoft ship as defaults. That is one slice of the picture and probably not the most interesting one. The larger and less talked-about layer is decision-support AI, the tools managers use to diagnose problems faster and tie campaign performance back to commercial outcomes the platform itself cannot see.

AI for Performance Analysis and Anomaly Detection

GA4 has its own anomaly detection, and it catches performance shifts that manual analysis tends to miss for a depressingly long time. Pair it with BigQuery ML's AI.DETECT_ANOMALIES function3 and you can surface outliers by metric or by segment before a budget has drifted for a fortnight. The setup work is unglamorous. The payoff is that the dashboard tells you about a problem before the weekly review does.

Time-series libraries sit underneath all of this. LinkedIn's Greykite is the one most accounts reach for, partly because it handles seasonality patterns and changepoints at the scale of a real ad account without much fiddling. One habit that pays off in practice: before you move budget across channels in response to a dip, check the dip against the tracking change log. Half of what looks like a performance fall is a tag misfiring.
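To make the shape of that check concrete, here is a minimal sketch in plain Python of the rolling-baseline test that BigQuery ML or Greykite perform with far more sophistication (proper seasonality handling, changepoints, and so on). The window and threshold values are illustrative, not recommendations.

```python
from statistics import mean, stdev

def flag_anomalies(series, window=14, threshold=3.0):
    """Flag points more than `threshold` standard deviations from the
    trailing window's mean. A deliberately simple stand-in for what
    BigQuery ML or Greykite would do with seasonality awareness."""
    flags = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(series[i] - mu) > threshold * sigma:
            flags.append(i)
    return flags

# Daily conversions: stable around 100, then a sudden drop of the kind
# a misfiring tag produces.
daily = [100, 98, 103, 101, 99, 102, 100, 97, 104, 100, 101, 99, 103, 100, 42]
flags = flag_anomalies(daily)  # the drop on the final day is flagged
```

The useful property is that the alert fires on the day of the drop, not at the weekly review, which is exactly when the tracking change log is still fresh enough to check.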

Search Term Intelligence and Waste Reduction

Clustering search term data is one place AI does work humans would have to grind through manually for hours, and where the time savings actually compound. Google's Search Terms Insights surface themes across Search and Performance Max campaigns at the level of what users were actually asking for, rather than the keyword you bought against. Account-level negative keyword lists then give you a way to enforce decisions across the portfolio without re-editing every campaign one at a time.

Thousands of search queries that would take a manager an afternoon to read through can be processed in a few minutes once the clustering pipeline is in place. The output is a list of proposed negatives and a separate list of new intent themes that probably deserve their own ad group. Neither list is automatically right. Both are a much better starting point than a blank screen.
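A toy version of that triage step, with a hard-coded waste-token list standing in for what a real pipeline would learn from conversion data, and a crude last-token grouping standing in for proper intent clustering:

```python
from collections import defaultdict

# Illustrative low-intent tokens; a real pipeline would derive these
# from conversion history rather than hard-code them.
WASTE_TOKENS = {"free", "jobs", "salary", "diy", "course"}

def triage_search_terms(terms):
    """Split raw search terms into proposed negatives and candidate
    intent themes. A toy stand-in for the clustering step."""
    negatives, themes = [], defaultdict(list)
    for term in terms:
        tokens = term.lower().split()
        if WASTE_TOKENS & set(tokens):
            negatives.append(term)
        else:
            themes[tokens[-1]].append(term)  # crude: group on the last token
    return negatives, dict(themes)

terms = [
    "ppc management software",
    "free ppc software",
    "ppc manager jobs",
    "enterprise ppc software",
    "ppc audit checklist",
]
negs, groups = triage_search_terms(terms)
```

The output mirrors the two lists described above: `negs` is the proposed-negatives queue, `groups` the candidate themes. Neither is applied automatically; both go to a manager for review.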

The judgement layer matters here. Some of the suggested negatives will throttle valuable traffic if you accept them at face value. Some are easy money saved on spend that was never going to convert. Pulling the two apart is what makes this work, and it is the bit that does not automate.

Creative Optimisation and Business Outcome Connection

The AI applications that pay back the most are the ones that close the loop between campaign metrics and business outcomes the finance team would recognise. Enhanced Conversions and offline conversion imports do precisely that. They let you feed qualified-lead and revenue data back into Google Ads so the bidding algorithm learns from what actually generated money downstream, rather than from form completions that often have only a weak link to revenue.
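A sketch of the export half of that loop, assuming a CRM that captures the gclid at lead creation. The column names follow Google's published offline-conversion import template, but check the current help pages before uploading; the conversion name and currency here are placeholders.

```python
import csv
import io

def build_offline_import(crm_rows):
    """Turn CRM-closed deals into an offline-conversion CSV for upload.
    Only rows with a captured gclid and real revenue are worth sending
    back to the bidding algorithm."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Google Click ID", "Conversion Name",
                     "Conversion Time", "Conversion Value",
                     "Conversion Currency"])
    for row in crm_rows:
        if row.get("gclid") and row.get("revenue", 0) > 0:
            writer.writerow([row["gclid"], "qualified_lead",
                             row["closed_at"], row["revenue"], "GBP"])
    return buf.getvalue()

crm = [
    {"gclid": "Cj0abc", "closed_at": "2026-05-01 10:30:00", "revenue": 4200},
    {"gclid": "",       "closed_at": "2026-05-02 09:00:00", "revenue": 900},  # no gclid
    {"gclid": "Cj0def", "closed_at": "2026-05-03 14:15:00", "revenue": 0},    # no revenue
]
csv_text = build_offline_import(crm)
```

The filtering is the point: feeding back every closed deal regardless of attribution quality teaches the optimiser nothing, so only the rows the platform can match and value survive.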

Creative scoring is similar in shape. AI can rank ad variants by marginal ROAS and tell you which asset combinations are pulling the most downstream value. What it cannot tell you is which variant represents the brand the way the founder wants it represented, or which promise the sales team can actually deliver on without overrunning the implementation calendar. The distance between platform metrics and commercial reality is where campaign profit lives or dies.

What Happens When AI Systems Make Mistakes in Ad Targeting?

Targeting failures with AI in the loop tend to look a few different ways. Broad match paired with audience expansion is one common shape: spend bleeds into irrelevant or borderline-sensitive queries, and the placement report only shows the volume, not the brand cost. Performance Max can run too hot on display and video placements that don't convert anywhere downstream. Lookalike models will sometimes pull low-LTV cohorts into the account when the seed audience hasn't been vetted carefully enough.

Catching this early is a habits problem more than a tools problem. Review placement reports more often than feels strictly necessary. Watch conversion-rate deltas at the audience and geography level. The campaign-level dashboard, which is what most teams default to, hides most of the relevant signal. Miss the early warning and the cost varies. At the quiet end it is budget waste that lasts a few weeks. At the louder end it is a brand-reputation hit on a campaign the founders cared about.
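The segment-level check can be as simple as comparing each audience or geography's conversion rate against the account baseline and flagging large relative drops. The thresholds below are illustrative, not recommended values.

```python
def segment_alerts(segments, min_clicks=200, rel_drop=0.5):
    """Flag segments whose conversion rate sits well below the
    account-wide rate. segments: dict of name -> (clicks, conversions)."""
    total_clicks = sum(c for c, _ in segments.values())
    total_convs = sum(v for _, v in segments.values())
    baseline = total_convs / total_clicks
    alerts = []
    for name, (clicks, convs) in segments.items():
        if clicks < min_clicks:
            continue  # too little data to judge
        rate = convs / clicks
        if rate < baseline * rel_drop:
            alerts.append((name, round(rate, 4), round(baseline, 4)))
    return alerts

data = {
    "uk_search":         (5000, 250),  # 5.0% - healthy
    "display_expansion": (3000, 15),   # 0.5% - the bleed PMax can hide
    "lookalike_2pct":    (150, 2),     # too small to call yet
}
alerts = segment_alerts(data)
```

Note that this is precisely the signal the campaign-level dashboard averages away: the blended account rate still looks respectable while one expansion quietly burns budget.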

Remediation, once you know what to look for, is mostly mechanical. Step one is account-level negative keywords and audience exclusions. That takes the obvious fires out. Step two is brand-safety controls and the manual disabling of automatic expansions, which together handle most of the remaining blast radius. After that comes verification, which is the part that matters most and the part that gets skipped most often. Run incremental-lift tests. Revise the bids and targets that need revising. Watch what happens to actual delivery on the orders or leads that result, because the platform-reported numbers on their own miss most of what matters. The boring part of all of this is the regular audit cadence for AI-driven campaigns, and it is the part that prevents a small targeting error from compounding for a quarter before anyone notices.

Why PPC Managers Become More Valuable as AI Gets Smarter

Illustration: a pyramid of AI module cards feeding a middle row of aggregation panels, with a single human strategy card at the apex. Growify Marketing.

The version of the future where AI quietly retires the PPC manager runs into one practical problem early. Algorithmic optimisation taken on its own terms does not replace a manager's job. It generates more work that only a manager can do.

Models optimise toward the objective you have declared, and they do it efficiently. Businesses operate on context the platform does not see and cannot infer.

Ask Google Ads for maximum conversions and it will give them to you. They will not necessarily be the right customers, and they will not necessarily come in at a sustainable cost. Google's own guidance on value-based bidding4 makes the same point in slightly nicer language. Feed richer signals into the optimiser, and set outcomes that map back to commercial reality, or the system will chase the wrong goal at speed.

What a skilled manager actually does in this setup is translate fuzzy business requirements into measurable objectives that account for margin and delivery capacity. The next part of the job is watching how the algorithm responds to those objectives in practice, then adjusting when the model's chosen direction starts drifting away from what the business needs.

Regulation is raising the bar on top of all of this. The EU AI Act's transparency requirements become applicable from 2 August 2026, with disclosure obligations for AI-generated content and synthetic-performer deepfakes built in. New York is moving on a similar timeline. cooley.com5 has a useful summary of the state's Synthetic Performer disclosure law, which takes effect 9 June 2026 and applies to a broad set of advertisements produced with generative AI. The judgement calls about what to disclose and where to disclose it are the kind of thing a platform optimiser cannot help with.

Google AI Overviews have meaningfully changed how organic traffic flows on the queries they appear against. sistrix.com6 carried analysis showing organic click-through rates falling from roughly 15% to 8% on affected queries. That is a redistribution of traffic rather than a disappearance of it, but a redistribution that bites. Working through the implications on the paid side is more of a strategic planning exercise than an algorithmic adjustment, because the right move depends on the class of query and on the budget allocated to it, and the platform does not have a view on either.

As more of the routine execution moves to automation, the human role shifts up the stack. It becomes about interpreting model outputs and making trade-off decisions the model cannot evaluate. ppcsurvey.com7 captured this shift in the State of PPC survey, which has practitioners leaning more heavily on AI for routine tasks while saying their own work has become harder, mostly because platform transparency has gone the other way.

Building AI-Assisted PPC Workflows That Actually Work

Illustration: a horizontal workflow in which AI proposal cards feed a central human decision card, which gates a single approved action, with a feedback loop running back from the execution result to the proposals. Growify Marketing.

The line that matters most when you sit down to design a workflow is the line between AI-assisted decision making and AI-automated decision making. The two get spoken about as if they were the same thing. They are not.

Full automation makes sense where the objective is unambiguous and the downside of being wrong is small. Human oversight is the design choice everywhere else, and particularly anywhere brand or legal context shapes what the right answer actually is.

In practice that looks like running recommendation engines and automated bidding models in the background while a human keeps the approval step on the higher-stakes decisions. Audience expansion goes through a human. Brand query coverage goes through a human. Anything that touches the names a customer types in when they have already decided to buy goes through a human. The platforms have started to take the visibility-and-control complaint seriously, which is what about.ads.microsoft.com8 signalled in its May 2026 post on Performance Max.

Move to the forecasting and reporting side of this and the working pattern looks similar. Pull the ad data into a warehouse first. Use proven open-source modelling libraries on top of it. BigQuery ML carries the anomaly detection layer, which removes the worst of the friction by putting the model where the data already lives. The interpretable forecasting layer sits on Greykite. The confidence intervals it produces are the kind a finance team can argue with, rather than the black-box point estimates most automated forecasts spit out. The last thing to do before anyone treats the outputs as ground truth is to check them against the promotional plan and against the team's actual delivery capacity.

The shape of the recipe is straightforward enough that it fits in one paragraph. Pull daily spend and conversions into BigQuery on a job that runs without anyone watching it. Fit a Greykite forecast on the resulting series. Generate the next eight weeks of budget scenarios with confidence intervals attached, and overlay the promotional calendar and any known capacity constraints on top of them. From there, pick a target CPA or ROAS band the business can defend, implement weekly pacing guardrails against that band, and retrain the model on a monthly cadence so drift does not quietly accumulate. The first time you go through the loop it takes a week or so. After that, it is largely a maintenance job.
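The weekly pacing guardrail at the end of that recipe reduces to a small decision function. The band values below are placeholders for whatever the margin model actually supports, not suggested targets.

```python
def pacing_check(weekly_spend, weekly_conversions, cpa_band=(40.0, 60.0)):
    """Check last week's CPA against the agreed band and return an action.
    The band comes from the margin model the finance team signed off on,
    not from the platform."""
    if weekly_conversions == 0:
        return "halt_and_investigate"
    cpa = weekly_spend / weekly_conversions
    low, high = cpa_band
    if cpa > high:
        return "tighten_targets"  # paying too much per conversion
    if cpa < low:
        return "scale_budget"     # headroom: cheaper than the band allows
    return "hold"
```

Run against last week's numbers, the function either holds, scales, tightens, or escalates; what it never does is silently let the account drift outside the band the business agreed to.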

The collaboration structure that works in practice is one where AI proposes and humans decide. Use the model layer to triage anomalies, cluster search terms, score creative assets against downstream value, and simulate budget scenarios under different demand assumptions. The output of that work is a queue of recommendations for the manager, not a queue of changes applied to the account. From there the manager reviews, aligns with sales and product on anything that touches commercial commitments, and only then implements. The final step most teams skip is importing offline conversions, which is what closes the loop and lets the system learn from outcomes the platform itself cannot see.
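One way to encode the 'AI proposes, humans decide' split is a routing step that sends anything high-stakes to a review queue instead of applying it. The proposal categories and the stakes rule here are illustrative; in practice the flag would be set upstream by rules such as "anything touching brand terms or audience expansion is high stakes".

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    kind: str          # e.g. "bid_change", "new_negative", "creative_swap"
    detail: str
    high_stakes: bool  # brand queries, audience expansion, etc.

def route(proposals):
    """Split model output into auto-applicable changes and a human
    review queue. Nothing high-stakes is ever applied directly."""
    auto, review = [], []
    for p in proposals:
        (review if p.high_stakes else auto).append(p)
    return auto, review

queue = [
    Proposal("new_negative", "add 'free' as account negative", False),
    Proposal("audience_expansion", "enable similar segments", True),
    Proposal("bid_change", "raise tCPA 5% on generic search", False),
    Proposal("brand_coverage", "extend broad match to brand terms", True),
]
auto, review = route(queue)
```

The design choice is that the default path is review, and only explicitly low-stakes change types earn auto-apply, which keeps the failure mode on the cheap side.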

Consumer scepticism is the other constraint. gartner.com9 reported that 81% of consumers now actively try to avoid advertising. The number is high. It is also unsurprising the moment anyone in the industry looks at their own viewing habits. A separate piece in digiday.com10 has documented an AI backlash that is already moving brands to tone down AI-led messaging. The implication for an AI-assisted PPC stack is reasonably clear. Position the technology as a decision-support layer that sits underneath human control, set disclosure standards on anything that ends up customer-facing, and treat brand safety as a first-order design constraint instead of an audit that runs after a campaign has gone live.

The Commercial Case Against 'Set and Forget' Automation

Adoption statistics on their own miss the part of the story that determines whether AI use in an account actually performs.

Survey data carried in prnewswire.com11 earlier this year showed that 72% of marketers plan to expand AI usage in the next 12 months. The same survey showed that only 45% feel adequately prepared to implement what they are planning. The gap is one most people in the industry would recognise from their own teams. When automation outpaces the governance around it, the failure modes look familiar. Spend escapes through a placement type nobody knew was active. A regulatory disclosure gets missed. A forecast keeps reporting the green numbers from before the model started drifting, and nobody catches it for a while.

The State of PPC survey reads the same way from a different angle. Generative AI usage has risen sharply across most tasks the survey covers. The same respondents also report that their work has become more difficult, mostly because they have less direct control over what the platform is doing and less visibility into why. The pattern matters because it suggests successful AI implementation is mostly a process and oversight problem dressed up as a tooling one, and process problems do not get solved by buying better tools.

Brand safety adds the commercial urgency on top of all of this. The customers most likely to convert tend also to be the ones quickest to disengage when a brand mis-steps on transparency, which is why the trade-off lives at the centre of so many board-level conversations now. Decisions in this territory do not have a clean algorithmic answer. They need human judgement on brand risk and on whatever the relevant regulatory regime is.

The same digiday.com10 piece referenced earlier describes marketers actively pulling back on the AI-led parts of their messaging in response to consumer pushback. The shift is a useful data point. Governance and clarity have been treated for years as soft, optional inputs to long-term brand value. On this evidence, they show up in the conversion numbers further down the funnel.

The role of AI in PPC, on the public evidence, is to make managers faster and better informed. It is not to make them redundant. The strongest results come from accounts where the model handles scale and pattern recognition, and humans set the objectives and validate the data quality the model is working from. That division of labour is what keeps profit margins intact. Without it, the account generates a lot of cheap clicks that do not convert to revenue, and a dashboard that reports the wrong story confidently.

Smart automation accelerates a strategy that already works.

It rarely fixes weak fundamentals, and on the public evidence so far it has not replaced strategic thinking in any account that is performing well.

Getting that distinction right is the part of the job that will separate PPC managers running their accounts well over the next few years from the ones whose dashboards quietly stop matching the business.

Sources

  1. https://www.thedrum.com/news/the-most-terrible-ad-of-the-year-mcdonald-s-pulls-ai-ad
  2. https://fortune.com/2025/05/14/ai-coca-cola-ad-campaign-invented-fake-book
  3. https://cloud.google.com/bigquery/docs/reference/standard-sql/bigqueryml-syntax-ai-detect-anomalies
  4. https://support.google.com/google-ads/answer/15099424
  5. https://www.cooley.com/news/insight/2026/2026-01-29-new-york-enacts-synthetic-performer-disclosure-law-for-advertisements-including-generative-ai
  6. https://www.sistrix.com/blog/pew-research-ai-overviews-halve-click-through-rates/
  7. https://www.ppcsurvey.com/the-state-of-ppc-2025
  8. https://about.ads.microsoft.com/en/blog/post/may-2026/providing-more-transparency-for-your-performance-max-campaigns
  9. https://www.gartner.com/en/newsroom/press-releases/2026-04-13-gartner-marketing-survey-finds-eighty-one-percent-of-consumers-tune-out-ads
  10. https://digiday.com/marketing/with-ai-backlash-building-marketers-reconsider-their-approach
  11. https://www.prnewswire.com/news-releases/only-6-of-marketers-have-fully-implemented-ai-according-to-new-supermetrics-report-302695477.html