The complaint about AI design is always the same. The output looks generic. The logos all rhyme. The websites feel like Squarespace templates with a fresh coat of paint. The conclusion most people land on: AI cannot do design.

The complaint is fair. The conclusion is the wrong one to draw from it. What is actually happening sits one level down, in the order the work gets done and the foundations it gets done on top of.

What AI is actually good at

A four-by-two grid of eight card variations. Each card has different content (image hero, badge, bar chart, avatar, big stat, check icon, line chart, segmented control) but all share identical DNA: the same notched corner, the same electric blue accent button at the bottom, the same type hierarchy, and the same border weight.
Illustration. Growify Marketing.

Give a model a clear, narrow brief, a set of tokens, and a library of components, and it will produce work that is fast, consistent, and on brand. At Hive and Growify we use AI tooling every day for exactly this. New marketing pages. New product screens. New variations of an existing card pattern. In our experience the first pass usually lands somewhere in the 80 to 90 percent range, and finishing it is days of design work rather than weeks.

The reason it works is unglamorous. The system has done the strategic work in advance. Colour palette: decided. Type scale: decided. Spacing rhythm, corner treatment, photographic register, voice: all decided. Inside those rails, the model has very little room to be wrong. It is being asked to interpolate, not to invent.

The Growify site you are reading this on is a worked example. The brand identity, the colour palette, the typography choices, the notched-corner shape system, the logo lockups: those came from human work. A designer made a series of opinionated calls. The calls were then encoded into code. Tokens in a TypeScript file. A logo component that refuses to render the wrong aspect ratio. A typography component that only exposes the canonical sizes. Off-canon hex values literally do not compile.
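A minimal sketch of what that encoding can look like in TypeScript. The token names and hex values here are invented for illustration, not Growify's actual palette; the point is the mechanism: with `as const` and literal union types, any colour or size outside the system is a compile error rather than a review comment.

```typescript
// Hypothetical token file. Names and values are illustrative only.
export const colors = {
  accent: "#2563ff", // electric blue
  ink: "#111111",
  surface: "#ffffff",
} as const;

// The only hex strings that type-check are the ones in the palette.
export type BrandColor = (typeof colors)[keyof typeof colors];

export const typeScale = {
  display: "3rem",
  heading: "1.5rem",
  body: "1rem",
} as const;

// Only the canonical size names exist as a type.
export type TypeSize = keyof typeof typeScale;

// A component prop surface typed against the system: passing an
// off-canon hex literal or an invented size name does not compile.
interface ButtonProps {
  background: BrandColor; // "#ff0000" is rejected by the compiler
  size: TypeSize;         // only "display" | "heading" | "body"
}

export function buttonStyle({ background, size }: ButtonProps): string {
  return `background:${background};font-size:${typeScale[size]}`;
}
```

Usage follows the same constraint: `buttonStyle({ background: colors.accent, size: "body" })` compiles, while `buttonStyle({ background: "#ff0000", size: "huge" })` fails before it ever reaches a browser.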

Inside that frame, AI does most of the production. A new article page, a new service page, a new variation of the homepage hero. The model can produce a viable first pass in minutes. We review it, push back on three things, ship it. The result reads like the rest of the site, because the rules of the site are encoded in the system the model is working inside.

What AI is bad at

Six near-identical AI generated logo cards in a row. Each has the same smooth gradient circle as the mark, a slightly different abstract symbol inside (triangle, diamond, ring, plus, diamond, eye-shape), and the same generic wordmark plus tagline placeholder lines underneath. The visible sameness across all six is the point.
Illustration. Growify Marketing.

The strategic work that defined the system in the first place.

A logo carries a compressed argument about what the company is. The argument has to come from somewhere. From a real understanding of the business, the category, the audience, and the founder's specific point of view. Models do not have that. They have an averaged read of a very large slice of the brand work that has ever been uploaded to the internet, and they will produce a competent, broadly likely mark that sits near the middle of that set.

Look at any AI logo generator output from the last year. The pattern is hard to miss. A smooth gradient orb. A geometric sans pairing in some shade of teal or navy. A vaguely organic curl, leaf, or wave element. They look like each other because they were drawn from the same training distribution. The same drift shows up in AI-generated colour palettes (navy and warm orange, sage and cream, that one purple every fintech now uses) and AI-generated type pairings (Inter and Playfair, on repeat). None of these are wrong. All of them are average. Average is the actual problem, because average is what fails to do the work that a brand identity exists to do.

Same mechanism, different surface, for brand voice. Same for category positioning. Same for the kind of taste call where nine of the ten generated options are technically correct and a senior designer rejects all of them anyway, because the right answer is in a place the brief never went.

The model cannot make that move. The distribution explains part of it, but the bigger gap is context. In a one-off chat with a generic tool, the model has nothing to work from except a paragraph a stranger typed at it. It does not know what the founder actually thinks. It does not know what the category is tired of seeing. It does not know, by default, what is already trademarked by a competitor down the road. It does not know what the audience has stopped noticing. A designer carries all of that into the work. A model carries none of it unless someone deliberately hands it over, and most of the founders generating logos in a chat box are not doing that.

Why founders end up with slop

A four-stage pipeline. A tidy prompt card on the left, then an AI logo output, then an AI hero output rendered in a different visual style, ending in a chaotic composite card on the right (dashed border) where all three outputs are jammed together at clashing angles with inconsistent shapes, borders, and proportions.
Illustration. Growify Marketing.

The slop pipeline goes like this. Founder reads a thread about how AI now does design. Founder opens a chat window. Founder types a paragraph describing their business. The model produces six logo options. Founder picks the one that looks least bad. Founder asks the model to extend it into a brand kit. The model produces a colour palette and a font pairing that are, again, the statistically likely choice for that prompt. Founder uses these to brief a development agency or another AI tool to build the site. Output: a website that is technically functional and emotionally generic.

At every step in that chain, the AI did roughly what it was asked. The problem was that nobody made a real strategic decision. The brief was thin, so the output was thin. There was no design system encoding any opinions, so the model defaulted to the most common shape in its training data. The work failed at the root, and everything downstream inherited the failure.

The founder then says: AI cannot design. The actual lesson is: AI cannot decide. Those are different problems with different fixes.

What this means for the way work gets done

Two horizontal lanes of five output tiles, separated by a time axis. The top lane (designer plus AI) shows five tiles that all share the same DNA: same notch, same accent colour, same proportions. The bottom lane (founder plus AI) starts the same but degrades from tile to tile: rounded corners replace notches, gradient circles appear, borders become inconsistent, and the final two tiles are tilted and clearly broken.
Illustration. Growify Marketing.

A good designer with AI tooling can put a site on the internet in days that would have taken weeks five years ago. The speed gain is real and material. It is also not a different product: the brief gets executed faster, but the brief itself still has to be opinionated, and the opinions still have to come from a person.

A founder with AI tooling and no designer can also put a site on the internet in days. The site will look fine and feel familiar, in the way most generated work feels familiar. For some businesses, that is enough. The problem only shows up when the business needs the brand to do work. Distinguish in a crowded category. Signal something specific to a specific audience. Age well. Hold up against competitors who did spend on the strategic layer. The thin foundation cannot carry that load.

The two outcomes can look almost identical for the first few weeks after launch. Then they start to diverge, and they keep diverging.

The same shape shows up in AI-driven paid advertising too. Speed gain when an expert is in the loop. Noise when nobody is.

A practical take, for marketers

A literal wedge and lever metaphor. A solid electric blue triangular wedge (the design system) sits on a dashed ground line, filled with a dotted pattern at the top representing system tokens. A long thin neutral grey lever balances on the wedge apex. A small effort indicator hangs from the low left end of the lever, and a large bright design output card is lifted high on the right end.
Illustration. Growify Marketing.

If you run a marketing function and you are weighing where to spend, the line we draw is this. Pay for the strategic work that AI cannot do for you: brand, positioning, voice, the small number of decisions that everything else is downstream of. Once those are made and written down properly, the production work behind them (the pages, articles, ads, variants, refreshes) becomes the kind of task where AI is now genuinely useful.

The system is the wedge. AI is the lever. Skipping the wedge is what makes the lever look broken.