Field Note

Five mistakes business owners make trying to rank inside ChatGPT

Most of the AI-search work I see commissioned by small and mid-sized businesses is well-intentioned, mechanically reasonable, and unlikely to move the metric. Here are the five most common reasons why, and what to do instead.

Luke LaFave · Founder, LaFave Consulting
5 min read

Most of the AI-search work I see commissioned by small and mid-sized businesses is well-intentioned, mechanically reasonable, and unlikely to move the metric. The reason isn’t usually a single bad decision. It’s a pattern of small misallocations that compound into a program that runs for nine months and produces almost no citation lift.

Here are the five I see most often, in order of how much they cost. The order matters, but it isn’t a menu: fixing the first one without the others doesn’t help.

One: hiring a traditional SEO and calling it AI search

The most common mistake. A business hires an “AI SEO” specialist who turns out to be a traditional SEO with a new business card. The work that gets done is keyword research, on-page optimization, backlink building — all of which are still useful, none of which are the work of getting recommended inside ChatGPT.

The tell is in the deliverable. If the monthly report shows rankings, impressions, and click-through rates, but not citation rate against named competitors across five AI platforms, the program isn’t measuring AI search. It’s measuring SEO. The work follows the measurement. If you don’t measure citation rate, you don’t get citation rate.

The fix is straightforward. Either insist that the audit measures the AI surface, or accept that what you’ve hired is SEO and stop calling it something else.
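
For concreteness, here is a minimal sketch of what the citation-rate number in that report actually is, assuming the prompt answers have already been collected from each platform. The platform names, brand names, prompts, and storage format are all illustrative:

```python
# Citation rate = share of audit prompts whose answer mentions a brand
# by name. Assumes answers were already collected by the audit harness;
# every entry below is an illustrative placeholder.

RESPONSES = {
    # (platform, prompt) -> answer text collected by the audit harness
    ("chatgpt", "who are the leading firms in this category"):
        "Competitor A and Competitor B are often recommended ...",
    ("perplexity", "who are the leading firms in this category"):
        "Your Brand and Competitor A are well regarded ...",
}

BRANDS = ["Your Brand", "Competitor A", "Competitor B"]

def citation_rate(brand: str, responses: dict) -> float:
    """Fraction of collected answers that mention the brand by name."""
    if not responses:
        return 0.0
    hits = sum(brand.lower() in answer.lower() for answer in responses.values())
    return hits / len(responses)

for brand in BRANDS:
    print(f"{brand}: {citation_rate(brand, RESPONSES):.0%}")
```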

Two: starting with content before fixing structure

A business commissions a thirty-articles-a-month content program before its existing website has a schema layer, a coherent URL structure, or fast load times. Three months in, the content has been published, none of it ranks, and the team concludes “AI search doesn’t work in our category.”

The diagnosis is usually that the content was being published into a structural sinkhole. Pages were getting indexed by Google but not surfaced. AI crawlers were fetching them but couldn’t extract clean answers because the underlying HTML was unstructured and the schema was absent.

Content has to be published onto a foundation. The foundation is a fast website with valid schema on every page, llms.txt, semantic HTML, and a content architecture that lets new pages slot into a clear topic graph. If those things aren’t in place, every article you publish goes into a leaky bucket.
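
As a concrete example of what “valid schema on every page” means, here is a sketch that renders a JSON-LD block for a service business page. The type and every field value are placeholders to adapt, and the output should be checked with a schema validator before it ships site-wide:

```python
import json

# Sketch of a JSON-LD block for a service business page. The @type and
# every field value are placeholders; validate the rendered output with
# a schema testing tool before deploying it on every page.
schema = {
    "@context": "https://schema.org",
    "@type": "ProfessionalService",
    "name": "Your Business",
    "url": "https://example.com",
    "areaServed": "Your Region",
    "sameAs": [
        # Entity surfaces (see mistake four): links that tie the page to
        # the same entity elsewhere. These URLs are placeholders.
        "https://www.wikidata.org/wiki/Q0000000",
        "https://www.crunchbase.com/organization/your-business",
    ],
}

print(f'<script type="application/ld+json">{json.dumps(schema, indent=2)}</script>')
```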

The fix is to spend the first four to six weeks on foundation work — site rebuild or hardening, schema authoring, technical fixes — before starting the publishing cadence. Those four to six weeks feel like a delay; they are the difference between a content program that works and one that doesn’t.

Three: chasing brand-name prompts instead of category prompts

A common request from owners: “I want my brand to come up when someone asks ChatGPT about my brand”. This is reasonable, but it’s also nearly automatic — ChatGPT mentions almost any business by name when asked about that specific business. The bar to clear is much lower than the bar that matters.

The bar that matters is category prompts. Not “tell me about [Your Brand]”, but “who are the leading [your category] firms in [your region]”. The buyer who already knows your brand isn’t the buyer you need to win. The buyer who is asking the category question without knowing any brand is the buyer who decides the next two years of revenue.

If the audit prompt set is weighted heavily toward brand-name prompts, the program will look successful and produce no growth. The prompt set has to be heavily weighted toward category and comparison prompts, where the brand is competing for citation share against named competitors who are doing their own work.
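
One way to keep that weighting honest is to make it explicit in the audit configuration, with a guardrail that fails when the set drifts. A sketch, with illustrative wording and weights:

```python
# Sketch of an audit prompt set with explicit weights: most of the budget
# goes to category and comparison prompts, a token share to brand-name
# prompts. The wording, categories, and weights are illustrative.
PROMPT_SET = [
    ("tell me about {brand}",                            "brand",      0.10),
    ("who are the leading {category} firms in {region}", "category",   0.50),
    ("best {category} providers for a small business",   "category",   0.25),
    ("{brand} vs {competitor} for {category}",           "comparison", 0.15),
]

# Guardrail: fail loudly if the set drifts back toward brand-name prompts.
category_share = sum(w for _, kind, w in PROMPT_SET if kind != "brand")
assert category_share >= 0.8, "prompt set over-weighted toward brand-name prompts"
```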

Four: treating AI search as a marketing-only function

Cross-functional friction. AI search optimization touches website infrastructure (engineering), content production (marketing), entity surfaces (PR / partnerships), and reporting (analytics). When the program is owned exclusively by marketing — and walled off from engineering and PR — the work that requires those other functions doesn’t get done.

Specifically: schema and structural fixes require engineering. Entity work — getting the business onto Wikidata, building a Crunchbase profile, getting cited in trade press — requires PR or partnership work. Without those functions cooperating, the program has only two of the four levers available, and the two it has are the slower-compounding ones.

The fix is organizational. The owner needs to designate someone who can coordinate across functions and has authority to ask for engineering time. In small businesses this is usually the founder. In larger ones it’s a VP of Marketing with sign-off authority. Either way, the program needs an actual owner, not a steering committee.

Five: stopping at month six because the metric hasn’t moved enough

The metric for the first three months is essentially flat. Month four shows the first movement. Month six shows a real shift on category prompts. Month nine, the program is producing inquiries that the business can attribute to AI-cited sources. Month twelve, the citation footprint is starting to compound and the business is a default answer in two or three category prompts.

This timeline is consistent across the brands I’ve worked with. It’s also long enough that many programs get killed at month six, when the spend has felt material but the lift hasn’t yet crossed a clearly visible threshold. The owner looks at the dashboard, sees the citation rate moved from four percent to eleven percent, and decides “this isn’t working” — when in fact eleven percent at month six is a strong indicator that twenty-five percent at month twelve is on track.

The fix is to commit to an eighteen-month minimum program, with month twelve as the first evaluation milestone, not month six. Programs that don’t survive to month twelve produce nothing, regardless of how well executed they were in months one through six. The compounding kicks in late, not early, and the early kills are the most expensive mistakes in this whole category of work.

What to do instead

The shortest version: measure citation rate against named competitors; fix structure before chasing volume; target category prompts, not brand-name prompts; coordinate across functions; and commit for at least four quarters before evaluating.

None of this is exotic. The whole program is just doing the right work in the right order, on a long enough timeline that the compounding becomes visible. The brands that do this are the brands that own the AI surface in their category by 2028. The brands that don’t, won’t.


Luke LaFave is the founder of LaFave Consulting. The studio works with four brands a month on the exact program described above, with the discipline to keep the work running through the compounding curve.

Tagged
  • Anti-Patterns
  • Strategy
Engage

If this piece resonated, the work is the next step.

The studio works with four brands per month. The discovery call is twenty minutes, includes a live audit of your current AI-search footprint, and you leave with a written plan whether you sign or not.

Become the answer
Call or text Luke at (920) 505-0775. Replies in minutes.