The half most agencies cannot see.
Where the compounding work happens. How content gets written, how AI assistants learn to cite you, and what we actually measure.
Is the content AI-written?
AI drafts, an editor finishes, you approve. The AI handles research and the first draft. A human subject-matter editor handles voice, accuracy, and the schema decisions that determine whether the page ranks. You see every page before it ships. Pure-AI content does not rank in 2026. Pure-human content does not scale to thirty pages a month. The program is built around the seam where the two produce more than either could alone.
Who decides what topics get written?
Each month starts with a keyword set, built from your category, your three named competitors, and the long-tail patterns that AI search platforms reward. You approve the set before any writing begins. We never publish on a topic you have not signed off on.
Will my brand actually get cited by ChatGPT?
By month three, on average. The work to get there is technical (schema, llms.txt, answer-engine markup), structural (the pages have to exist and rank), and entity-based (your brand has to be recognized as an entity by the models themselves). Programs that promise week-two results are measuring vanity numbers. The metrics that matter lag by a quarter, and that is true for every brand in every category.
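For readers curious what "answer-engine markup" means in practice, here is a minimal sketch in Python of the kind of schema.org FAQPage block that gets embedded in a page as JSON-LD. The question and answer strings are illustrative placeholders, not client copy, and the real markup is generated per page.

```python
import json

# A minimal sketch of the answer-engine markup layer: a schema.org
# FAQPage block serialized as JSON-LD. Placeholder Q&A text only.
faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Is the content AI-written?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "AI drafts, a human subject-matter editor finishes, "
                        "and the client approves every page before it ships.",
            },
        },
    ],
}

# Embedded in the page head so crawlers and answer engines can parse
# the Q&A pairs without scraping the rendered HTML.
print(f'<script type="application/ld+json">\n'
      f'{json.dumps(faq_page, indent=2)}\n</script>')
```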
Which AI platforms do you optimize for?
ChatGPT, Claude, Perplexity, Google Gemini, and Google AI Overviews. We measure citation rate on the same hundred-and-twenty-prompt set every month, across all five platforms, so progress is visible quarter over quarter against your three named competitors.
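The metric itself is simple arithmetic: citation rate is the share of the fixed prompt set whose answer cites the brand, computed per platform per month. A rough sketch, with hypothetical names throughout, since the methodology is described here but the tooling is not:

```python
from dataclasses import dataclass

# One record per (platform, prompt) run in a given month.
# Hypothetical structure for illustration only.
@dataclass
class PromptRun:
    platform: str       # e.g. "ChatGPT", "Perplexity"
    prompt: str
    brands_cited: set   # brands named or linked in the answer

def citation_rate(runs, brand, platform):
    """Share of a platform's prompts whose answer cites the brand."""
    on_platform = [r for r in runs if r.platform == platform]
    if not on_platform:
        return 0.0
    cited = sum(1 for r in on_platform if brand in r.brands_cited)
    return cited / len(on_platform)

# With the same 120 prompts rerun monthly, a brand cited in 18 of the
# 120 ChatGPT answers has a ChatGPT citation rate of 18 / 120 = 15%.
```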
How is this different from regular SEO?
Traditional SEO reads the list of blue links. We read the answer. Position one on Google still matters and we work for it — but forty percent of high-intent queries now resolve inside an AI answer before the SERP ever loads. The brands cited in those answers win the call. Most agencies cannot see this layer because their tools were built before generative search existed.