Six months ago I noticed something weird in my analytics. A handful of articles that used to pull in steady search traffic started flatlining, but they were not losing rankings. The rankings were the same. The clicks were not. People were searching, finding the answers they needed inside ChatGPT or Perplexity or Google’s AI Overviews, and never clicking through to my site.
Then a different thing started happening. I would see referrals from chatgpt.com and perplexity.ai. Small numbers, but real. The people who did click through were warmer than search traffic had ever been. They had already read the answer and come specifically to read more, or to use the tool the article mentioned. The conversion rate was double that of regular search traffic.
The pattern was clear. Search behavior was bifurcating. The “give me an answer” queries were going to AI. The “I want to learn from a real person who has done this” clicks were still happening, just from inside AI answers instead of from a Google results page. The job was no longer “rank in Google.” The job was “get cited by the model the user is actually asking.”
That work has a name now. People are calling it generative engine optimization, or GEO, and the playbook for it is genuinely different from classical SEO. This is what I have learned about it over the last six months of running the experiment on my own content.
Why GEO Is Not Just SEO With A New Hat
The reflexive take is that GEO is the same as SEO with a different output format. Write good content, structure it well, build authority, get cited. The classical SEO tactics keep working.
That is half right. Many of the foundations carry over. But the differences are real and they are large enough that treating GEO as “SEO 2.0” leaves a lot on the table.
The first difference is who picks the citations. In Google search, the algorithm decides which pages rank, and you can study the algorithm by watching results move. In a generative engine, the model picks which sources to cite based on what it retrieved, what it judged relevant, and how it summarized the topic. The retrieval step is more opaque than ranking and the summarization step actively rewrites your content. You can be cited because the model liked one paragraph and ignored the rest of your article. You can also lose the citation because the model decided your information was outdated, even when the article ranks #1 in Google.
The second difference is the unit of value. In SEO, the unit was the page that ranked. In GEO, the unit is the passage that gets cited. A 4,000-word article can be cited entirely on the strength of a single 80-word section that answers the precise question the user asked. The rest of the article almost does not matter for that citation, except as context for the model deciding whether to trust the passage.
The third difference is freshness. AI engines weigh recency more heavily than Google does for many topics. A two-year-old article that ranks well in Google can get passed over by an AI in favor of a six-month-old article with current numbers. This punishes evergreen content harder than SEO historically did, and rewards content with explicit dates, current data points, and dated claims.
The fourth difference is the answer surface. In Google, you compete for clicks against ten other results. In an AI answer, you compete for inclusion in a set of three to five citations, and only a fraction of users read past the synthesized answer. Users who do click through are filtered, motivated, and ready to engage. Conversion rates from GEO traffic are noticeably better than from classical SEO traffic. Traffic volume is lower. The math changes accordingly.
These differences shape what you actually do. The good news is that most of it is achievable by individual writers and small teams. The bad news is that most of the SEO advice from 2023 is now incomplete.
What The Models Actually Reward
I have spent enough time watching which of my articles get cited and which get ignored to have formed some opinions. Take these as field-tested heuristics rather than gospel.
Specific claims with explicit numbers and dates beat general advice. “Most APIs respond in under 200ms” is a claim a model will not cite because it cannot tell whether to trust it. “Stripe’s API responds in 87 to 145ms from us-east-1 in 2026 based on a week of measurements” is a claim a model can cite. The specificity is what makes it citable. The presence of a date makes it look fresh. The methodology line (“based on a week of measurements”) signals real first-hand work.
First-person experience with structured details is gold. “I built X, here is what happened, here are the numbers, here is what surprised me” content gets cited disproportionately because it is hard to fake and easy for models to recognize as primary source material. The same factual content written generically (without “I”, without timeline, without specific outcomes) gets cited far less.
Direct answers to the literal question being asked. AI users phrase queries as questions more often than Google users do. “How do I deploy a Next.js 16 app to Vercel” is the literal query. An article that has a section titled “How to deploy a Next.js 16 app to Vercel” with the steps right after will outperform an article called “Deployment Strategies for Modern Frontend Frameworks” that buries the same information.
Lists, tables, and numbered steps get pulled into answers. When an AI is summarizing for a user, structured content is easier to extract cleanly than dense prose. Articles with explicit “the five things you need to know are:” or “here is the comparison” get cited more often than articles that present the same information in flowing paragraphs.
Recency markers matter more than they used to. Putting “April 2026” in the article body, mentioning the current versions of tools, referring to recent events (“after Vercel announced X this month”) all signal currency. The signal is partly for ranking, partly for the model’s own judgment of whether the article is current enough to cite.
Source credibility cues that are machine-readable. Author bios with credentials, links to source code, embedded benchmarks with reproducible methodology: all of these give the model evidence that the content is grounded. A model is more likely to cite content where it can verify the author has standing to make the claim.
Concise definitions in the first paragraph after each subheading. Models tend to grab the first sentence or two after a heading as the answer to the question that heading represents. If you bury the answer three paragraphs in, you are not optimizing for the citation pattern.
What gets ignored by AI is roughly the inverse: vague generalizations, unsourced claims, dense walls of text without structure, content that could have been written any year, and articles that prioritize keyword stuffing over readability. These were already weak SEO patterns. AI just punishes them harder.
Restructuring Existing Content For GEO
If you have a blog with existing content, you do not need to rewrite everything. You need to do targeted restructuring on the articles that should be performing better.
Pick the articles where the topic has clear AI search demand. Run the queries you would expect to drive traffic to those articles directly in ChatGPT, Perplexity, and Google’s AI Overviews. See whether your article is cited. If not, look at what is cited and figure out what the cited articles do differently.
In most cases the difference is not the depth of expertise. It is the structure. The cited article has a clear “here is the answer” passage that the model could extract cleanly. Your article might have the same answer but spread across the piece instead of consolidated.
The remediation is usually:
Add a TLDR or summary at the top. A two-to-four-sentence summary right under the title and intro, with the most important factual claims and an explicit current date. This is often the passage that gets cited in answer engines, even when the full article has more depth. There is a sketch of the resulting structure after this list.
Make every H2 a question or direct claim. “Why X matters” is fine. “What is the cost of using X in 2026” is better. The headings should map to queries users type into AI search.
Lead each section with the answer. The first sentence after each heading should state the answer to that section’s question. Supporting evidence and nuance follow. This inverts the academic essay structure (build to the conclusion) in favor of the journalism inverted pyramid (conclusion first).
Insert specific data points. Replace “many”, “most”, “often” with actual numbers wherever you have evidence to back them up. If you have not done the measurement, do it once and put the number in.
Add a publication date and a “last updated” marker. Both should be visible in the body of the article, not just the metadata. Models read the visible content more reliably than they read structured data.
Cross-link to related work, including your own. Internal links and citations to other primary sources help models build a graph of trust around your content. They also help when a user clicks through and looks for more depth.
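To make the shape concrete, here is a minimal sketch of a restructured article opening in plain HTML. The headline is borrowed from the example earlier in this piece; every date and number is invented for illustration. What matters is the ordering: dates visible in the body, a short TLDR carrying the key claims, question-shaped headings, and the answer in the first sentence of each section.

```html
<article>
  <h1>How to Deploy a Next.js 16 App to Vercel</h1>
  <!-- Dates in the visible body, not just in metadata -->
  <p>Published April 2026 · Last updated April 2026</p>

  <!-- TLDR: the passage most likely to be extracted into an answer -->
  <p><strong>TLDR:</strong> Deploying takes about five minutes: connect the
  repo, accept the detected build settings, and push to main. As of April
  2026 the free tier covers hobby projects. The numbers below are from my
  own deploys.</p>

  <h2>How long does a Vercel deploy take in 2026?</h2>
  <!-- Answer first, evidence and nuance after -->
  <p>About 90 seconds for a typical blog-sized app, in my measurements.
  The rest of this section covers what changes that number.</p>
</article>
```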
This is a pass that takes maybe an hour per article. On the articles where it actually fits, it has been the most cost-effective writing work I have done all year, with the impact on AI citations showing up over the following six to twelve weeks.
What Your Site’s Technical Setup Should Do
A surprising amount of GEO is technical SEO that turns out to matter again.
Your content should be reachable by crawlers. AI engines use a mix of their own crawlers (OAI-SearchBot, PerplexityBot, ClaudeBot) and content indexed from Google. If you are blocking AI crawlers in robots.txt, you are also blocking AI citations. The conversation about whether to allow AI crawlers is its own thing, but if your goal is GEO traffic, the answer is “let them in.”
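If you have been selectively blocking bots, the fix is explicit allow rules for the crawlers named above. A minimal robots.txt sketch; check each engine’s documentation for the current user-agent strings, since they do change:

```txt
# Allow the AI search crawlers named above to index the site
User-agent: OAI-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

# Everything else keeps whatever default you already have
User-agent: *
Allow: /
```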
Your structured data should be clean. Article schema with author, datePublished, dateModified, and a clear headline gives the model unambiguous metadata. Most static site generators handle this for you. Most CMSes do not, by default.
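For reference, the minimal Article schema looks like this as JSON-LD in the page head. The field names are standard schema.org; the values are placeholders to swap for your own:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How to Deploy a Next.js 16 App to Vercel",
  "author": {
    "@type": "Person",
    "name": "Your Name",
    "url": "https://example.com/about"
  },
  "datePublished": "2026-04-02",
  "dateModified": "2026-04-18"
}
</script>
```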
Your page should render the content on first load. Heavy JavaScript-rendered content where the actual article body is hydrated client-side is less reliably indexed by AI crawlers. Static rendering or server-side rendering wins. If you are using a framework like Astro, Next.js with proper static generation, or any plain static site, you are fine. If your blog is a SPA that fetches the article from an API, you have a problem.
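As a sketch of what “content in the initial HTML” means in practice, here is the shape of a statically generated post page in the Next.js App Router. The getAllSlugs and getPost helpers are hypothetical stand-ins for however you load content; the point is that the article body is rendered at build time, so crawlers never have to execute JavaScript to see it.

```tsx
// app/blog/[slug]/page.tsx — a sketch, assuming App Router conventions.
// getAllSlugs() and getPost() are hypothetical content helpers.
import { getAllSlugs, getPost } from "@/lib/posts";

// Pre-render every post to static HTML at build time.
export async function generateStaticParams() {
  const slugs = await getAllSlugs();
  return slugs.map((slug: string) => ({ slug }));
}

export default async function PostPage({
  params,
}: {
  params: Promise<{ slug: string }>;
}) {
  const { slug } = await params;
  const post = await getPost(slug);
  // The article body ships in the initial HTML response.
  return (
    <article>
      <h1>{post.title}</h1>
      <div dangerouslySetInnerHTML={{ __html: post.html }} />
    </article>
  );
}
```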
Your URLs should be stable and meaningful. AI engines cite URLs and the URL becomes part of the answer that users see. Slug churn (changing URLs after publication) hurts both classical SEO and GEO citations. Keep URLs once you publish.
Your sitemap should be clean and current. AI crawlers respect sitemaps for crawl prioritization. An out-of-date sitemap means new articles take longer to be eligible for citation.
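The sitemap protocol itself is tiny; the part that matters for freshness is keeping lastmod honest. A minimal entry, with a placeholder URL and date:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/blog/deploy-nextjs-16-vercel</loc>
    <!-- Update lastmod when you actually revise the article -->
    <lastmod>2026-04-18</lastmod>
  </url>
</urlset>
```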
Your speed should be reasonable. The same Core Web Vitals story that mattered for Google still matters here, partly because the underlying crawlers are still rendering pages, partly because slow sites are sometimes deprioritized.
None of this is novel. The interesting part is that the technical SEO foundation that some teams have neglected for years is now load-bearing again, because AI engines are pulling from it the same way Google does, just with different downstream behavior.
Measuring GEO, Which Is Annoying
Classical SEO has Search Console. GEO has, well, much less.
A few things you can measure directly:
Referrals from AI engines. chatgpt.com, perplexity.ai, gemini.google.com, claude.ai (when users click through), and any AI Overviews landings show up in your analytics if you look for them. They are usually filed under “direct” or “other” because the referrer headers are inconsistent. A custom segment that buckets these together gives you a baseline; a sketch of that bucketing appears after this list.
Direct citations in answers. Manually run your target queries in the major engines once a month and check whether your articles are cited. This is tedious and not perfectly representative because answers vary by user, but it gives you a directional read.
Brand search lift. As your articles get cited, more people search for your brand or domain by name. Brand search volume in Search Console (or any SEO tool) is a proxy for AI exposure that you cannot otherwise measure.
Quality of conversions from “direct” traffic. GEO traffic looks like direct traffic in many analytics setups because users land on the article from a copy-paste of the URL or from the answer engine without a referrer. Watching the conversion rate of “direct” traffic to deep-funnel events (signup, purchase) catches GEO impact even when the source is opaque.
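Here is the kind of bucketing I mean for the referral segment, as a small TypeScript sketch you could drop into analytics middleware or a log-processing script. The hostname list covers the engines mentioned above; extend it with whatever actually shows up in your logs.

```ts
// Hostnames of answer engines whose click-throughs go into one bucket.
const AI_REFERRER_HOSTS = [
  "chatgpt.com",
  "chat.openai.com",
  "perplexity.ai",
  "gemini.google.com",
  "claude.ai",
];

// Returns true when a referrer URL points at a known AI engine.
// Note: many AI click-throughs arrive with no referrer at all, so this
// undercounts. Treat it as a floor, not a census.
export function isAiReferral(referrer: string | null): boolean {
  if (!referrer) return false;
  try {
    const host = new URL(referrer).hostname;
    return AI_REFERRER_HOSTS.some(
      (h) => host === h || host.endsWith("." + h)
    );
  } catch {
    return false; // malformed or relative referrer header
  }
}

// Example: isAiReferral("https://chatgpt.com/") -> true
```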
A few tools have started building proper GEO analytics with crawler-style monitoring and citation tracking. They are still early but worth watching. For most indie hackers, the manual monthly check plus referral tracking is enough to know whether your content strategy is working.
The harder measurement question is attribution. If a user reads about your tool inside a ChatGPT answer, then later searches your brand and signs up, was that a GEO conversion? Probably yes, but your last-click attribution is going to call it a brand search conversion. The right move is to track total volume of cited articles and watch downstream brand search and direct traffic together, rather than expecting clean attribution per citation.
What Indie Hackers Should Actually Do
Most of what I have written about SEO for indie hackers still applies. Write specific, primary-source content about the work you are actually doing. Be clear, be honest, be useful. The basics still drive results.
For the GEO layer specifically, the work that has paid off for me, in order of impact:
Restructure your top ten articles for citation extractability. Add summaries, lead with answers, insert specific numbers, mark recency. This is the highest-leverage work you can do because it improves articles that already have authority and traffic.
Publish work that is hard to fake. First-person experiments, actual benchmarks, real revenue numbers, screenshots of real interfaces. AI engines lean toward primary sources, and primary sources are something a small operator can produce more easily than a content farm can fake.
Get cited by other primary sources. Real backlinks from people who actually built things, who run real businesses, who write from experience. The graph of citations between primary sources is the substrate AI engines extract from. Being part of that graph is more valuable than being part of the SEO link economy.
Use dates and current versions everywhere. “In 2026,” “as of April,” “version 16.2” all signal currency. They also help future-you when you re-read the article and need to know if it is stale.
Stop chasing keyword volume as the primary metric. A query with 2,000 searches per month that gets answered entirely by an AI is worth less to you than a 200-per-month query where the AI actually cites you. Optimize for the queries where the citation pattern works in your favor.
Build a brand people search by name. This is the slow long-term work. Newsletter, social presence, real product, consistent voice. The brand search lift is the moat AI cannot fully eat, because users typing your name into Google or ChatGPT are looking for you specifically. Investing in building a personal brand on X and through your own developer newsletter compounds in a way that a rented audience never does.
Let the AI engines crawl you. Until the conversation about content licensing and crawler ethics shakes out further, blocking AI crawlers means giving up GEO traffic. For most indie hackers, the traffic is worth more than the principled stance.
What I Am Watching For Next
A few open questions about where this goes.
The major engines are increasingly adding shopping, agentic actions, and direct task completion inside their answers. As that grows, the user might not need to click through at all even on commercial queries. That changes the math for product-led content and pushes more value into the brand-as-moat strategy.
Personalization in AI search is creeping in. The same query from different users may already be returning different cited sources. That makes “rank tracking” a less coherent activity than it used to be and pushes toward broad, durable content strategies rather than per-query optimization.
The legal and economic frameworks for AI training and citation are still in flux. Whatever shakes out is likely to change the calculus for whether to allow crawlers and how citations are credited. Worth watching, not worth waiting on.
For now, the playbook is straightforward enough. Write specific, recent, primary-source content. Structure it for citation extraction. Make it easy to crawl. Put your name on it. Watch the referral traffic and the brand search lift. Iterate.
The traffic patterns from search are not going back to where they were in 2022. The tradeoff is that the traffic that does come through is better. Smaller numbers, warmer users, higher conversion. For indie hackers and small teams, that is a fair trade. The work to capture it is mostly within reach. The biggest mistake is to keep writing the way SEO required ten years ago and assume the citations will follow. They will not. The new game is being played whether you optimize for it or not.