# AEO Playbook: How to Optimize for AI w/ Profound’s Josh Blyskal

Tom Brewer

These notes are based on the YouTube video by Content and Conversation.


Key Takeaways

  • AI‑native search (AEO) is moving from “watch‑and‑wait” to “take‑action.” Marketers now need concrete tactics to win visibility on ChatGPT, Perplexity, Google AI Overviews, etc.
  • Profound’s Conversation Explorer provides one of the largest query‑volume datasets for answer‑engine searches. It aggregates tens of millions of queries per month from ChatGPT, Perplexity, Bing, and soon Google AI.
  • Generative queries dominate AI search. Roughly 50 % of all answer‑engine queries are purely generative (no clear commercial intent), while traditional informational, navigational, and transactional intents shrink to a small slice.
  • Click‑through rates and conversion quality differ wildly by platform. Perplexity yields 6‑10× higher CTR than ChatGPT, and conversion rates of 20‑30 % have been anecdotally reported for high‑intent traffic.
  • Content format matters more than ever. Listicles and comparative tables account for ~33 % of citations, while standard blogs are under 10 %. Structured, data‑rich content is favoured by LLMs.
  • Technical SEO signals still matter, but they need AI‑specific twists:
    • Year‑stamp URLs (e.g., …/best‑laptops‑2025) can boost citations by 20 %+ in ChatGPT.
    • Author schema has shown the strongest uplift among schema types.
    • llms.txt / llms-full.txt files dramatically improve answer‑engine pickup (5‑10× on some platforms).
  • Freshness decays fast: most AI‑search citations drop to baseline after 1‑2 months; evergreen, problem‑oriented landing pages stay relevant longer.
  • Profound’s roadmap: content‑brief generation and in‑platform content creation to turn massive query data into actionable, optimized copy.
  • Pricing: a self‑serve tier is now available at $500 / month, making the platform more accessible to mid‑market brands.

Core Concepts

AEO vs. SEO

| Aspect | SEO | AEO (AI‑native search) |
| --- | --- | --- |
| Primary engine | Google/Bing classic SERP | LLM‑driven answer engines (ChatGPT, Perplexity, Google AI) |
| Intent distribution | Dominated by informational / transactional | ~50 % generative, “no‑intent” queries |
| Optimization muscles | Meta tags, backlinks, site speed | Same muscles (titles, schema, URLs) plus AI‑specific tricks (year‑stamp, semantic chunking) |
| Adoption curve | Near‑saturation | Rapidly rising, still low‑to‑mid adoption |

Takeaway: AEO is not a brand‑new discipline; it’s SEO repurposed for a different query distribution and retrieval model.

🔗 See Also: What Is AEO? How to Get Your Brand Found in AI Search

Generative vs. Traditional Intent

  • Generative queries – users ask the model to create something (e.g., “create a spreadsheet of employee hours”).
  • Traditional intent – classic informational, navigational, or transactional queries.
  • In AI search, generative queries outnumber traditional intents, forcing brands to think about “who will the model cite?” rather than just “who will click the link?”.

Retrieval‑Augmented Generation (RAG)

  • Current LLMs (e.g., ChatGPT) pull documents from an external index (Bing for OpenAI, Google’s index for Gemini).
  • If your page isn’t in Bing, it won’t surface in ChatGPT.
  • RAG pipelines grew out of academic research and are not SEO‑aware, so brands must shape their content to be retrieval‑friendly (a toy sketch of the loop follows below).
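
To make the retrieval step concrete, here is a toy sketch of a RAG loop in Python: a tiny in‑memory index stands in for Bing/Google, retrieval is naive keyword overlap, and the final LLM call is left as a stub. Everything here (the Page class, retrieve, build_prompt) is illustrative, not Profound's or any vendor's actual pipeline.

```python
# Minimal RAG sketch: retrieve candidate pages from a search index,
# then hand the top results to a language model as grounding context.
# The toy index, scoring, and prompt assembly are illustrative only.
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    title: str
    text: str

INDEX = [  # stand-in for Bing/Google's index; if a page isn't indexed, it can't be cited
    Page("https://example.com/best-laptops-2025", "Best Laptops 2025",
         "Comparison of 7 laptops across price, battery, and weight..."),
    Page("https://example.com/blog/laptop-musings", "Thoughts on Laptops",
         "A long opinion piece about laptops in general..."),
]

def retrieve(query: str, k: int = 3) -> list[Page]:
    """Naive keyword-overlap retrieval; real engines use BM25 or embeddings."""
    terms = set(query.lower().split())
    scored = [(len(terms & set((p.title + " " + p.text).lower().split())), p) for p in INDEX]
    return [p for score, p in sorted(scored, key=lambda s: -s[0]) if score > 0][:k]

def build_prompt(query: str, pages: list[Page]) -> str:
    """Assemble a grounding prompt that asks the model to cite its sources."""
    context = "\n".join(f"[{i+1}] {p.title} ({p.url}): {p.text}" for i, p in enumerate(pages))
    return f"Answer the question using the sources below and cite them.\n{context}\n\nQ: {query}"

if __name__ == "__main__":
    q = "best laptops for travel in 2025"
    print(build_prompt(q, retrieve(q)))  # a real system would now send this prompt to the LLM
```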

Tactical Recommendations

1. URL & Title Optimization

  • Add a future year token (2025) to URLs, title tags, and meta descriptions for a ~20 % citation lift in ChatGPT.
    /best-corporate-credit-cards-2025
  • Prefer URL placement over title‑tag placement for the biggest impact (a blue‑ocean opportunity); a small helper sketch follows below.
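
As an illustration, a publishing workflow could stamp the target year into slugs and titles with a small helper like the one below. The year_stamp function is a hypothetical sketch; the ~20 % lift is Profound's observation about year‑stamped pages, not something this code guarantees.

```python
# Sketch: add a year token to a slug and title at publish/refresh time.
# Purely illustrative; replaces any stale year already present.
import re

def year_stamp(slug: str, title: str, year: int) -> tuple[str, str]:
    """Return (slug, title) with the given year appended, dropping any old year."""
    slug = re.sub(r"-\d{4}$", "", slug)               # strip an existing trailing year
    title = re.sub(r"\b(19|20)\d{2}\b", "", title).strip()
    return f"{slug}-{year}", f"{title} ({year})"

print(year_stamp("best-corporate-credit-cards-2024", "Best Corporate Credit Cards 2024", 2025))
# -> ('best-corporate-credit-cards-2025', 'Best Corporate Credit Cards (2025)')
```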

2. Cross‑Platform Publishing

  • Republish key posts on LinkedIn Pulse (or other authoritative platforms) to create multiple citations for the same content, increasing LLM confidence.

3. Semantic Chunking & Structured Answers

  • Write self‑contained, data‑driven paragraphs that answer a single long‑tail query.
  • Use HTML tables for comparative data (e.g., “Best laptops – 7 dimensions”).
  • Goal: give the answer engine a “ready‑made snippet” to pull (see the sketch below).
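
One way to operationalize this is to treat each H2 as a self‑contained question‑and‑answer chunk and to render comparative data as plain HTML tables. The sketch below is a simplified illustration; chunk_by_h2 and html_table are hypothetical helpers, not part of any CMS or Profound feature.

```python
# Sketch: split an article into self-contained "semantic chunks" (one per H2)
# and render comparative data as an HTML table an answer engine can lift verbatim.
import re

ARTICLE = """\
## What is the lightest laptop in 2025?
The Acme Air weighs 0.9 kg and ships with a 14-inch display...

## Which laptop has the longest battery life?
The Orbit Pro lasts 22 hours in our streaming test...
"""

def chunk_by_h2(markdown: str) -> list[dict]:
    """Return [{'question': ..., 'answer': ...}] so each chunk stands alone."""
    chunks = []
    for part in re.split(r"^## ", markdown, flags=re.MULTILINE):
        if not part.strip():
            continue
        question, _, answer = part.partition("\n")
        chunks.append({"question": question.strip(), "answer": answer.strip()})
    return chunks

def html_table(rows: list[dict]) -> str:
    """Render a simple comparison table; headers come from the first row's keys."""
    headers = list(rows[0])
    head = "".join(f"<th>{h}</th>" for h in headers)
    body = "".join("<tr>" + "".join(f"<td>{r[h]}</td>" for h in headers) + "</tr>" for r in rows)
    return f"<table><thead><tr>{head}</tr></thead><tbody>{body}</tbody></table>"

print(chunk_by_h2(ARTICLE)[0]["question"])
print(html_table([{"Model": "Acme Air", "Weight": "0.9 kg", "Battery": "14 h"},
                  {"Model": "Orbit Pro", "Weight": "1.3 kg", "Battery": "22 h"}]))
```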

4. Schema.org Enhancements

  • Author schema shows the strongest uplift for answer‑engine pickup (observed ~2‑3 % increase); a JSON‑LD sketch follows below.
  • Standard schema (FAQ, Breadcrumb, Product) still valuable but less impactful than author markup in early tests.
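
For reference, author markup is typically emitted as JSON‑LD. The snippet below is a minimal sketch that wraps an Article with a Person author; the field values are placeholders, while the @type names come straight from schema.org.

```python
# Sketch: emit Person author markup as JSON-LD for an article page.
# Field values are placeholders; the schema.org types (Article, Person) are real.
import json

def author_jsonld(article_title: str, author_name: str, author_url: str) -> str:
    data = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": article_title,
        "author": {
            "@type": "Person",
            "name": author_name,
            "url": author_url,
        },
    }
    return f'<script type="application/ld+json">{json.dumps(data, indent=2)}</script>'

print(author_jsonld("Best Corporate Credit Cards 2025", "Jane Doe",
                    "https://example.com/authors/jane-doe"))
```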

5. llms.txt / llms-full.txt Files

  • llms.txt – a simple markdown list of your key pages, placed at the site root; reference it in robots.txt as well (even if redundant). A generator sketch follows below.
  • llms-full.txt – a full documentation dump for sites with extensive docs; yields 5‑10× more pickups on platforms like Cursor and Windsurf.
  • Example robots.txt entry:
    User-agent: *
    Allow: /llms.txt
    Allow: /llms-full.txt
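
A minimal sketch of generating an llms.txt file is shown below. It follows the general shape described by the llms.txt proposal (an H1, a short summary, then a markdown link list); the page titles, URLs, and descriptions are placeholders.

```python
# Sketch: generate a minimal llms.txt (markdown) from a list of key pages.
# The site name, summary, and page entries below are placeholders.
PAGES = [
    ("Best Corporate Credit Cards 2025", "https://example.com/best-corporate-credit-cards-2025",
     "Comparison of 12 cards across fees, rewards, and limits."),
    ("How to Automate Expense Reports", "https://example.com/automate-expense-reports",
     "Step-by-step guide with templates."),
]

def build_llms_txt(site_name: str, summary: str, pages) -> str:
    """Return llms.txt content: H1 title, blockquote summary, then a link list."""
    lines = [f"# {site_name}", "", f"> {summary}", "", "## Key pages", ""]
    lines += [f"- [{title}]({url}): {desc}" for title, url, desc in pages]
    return "\n".join(lines) + "\n"

with open("llms.txt", "w", encoding="utf-8") as fh:
    fh.write(build_llms_txt("Example Co",
                            "Guides and comparisons for corporate finance teams.", PAGES))
```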

6. Freshness Management

  • Expect a 1‑2 month decay in citation volume after publication.
  • Schedule regular updates (e.g., quarterly refreshes) for high‑competition topics (CRMs, credit cards).
  • Evergreen CLPs (commercial landing pages) – problem‑oriented pages that map each use case to a dedicated URL – stay visible longer; a simple refresh‑flagging sketch follows below.
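
As a simple way to put this on a schedule, the sketch below flags pages whose last update falls outside a refresh window. The 60‑day default mirrors the 1‑2 month decay noted above; the page list and dates are hypothetical.

```python
# Sketch: flag pages whose last update falls outside a refresh window,
# so high-competition topics get revisited before citations decay.
from datetime import date, timedelta

PAGES = {
    "/best-crms-2025": date(2025, 3, 1),
    "/expense-report-automation": date(2025, 6, 10),
}

def pages_due_for_refresh(last_updated: dict[str, date], window_days: int = 60,
                          today: date = date(2025, 6, 30)) -> list[str]:
    """Return URLs not updated within the last `window_days` days."""
    cutoff = today - timedelta(days=window_days)
    return [url for url, updated in last_updated.items() if updated < cutoff]

print(pages_due_for_refresh(PAGES))  # -> ['/best-crms-2025']
```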

7. Content Type Prioritization

  • Listicles & comparison tables → dominate citations (≈33 %).
  • Blogs & opinion pieces → low citation share (<10 %).
  • Actionable micro‑apps (e.g., “generate a PDF invoice”) can win high‑intent clicks when paired with a well‑structured landing page.

8. Platform‑Specific Insights

  • Perplexity: higher CTR, lower traffic volume → high‑quality leads.
  • ChatGPT: massive scrape volume, lower CTR but still valuable for brand awareness.
  • Google AI Overviews (future): anticipate similar patterns; early adoption of the tactics above will pay off.

💡 Related: How to Build a Page AI Will Love (AEO Checklist)


Industry Insights & Outlook

  • Funding & Market Momentum: Profound closed a Series A led by Kleiner Perkins, with participation from NVIDIA’s NVentures and Khosla Ventures (June 2025). No additional round has been announced as of February 2026.
  • LLM Evolution: Expect a split between lightweight language models (front‑end) and heavy knowledge bases (back‑end). Local LLMs on devices may become common, with search handled by a separate “knowledge engine.”
  • Economic Model: Major players (Google, OpenAI) are pushing toward token‑free search to lock in traffic, mirroring Amazon’s loss‑leader strategy.
  • Advertising in AI Search: Early experiments (e.g., Perplexity’s sponsored follow‑up queries) hint at a future where ad slots exist inside answer‑engine responses, potentially boosting CTRs dramatically.
  • Content Decay & Competition: In fast‑moving categories (headphones, CRMs) the “top‑citation” share can shift weekly; in niche topics the decay is slower.

Profound Product Roadmap (as of 2025‑06‑30)

| Feature | Status | Value Proposition |
| --- | --- | --- |
| Conversation Explorer | Live | Query‑volume insights across ChatGPT, Perplexity, Bing, soon Google AI |
| Actionable Content Brief Generator | In beta | Turns raw query data into ready‑to‑publish briefs directly in the platform |
| AI‑Driven Content Creation | Upcoming | Generates optimized copy (including tables, schema markup) without leaving Profound |
| Enhanced Reporting Dashboard | Planned | Visualizes AI‑search traffic, CTR, conversion, and freshness decay curves |
| Self‑Serve Tier | Launched ($500 / mo) | Low‑barrier entry for SMBs and mid‑market brands |

Summary

AI‑native search is reshaping how brands get discovered. While the underlying muscles—titles, URLs, schema—remain familiar, the distribution of intent and the retrieval mechanisms of LLMs demand new tactics:

  1. Leverage year‑stamp tokens in URLs and titles to capture generative query bias.
  2. Structure content for “semantic chunks” that provide concise, data‑rich answers ready for citation.
  3. Prioritize listicles, comparison tables, and micro‑apps that align with the dominant citation formats.
  4. Deploy llms.txt / llms-full.txt to make your key content easy for answer engines to ingest.
  5. Monitor freshness decay and keep evergreen, problem‑oriented landing pages up‑to‑date.

Profound’s Conversation Explorer and upcoming content‑generation tools give marketers a data‑backed roadmap to optimize for AEO without abandoning traditional SEO foundations. By adopting the tactics above, brands can secure high‑quality AI‑search traffic, improve conversion rates, and future‑proof their digital presence as the search landscape continues to evolve.


Thanks for reading my notes! Feel free to check out my other notes or contact me via the social links in the footer.

# Frequently Asked Questions

What is AI‑native search (AEO) and how does it differ from traditional SEO?

AEO focuses on getting your content cited by large‑language‑model answer engines such as ChatGPT, Perplexity, and Google AI Overviews, whereas SEO targets classic SERP rankings on Google or Bing. The query mix in AEO is roughly 50 % generative (no clear commercial intent), so the goal shifts from attracting clicks to becoming a trusted source that the model can quote. The same SEO fundamentals—titles, meta tags, backlinks—still matter, but they need AI‑specific tweaks like year‑stamp URLs and structured data.

How should I format URLs and titles to boost citations in AI‑driven search results?

Include a future‑year token (e.g., 2025) in both the URL slug and the title tag; studies from Profound show a 20 %+ lift in ChatGPT citations when the year is present. For example, use /best‑laptops‑2025 instead of just /best‑laptops, and mirror that phrasing in the <title> and <meta name="description">. This signals freshness to the LLM’s retrieval layer, which favors up‑to‑date content.

What technical schema and file‑based signals improve AI‑search pickup?

Author schema has proven to be the strongest schema lift for AI citations, so add complete Person or Organization markup to every article. Additionally, create an llms.txt (or llms-full.txt) file at the root of your site that lists high‑value URLs; platforms that read these files report a 5‑10× increase in answer‑engine visibility. Keep both files up‑to‑date and reference them in your robots.txt.

Why are listicles and comparative tables more effective for AI citations than standard blog posts?

LLMs prioritize structured, data‑rich content because it is easier to extract concise answers. Profound’s data shows that listicles and tables account for about one‑third of all citations, while traditional narrative blogs fall below 10 %. Designing pages with clear headings, bullet points, and side‑by‑side comparisons gives the model ready‑to‑use snippets, boosting both citation frequency and click‑through rates.

Is it true that AI‑search traffic loses relevance quickly, and how can I keep my content evergreen?

Yes—most AI citations decay to baseline after 1‑2 months because the models favor the newest indexed documents. To extend lifespan, build evergreen, problem‑oriented landing pages that address timeless questions, and periodically refresh them with updated data, new year‑stamp URLs, and refreshed schema. Pairing evergreen pages with a steady stream of fresh, year‑specific articles creates a balanced content ecosystem that sustains AI visibility.
