How AI-Generated Content Performs in Search: A Data-Driven Analysis

AI-generated content has rapidly moved from a novelty to a core part of modern content strategies. Companies now use artificial intelligence to produce blog posts, landing pages, documentation, and entire content hubs at scale. The promise is compelling: faster production, lower costs, and broader topic coverage than ever before.
Yet one fundamental question continues to divide marketers and SEO professionals: does AI-generated content actually perform in Google Search over the long term, or is it simply a short-term growth hack that fades once algorithms catch up?
The data now exists to answer this properly. Multiple independent studies, covering hundreds of thousands of pages across real SERPs, have produced findings that are both surprising and instructive.
What the Numbers Actually Show
AI Content Can Rank — But Rarely Dominates
The most comprehensive study to date comes from Ahrefs, which analyzed 600,000 web pages to understand Google's relationship with AI content. Their finding was striking: 86.5% of content appearing in Google's top 20 results contains some amount of AI-generated text. Only 13.5% is entirely human-written.
This sounds like a green light for AI content — until you look closer. Purely AI-generated content (with no human editing) makes up only 4.6% of top-ranking pages. The vast majority of high-performing content is hybrid: AI-assisted, but refined by human expertise.
Semrush reached similar conclusions through their own study of 20,000 URLs. They found that 57% of AI articles and 58% of human articles appeared in Google's top 10. The gap is negligible — but the critical qualifier is that the AI content in their sample was edited and reviewed by subject matter experts before publication. When Semrush surveyed over 700 marketers, 73% reported using a combination of AI and human writing rather than publishing raw AI output.
The Pure-AI Cliff
Reboot Online ran a controlled SEO experiment specifically designed to isolate the performance of pure AI content versus human-written content, eliminating external ranking factors like backlinks and domain age. Their conclusion: with all else being equal, human-written content consistently outranked AI-generated content.
A separate 2025 analysis by Writesonic, covering 500 AI-assisted articles across multiple niches, found that hybrid pages — those combining AI drafts with human editing — ranked 34% higher on average than unedited AI content. Bounce rates were also lower, signaling better user engagement and stronger behavioral signals.
Rankability studied 487 Google search results using an AI content detector and found that 83% of top-ranking results were primarily human-written rather than AI-generated. They also documented a direct case of deindexing: a page built entirely from unedited AI content for the keyword "SEO training Houston" was removed from Google's index following a spam update. After it was replaced with human-written content, the page was reindexed and returned to the top 10.
The Graphite Research on Web-Wide Trends
A 2025 study by SEO firm Graphite, analyzing 65,000 URLs published between 2020 and 2025, provided context on the broader web. AI-generated articles briefly outnumbered human-written articles in November 2024 — but search performance told a different story. Graphite found that 86% of articles ranking in Google Search were written by humans, and 14% were AI-generated. The same pattern held in ChatGPT and Perplexity: 82% of cited articles were human-written, and 18% were AI-generated. When AI-generated articles did appear in results, they ranked lower on average than human-written articles.
The Early Visibility Illusion
One of the most consistent patterns in AI content performance is what might be called the early visibility illusion. New pages — even on relatively fresh domains — can be indexed and begin appearing for a range of queries within weeks of publication. Impressions increase, keyword coverage expands, and some pages temporarily reach the first page of results.
This initial traction often feels like validation. It is not.
When new content is published, search engines do not immediately assign a permanent ranking. Instead, they enter a testing phase, exposing the content to users across different query contexts and measuring behavioral signals: click-through rate, time on page, scroll depth, return-to-search rate. This evaluation period typically lasts several weeks to a few months.
Pages that fail to generate positive engagement signals begin declining after this window closes. AI content that is structurally correct but lacks depth, originality, or genuine usefulness tends to lose rankings at this stage — not because it was AI-generated, but because users did not find it satisfying.
What Google's Algorithm Is Actually Measuring
Google has stated repeatedly that it does not penalize content based on how it was produced. What it penalizes is content that fails to demonstrate experience, expertise, authoritativeness, and trustworthiness — the E-E-A-T framework. The addition of "Experience" to this framework in 2022 was significant: it introduced a signal that purely AI-generated content structurally cannot provide. First-hand experience, personal insight, and real-world perspective are precisely the signals evaluators look for, and they are things AI alone cannot authentically generate.
The March 2024 core update reinforced this direction sharply. Many websites publishing large volumes of unedited AI content saw dramatic drops or complete deindexation. The pattern was consistent: sites relying on AI for quantity, without human oversight for quality, were the primary targets.
The Performance Gap in AI Search Environments
The picture becomes even more nuanced when you look beyond traditional Google rankings and consider how content performs in AI-generated answers — the environment that is reshaping search in 2026.
According to Ahrefs data, AI Overviews are more likely to cite AI-generated content than purely human-written content. This might seem contradictory — but it reflects a key structural insight. AI Overviews favor content that is clear, structured, and extractable, which well-edited AI-assisted content tends to be. The citation advantage goes not to pure AI content or pure human content, but to content that is optimized for clarity and semantic structure regardless of its origin.
Seer Interactive found that 85% of AI Overview citations were published within the last two years, and 44% were from 2025 alone. Content freshness is a major factor in AI citation, which creates an ongoing obligation to update and refresh rather than publish and forget.
The traffic dimension adds further complexity. According to WebFX, generative AI traffic is growing 165x faster than organic search traffic. However, as of mid-2025, AI platforms still generate only about 0.1% of total web traffic — with Google sending 345 times more traffic than ChatGPT, Gemini, and Perplexity combined. The opportunity is real but still emerging.
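The scale gap is easy to make concrete with a quick calculation. In the sketch below, the 0.1% share and 345x multiplier are the figures cited above; the total-traffic number is purely hypothetical, chosen only to make the ratio tangible.

```python
# Back-of-envelope comparison using the cited WebFX figures (mid-2025).
# total_visits is a hypothetical stand-in, NOT a reported number.

ai_share = 0.001          # AI platforms: ~0.1% of total web traffic
google_multiplier = 345   # Google sends ~345x the traffic of ChatGPT + Gemini + Perplexity combined

total_visits = 1_000_000  # hypothetical monthly site visits, for illustration only
ai_visits = total_visits * ai_share
google_visits = ai_visits * google_multiplier

print(f"AI platforms: {ai_visits:,.0f} visits")   # 1,000 visits
print(f"Google:       {google_visits:,.0f} visits")  # 345,000 visits
```

Even with AI traffic growing 165x faster, the absolute numbers show why organic search still carries the strategy today.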
What High-Performing AI-Assisted Content Looks Like
The data converges on a clear profile for content that performs well whether measured by traditional rankings, AI citations, or user engagement.
Hybrid creation process. The most consistent finding across every major study is that AI-assisted content reviewed and enhanced by human experts performs dramatically better than unedited AI output. Ahrefs found that websites using AI content grow 5% faster than those that do not — but only when the content goes through a meaningful editorial process. Of the marketers surveyed, 97% confirmed they do not publish pure AI content without review.
Structural clarity. Research from Chris Green and Seer Interactive found that structured content using headings and lists outperforms dense paragraphs, particularly for non-question queries. AI Overviews favor this format. So does user engagement. A page that is easy to scan is a page that gets read and cited.
Original contribution. The strongest differentiation for AI-assisted content comes from what the human adds: proprietary data, case studies, personal experience, or expert analysis that cannot be found elsewhere. This original layer is what separates content that sustains rankings from content that peaks and fades.
Topical depth. Pages that cover a subject comprehensively — addressing multiple related questions and subtopics — consistently outperform narrow articles. Search engines reward thoroughness, and AI can genuinely help achieve it when guided correctly.
Content freshness. Ahrefs found that content cited by AI search platforms is, on average, 25.7% fresher than content cited in traditional organic results. Keeping key pages updated is not optional — it is a competitive requirement.
The Cost Reality
One reason AI content strategies remain attractive despite the quality challenges is the cost structure. According to Ahrefs research, the average cost per AI-generated blog post is $131, compared to $611 for a human-written post. Marketers using AI publish 42% more content — a median of 17 articles per month versus 12 for those not using AI.
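The cost figures above can be combined into a rough monthly comparison. All inputs in the sketch below come from the cited Ahrefs research; nothing else is assumed.

```python
# Rough monthly content-budget comparison using the Ahrefs figures.

cost_ai = 131        # average cost per AI-assisted post (USD)
cost_human = 611     # average cost per human-written post (USD)
posts_ai = 17        # median posts/month for teams using AI
posts_human = 12     # median posts/month for teams not using AI

monthly_ai = cost_ai * posts_ai          # 2,227 USD
monthly_human = cost_human * posts_human # 7,332 USD
savings = 1 - monthly_ai / monthly_human

print(f"AI-assisted:   {posts_ai} posts for ${monthly_ai:,}")
print(f"Human-written: {posts_human} posts for ${monthly_human:,}")
print(f"Budget difference: {savings:.0%}")  # ~70% lower spend, with 42% more output
```

The arithmetic explains the pull: roughly 70% lower spend for 42% more output — which is exactly why the quality caveats in the studies above matter so much.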
The economic case for AI-assisted content is real. The risk is in misinterpreting speed and volume as a substitute for quality. Teams that use AI to produce more content of the same caliber consistently outperform those that use it to produce more content with less oversight.
Practical Conclusions for Content Teams
The evidence points to several clear strategic principles.
AI content can rank, but the ceiling for unedited AI output is low. Pure AI pages rarely reach position one in competitive searches and face ongoing exposure to algorithm updates targeting low-effort content.
The hybrid approach — using AI for drafting, research, and structural efficiency, combined with human expertise for insight, accuracy, and originality — consistently produces better performance across both traditional and AI search environments.
Publishing frequency matters, but not at the cost of quality. The 42% increase in publishing frequency that AI enables is an advantage only when it does not come with a proportional drop in depth or originality.
AI citations and traditional rankings are increasingly distinct ecosystems. Content optimized for one does not automatically perform in the other. A complete content strategy in 2026 must account for both.
Freshness is no longer a nice-to-have. With 85% of AI Overview citations coming from content published in the past two years, regular content updates are a structural requirement for sustained visibility in AI-driven search.
Final Thoughts
The question of whether AI-generated content performs in search has a clear answer by 2026: it depends entirely on how it is used.
Mass-produced, unedited AI content generates short-term impressions and long-term problems. Content that combines AI efficiency with genuine human expertise, clear structure, and regular updates performs as well as or better than traditional content — and at significantly lower cost.
The future of content strategy is not AI versus human. It is AI and human, working together with a clear understanding of what each does best. Search engines are not evaluating how content is created. They are evaluating how useful it is. That has not changed — and it will not.

Written by Catalin Dinca