You've seen this headline before.

"AI will automate software engineering in 6 months."

You saw it in March 2025. You saw it in October 2025. You're seeing it again in January 2026. Same CEO. Same company. Same timeline. The deadline just keeps moving forward like a horizon you're driving toward but never reaching.

This is the story of one prediction, repeated three times, and what the public response reveals about how we process hype.


The Timeline

March 10, 2025 (Council on Foreign Relations, New York)

Anthropic CEO Dario Amodei sits across from CFR President Michael Froman. When asked about AI's potential impact on programming, he offers this:

"I think we will be there in three to six months, where AI is writing 90 percent of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code."

Deadline implied: June-September 2025 for 90%. March 2026 for "essentially all."

October 2025 (Dreamforce Conference, San Francisco)

Six months later, Amodei sits across from Salesforce CEO Marc Benioff. When Benioff asks about that March prediction, Amodei responds:

"I made this prediction that, you know, in six months, 90% of code would be written by AI models. Some people think that prediction is wrong, but within Anthropic and within a number of companies that we work with, that is absolutely true now."

The claim has quietly narrowed. March's prediction applied to coding broadly. October's confirmation applies only to Anthropic and "a number of companies that we work with."

January 21, 2026 (World Economic Forum, Davos)

Ten months after the original prediction, Amodei appears on a panel called "The Day After AGI" alongside DeepMind CEO Demis Hassabis. His updated forecast:

"We might be 6 to 12 months away from when the model is doing most, maybe all of what software engineers do end-to-end."

New deadline: July 2026 to January 2027.

The prediction didn't fail. It just reset.


The Goalpost Problem

Here's where it gets interesting.

When Amodei claimed his March 2025 prediction had "absolutely" come true by October, independent analysis told a different story.

A detailed examination published on LessWrong found that the 90% figure required creative accounting:

"If you include code which was at all useful (including things like scripts which only get run once), the fraction written by AI at Anthropic is higher, probably closer to 90% than 50%, but this is hard to measure and exactly what you include might make a big difference."

The prediction was technically "met" by expanding what counts as code.

A one-off bash script? Code. A throwaway test file? Code. Auto-generated boilerplate? Code.

The LessWrong analysis concluded: "When Dario said 'this is absolutely true', I don't agree: I don't think the prediction came true."

This is how predictions survive contact with reality. Not by being right, but by being redefinable.


What the Industry Data Actually Shows

GitHub has published detailed data on Copilot usage. It shows:

  • Copilot generates about 46% of code in files where it's enabled

  • Developers accept roughly 30% of AI suggestions

  • 88% of accepted code is retained in final submissions

That's meaningful. But it's not 90% of all code being written by AI.

The gap between these numbers and the CEO predictions isn't small. It's the difference between "AI helps developers work faster" and "AI has replaced most of what developers do."

One is happening. The other keeps being six months away.
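
How big is that gap, arithmetically? A rough back-of-envelope sketch: the 46% figure only covers files where Copilot is enabled, and the public data doesn't pin down how many files that is industry-wide, so the adoption number below is an invented assumption, not a GitHub statistic.

```python
# Back-of-envelope: distance between Copilot's published numbers and "90% of code."
# The enabled-file share is a made-up assumption, not a GitHub figure.

ai_share_in_enabled_files = 0.46  # GitHub: code generated in files where enabled
enabled_file_share = 0.50         # hypothetical: half of all files have Copilot on

industry_wide_ai_share = ai_share_in_enabled_files * enabled_file_share
print(f"{industry_wide_ai_share:.0%}")  # 23% -- nowhere near 90%
```

Even at 100% adoption, the published generation rate tops out at 46%, roughly half the predicted figure.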


The Public Response Pattern

When the January 2026 prediction hit, it spread across tech communities worldwide. The reaction was immediate and predictable. Not because people are cynical, but because they've been here before.

The most common response wasn't technical rebuttal. It was pattern recognition.

People immediately connected the prediction to another recurring forecast: Tesla's Full Self-Driving timeline.

  • 2016: "Full self-driving in 2 years"

  • 2017: "Full self-driving in 2 years"

  • 2018: "Full self-driving in 2 years"

  • (continues through 2026)

The comparison has become shorthand for a certain type of tech prediction. One that's always imminent, never arriving.

Other comparisons surfaced: Amazon's drone delivery (announced 2013, still not at scale), blockchain replacing traditional banking (any year now since 2017), autonomous vehicles making human drivers obsolete (2015's "10-year" prediction now expired).


The Incentive Structure

Several observers identified what they see as the core dynamic.

AI CEOs have strong financial incentives to make aggressive predictions. Anthropic raised $3.5 billion in early 2025, at a valuation reported above $60 billion. The company's ability to raise capital depends partly on the perceived imminence of transformative AI capabilities.

This doesn't mean the predictions are false. It means the timing isn't neutral.

A CEO predicting that their core technology will transform industries in the next 6-12 months is also a CEO justifying their current valuation and upcoming funding rounds. The predictions and the business model are intertwined.


The Split Nobody Talks About

The divide isn't believers vs. skeptics.

It's practitioners vs. observers.

A minority of developers (maybe 15-20% based on sentiment in tech forums) push back on the skepticism. They report genuine, significant productivity gains:

"The past 3 months have been the most monumental shift in the way I work as a software engineer in 15 years."

"At work I almost don't write any code at all and I'm delivering much, much faster."

These reports share a pattern: personal experience, specific tools mentioned, lower engagement than skeptical takes.

The skeptical responses get hundreds of upvotes. The "actually it's working for me" comments get single digits.

This isn't because the positive experiences are wrong. It's because confirmation of existing skepticism is more shareable than contradiction of it.

Both groups are correct about what they're observing. They're just observing different things.


Anthropic's Own Research

Here's an uncomfortable data point for the 90% narrative.

In late 2025, Anthropic's internal research team published a study examining how their own engineers use Claude. The findings, reported by Fortune:

  • Engineers reported using Claude for about 60% of their coding-related activities

  • More than half said they can "fully delegate" only 0-20% of their work to Claude

  • The most common uses were debugging and understanding existing code, not writing new features

  • Engineers handed Claude tasks that were "repetitive," "boring," and notably, "where code quality isn't critical"

The study also found that about 27% of the work now being done with Claude simply wouldn't have happened at all without it. Dashboards that weren't worth building manually. Small fixes that would have been ignored.

This is real productivity gain. But it's not "90% of code written by AI."

An Anthropic spokesperson noted the study had a small sample size. But it's their own data, from their own engineers, painting a more modest picture than the CEO's public statements.


The Comparison Nobody Made

One analysis contained the most useful framing:

"I think that it would be a case of Jevons paradox. As technology makes a resource more efficient to use, the total consumption of that resource can actually increase."

If AI makes code cheaper to produce, we might not write less code. We might write more.

The prediction assumes a fixed amount of software needed in the world. But demand for software has never been fixed. It expands to fill available capacity.

The 19th-century version: more efficient steam engines needed less coal per unit of work, so total coal consumption fell, right? No. It soared, because efficient engines suddenly made coal-powered machinery economically viable for applications that hadn't been possible before.

AI might write 90% of code by 2027. And there might be 10x more code being written. And companies might need the same number of engineers.

Or more.

Nobody knows.
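
That last scenario is just arithmetic. A minimal sketch, with deliberately invented numbers:

```python
# Jevons-paradox arithmetic with made-up numbers: if demand for code grows 10x
# while AI writes 90% of it, humans end up writing as much code as ever.

baseline_code = 100      # units of code written today, essentially all by humans
human_share = 0.10       # hypothetical future: AI writes the other 90%
demand_multiplier = 10   # hypothetical growth in total demand for code

future_code = baseline_code * demand_multiplier  # 1,000 units
human_written = future_code * human_share        # back to 100 units

print(f"human-written code: {human_written:.0f} units, same as today")
```

The 90% share and the 10x multiplier are placeholders; the point is that an AI share and a demand multiplier can cancel out, leaving the human workload unchanged.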


What's Actually Happening

After three rounds of this prediction:

What's verified:

  • AI coding tools have meaningfully improved since 2023

  • Adoption has increased significantly (84% of developers use or plan to use AI tools, per Stack Overflow's 2025 survey)

  • Individual developers report productivity gains

  • GitHub Copilot generates ~46% of code where enabled, with ~30% acceptance rate

  • Some companies report high AI code contribution internally

What's not verified:

  • Industry-wide 90% AI code generation

  • Imminent full automation of software engineering

  • The specific 6-12 month timelines (which keep resetting)

What's contested:

  • Whether internal Anthropic metrics generalize to the industry

  • How to measure "AI-written code" consistently

  • Whether productivity gains translate to workforce reduction


The Real Question

The interesting question isn't whether AI will eventually write most code. It probably will.

The interesting question is why the timeline stays locked at 6-12 months regardless of when you ask.

  • March 2025: 6 months to 90%, 12 months to "essentially all."

  • October 2025: already happening (at Anthropic, with expanded definitions).

  • January 2026: 6-12 months to "most, maybe all."

The destination keeps arriving. The arrival date keeps moving.

This could mean the technology is developing in unexpected ways that keep pushing the goalpost forward. It could mean the original predictions were overconfident. It could mean the definition of "automation" keeps shifting to match whatever current capabilities exist.

Or it could mean that "6 months" is simply the optimal timeframe for a prediction. Close enough to feel urgent, far enough to avoid immediate falsification.


The Lesson

Pattern recognition isn't the same as cynicism.

People who've watched tech predictions for a decade have learned something: the predictions themselves are often accurate about direction while being systematically wrong about timing.

AI will transform software development. Autonomous vehicles will become mainstream. Voice assistants will become genuinely useful. These things are happening.

But "happening" and "6 months away" are different claims. The first is about trajectory. The second is about timeline. And tech leaders are much better at predicting trajectories than timelines, especially when their business model depends on urgency.

The reasonable response isn't to dismiss the technology. It's to discount the timeline.

See you in six months, when we'll be six months away again.
