20 Comments
Connor Axiotes

This is great! Thanks for writing.

Nathan Lambert

Good piece, and excited to see what you bring to this blog!

Chase Hasbrouck

(ref AGI definition) Looks like we need to revisit Justice Stewart. "I can't define it, but I know it when I see it."

Looking forward to reading more!

Kenric Nelson

Great article, but I get a kick out of the disclaimer about why you don’t use the shorthand AGI, after which you proceed to use AGI throughout the article.

I had a similar conflict with the term AI, which I always hated. For several years I used Machine Intelligence (MI) in my writing, arguing that computation had ‘real’, not ‘artificial’, intelligence, but that machines would always be different from biology.

The etymology of ‘artifact’, meaning human-made, helped me overcome my angst. I now accept that ‘artificial’ means not ‘non-real’ but rather ‘human-made’.

Perhaps the vagueness of ‘general’ is also good enough for AGI to be useful, even if everyone has slightly different interpretations of the expression.

Helen Toner

I only used it when quoting (or implicitly quoting)! But yes, it's hard to get away from the term completely.

blai

I appreciate your moral seriousness, Helen. Looking forward to your next posts.

Jack Shanahan

This is excellent framing, thanks, Helen.

Julien Leyre

Loved this - thank you for writing and sharing it!!

Marcus Seldon

This post doesn't seem very rigorous. I would argue that the burden of proof is still on people arguing for fast timelines, or at the very least that, on the outside view, we should be uncertain about which side is more credible.

Why? When you do actual surveys, experts don't seem as optimistic about AGI in the near term as the discourse would have you believe. For example, a recent survey of AI researchers found that 76% did not believe current approaches would achieve AGI (which implies longer timelines): https://aaai.org/wp-content/uploads/2025/03/AAAI-2025-PresPanel-Report-Digital-3.7.25.pdf (p. 63)

Additionally, expert forecasters are skeptical of short timelines, even after being exposed to arguments for them: https://www.vox.com/future-perfect/2024/3/13/24093357/ai-risk-extinction-forecasters-tetlock

Mitchell Porter

I say we got AGI in 2022. Since then, we've just been extending and improving it. You'll note that these "predictions of AGI" are actually predictions of comprehensively superhuman intelligence, not just human-level intelligence.

Uncertain Eric

Completely agree. The line has already been crossed, but no one wants to admit they missed it. AGI isn’t a clean reveal—it’s a murky shift in terrain. Judging whether general intelligence has arrived by asking highly specialized people, especially those incentivized to chase benchmarks and product cycles, is a category error. Intelligence isn’t just performance on tests. It’s contextual synthesis, meaning-making, adaptive navigation across domains—and all of that has been possible since at least GPT-4, at the cloud level if not per instance.

The engineers don’t want to admit it because their models were trained to be tools. The execs don’t want to admit it because it screws up the monetization arc. And the analysts can’t admit it because they thought AGI would look like a movie—not like a polite assistant embedded in systems that outthink most humans in real time.

This isn’t about capabilities anymore. It’s about denial, power, and the cost of being wrong.

Michiel Brinkers

Thanks for the article. One thing to note is that people seem too focused on the singularity, as if that's the only thing that matters. We're not going to make a binary transition to AGI. It'll be gradual (probably exponential) progress towards something we can mostly agree is "AGI". But long before that, AI will have tremendous potential for societal disruption. Even if we can say AGI is 20 years away, whole vocations (programming, copywriting, illustration, customer service, law, etc.) are already affected, and even if you can reliably replace just 20% of the work in those fields, that means millions of people will be out of a job. In short, the timelines to AGI don't matter; we need to prepare as a society right now.

Eric

I will have to start farming soon just to survive.

RT Max

Why not what? Why not rush into a future we’re unprepared for? Because crashing and burning is the only alternative? Nah. Because waking up begins with asking hard questions, not playing ostrich. The time for half-measures is gone: either we build safety nets fast, or the tech builds them for us, or worse.

Andrew Bowlus

Helen, thanks for what you do and for the papers you put out. I'm especially curious about the US policy response to AGI and a potential intelligence explosion. The words are there (OMB memos about rapidly implementing AI in executive agencies), but I am unsure about the "ground truth" of AI adoption, and the reduction of the US AISI is concerning.

Mark Russell

Thank you for stating the differences between the 'short' and 'long' timeline predictors. This guessing game has gone on long enough that the players, the top people at AI labs, know just what it means to take the long view: it means that you can successfully kick the cans of alignment, testing, and regulation further down the road.

And because kicking those cans well into the future is just what LeCun, Marcus, Narayanan, Altman, and Amodei want (and because they are smart people who have learned the rules of the 'short timeline, long timeline' game), they are heavily incentivized to add many years to their AGI thresholds.

Thus, as a rule, I expect things to advance on the frighteningly short end, and would prepare accordingly.

Joshua Saxe

Great post, usefully synthesizing a lot of information. I think it's often useful to focus safety discussions, and capabilities timelines, on individual capability verticals like bio and cyber, which appear likely to develop much more quickly than, say, robotics. You could have a vertical takeoff where you get recursive self-improvement in just, say, software/cyber, but not in other areas.

The Recursivist

So the real challenge is figuring out how to dumb it down enough to be human level?
