The term “AGI” is almost useless at this point
We’ve entered the fuzzy cloud of “AGI-ish”—now we need more specific and ambitious milestones
People often treat the term “artificial general intelligence” (AGI) as if it means a single clear thing. But when pressed for a clear definition, their answers—though they may appear superficially similar—tend to differ a lot in the details.1 Is AGI about matching human performance, or exceeding it? Is it about all tasks, or just cognitive ones—and in either case, about most or all of those tasks? Is the important part that AI can perform economically valuable work, or that it can learn new things as efficiently as humans?
It used to seem possible that, in practice, the differences between these definitions might not matter all that much. If AI capabilities progressed smoothly—say, from mouse- to chimp- to human-level, or from preschooler to high schooler to PhD graduate—then all of the above definitions might be fulfilled at roughly the same time.
These days, though, it’s clear that AI capabilities are highly jagged. This makes it likely that different definitions of AGI will be fulfilled at very different times.
To put it another way: many people seem to treat AGI as a “know-it-when-you-see-it” kind of thing.2 But we now simultaneously have serious people claiming that AGI has been achieved (in April 2025, December 2025, February 2026) and others saying it’s still a decade or more away. Expecting that we’ll know it when we see it is patently not working.

I don’t want to just hate on the idea of AGI here—it had a good run, and it was more useful in the past. The problem is that AGI is a “fuzzy cloud” of a concept. It’s not one specific point in concept-space, but a spread of many related points:
[Figure: fuzzy clouds from a distance vs. up close]
It’s not inherently bad for a concept to be fuzzy; many useful concepts are. Love, life, justice, freedom… As long as invoking the concept lets you gesture in a direction that your listener understands, fuzziness doesn’t have to be a big problem.
For a long time, this was the situation with AGI: we were far enough away from the fuzzy cloud that the specifics didn’t really matter. Back when the term was coined in the early 2000s, the point of talking about artificial “general” intelligence was to contrast with the “narrow” AI systems that existed at the time. “Narrow AI” refers to models that can each do one specific thing, like recognizing zip codes, detecting credit card fraud, or filtering spam. Back when the AI that we could build in reality was far from any definition of AGI, it was helpful to be able to gesture in the direction of much more capable, general-purpose systems that we might one day develop:
But that’s changed. Today’s best AI systems are good enough that they’re now inside the fuzzy conceptual cloud of “AGI-ish”: that is, they’ve surpassed some people’s definitions of AGI, while falling well short of others’. As a result, talking about “AGI” is no longer a helpful way to gesture in a rough direction—instead, it’s likely to make some people think you mean one thing, and others imagine something totally different:
If you ask me, “AGI” stopped being a useful term as early as 2022. The original ChatGPT, released that November, was not human level in most ways… but it was clearly “general” in some meaningful way that had not been true of AI systems before the advent of large language models. Rather than ditch “AGI” in 2022, though, we just coined a new term, “general-purpose artificial intelligence,” which is obviously a totally different thing from artificial general intelligence.
In 2026, it’s time to move on.
“AGI” alternatives
In my experience, there are two big use cases for a term like AGI:
1. Predicting a phase change: “Sure, things are a certain way now, but once we have AGI, things will be a whole other way…”
2. Predicting we’re not close to the ceiling of AI progress: “AI has gotten a lot better over the past few years, but there’s still a long way to go…”
If you want to talk about a phase change, I think the best approach is to be as specific as you can about what milestone you think will trigger the phase change and why. Some options:
- Fully automated AI R&D (which could lead to an intelligence explosion)
- AI that is as adaptable as humans (which could lead to massive, irrecoverable job loss)
- Self-sufficient AI (which would no longer need humans for energy, chips, or anything else, and could therefore dispense with them)
- AI becoming conscious or otherwise worthy of moral status (which should radically change how we interact with AI)
If you want to talk about a high ceiling, one good option is just to describe that: “AI that’s far more capable than what we have today.” If you want a single term, the simplest option is “superintelligence,” which roughly means AI that is far smarter than humans in most or all ways. Like AGI, superintelligence is a fuzzy cloud of a concept, but it’s still far enough away from the AI we have today that it can be a helpful direction to gesture in, just like AGI was 20 years ago.3
As Ajeya Cotra points out, though:
> Bloodless phrases like “cognitive tasks” and “virtually all” make people’s eyes glaze over, and it’s very easy for different people to interpret them in massively different ways.
So for people who think the world is going to be radically transformed by advanced AI, I think it’s helpful to talk less about either AGI or superintelligence, and instead describe vivid, concrete milestones that you think will happen soon due to increasingly capable AI. The classic of the genre is “all the humans are dead,” but other options include things like “civilization’s energy output is 10x-ing every year” or “near-light-speed space probes are on their way to other stars.”4
While I do love being pedantic about terminology, I didn’t write this post just to gripe.
The gaps between different conceptions of what “AGI” even means are starting to damage our ability to think together about how to navigate the transition to a world with extremely advanced AI. A word that lets three people think they’re talking about the same thing—when one of them means “o3 with tool use,” another means “AI that could run civilization autonomously without humans,” and the third means “an AI that works exactly like the human brain, consciousness and all”—is an actively anti-helpful word.
I’m not expecting AGI to vanish from common usage overnight. For one thing, vagueness isn’t always a problem—for a term like “AGI-pilled,” it’s more of a feature than a bug.5 But I think it’s very likely that 10 years from now, talking about when we’ll build AGI or what will happen when we reach AGI will feel outdated and misguided, because it will be clear that AGI was always a fuzzy cloud, not a specific milestone. I hope this post can help us reach that conclusion sooner.
Footnotes

1. A very incomplete list of such definitions:
   - “AI systems that are generally smarter than humans” (OpenAI)
   - “Highly autonomous systems that outperform humans at most economically valuable work” (OpenAI, elsewhere)
   - “A system that can efficiently acquire new skills outside of its training data” (François Chollet)
   - “AI that’s at least as capable as humans at most cognitive tasks” (Google DeepMind)
   - “A system that can exhibit all the cognitive capabilities humans can” (Demis Hassabis)
   - “A software program that can solve a variety of problems in a variety of different domains, and that controls itself autonomously, with its own thoughts, worries, feelings, strengths, weaknesses and dispositions” (Pennachin and Goertzel, p. 1)

   Jasmine Sun’s https://define-agi.com/ also has a nice set.
2. My favorite AGI definition, from Joe Carlsmith: “you know, the big AI thing; real AI; the special sauce; the thing everyone else is talking about.”
3. That is, it’s far enough away in concept-space—who knows how far away it is in time.
4. People often try to use economic metrics to do this, but my understanding is that some of the people who have thought most about this believe we may see huge transformations that don’t necessarily show up in crazy-seeming economic metrics. So I find other indicators (like energy use) more helpful. h/t to Ajeya Cotra for inspiring the two examples in the post.
5. AGI-pilled, adj.: “thinks AI will get really good really soon and is a really huge deal, no, huger than that, no, really”