"Long" timelines to advanced AI have gotten crazy short
The prospect of reaching human-level AI in the 2030s should be jarring
Welcome to Rising Tide! I’m publishing 3 posts this week to celebrate the launch of this Substack—this is post #1. New posts will be more intermittent after this week. Subscribe to get them straight in your inbox.
It used to be a bold claim, requiring strong evidence, to argue that we might see anything like human-level AI any time in the first half of the 21st century. This 2016 post, for instance, spends 8,500 words justifying the claim that there is a greater than 10% chance of advanced AI being developed by 2036.
(Arguments about timelines typically refer to “timelines to AGI,” but throughout this post I’ll mostly refer to “advanced AI” or “human-level AI” rather than “AGI.” In my view, “AGI” as a term of art tends to confuse more than it clarifies, since different experts use it in such different ways.1 So the fact that “human-level AI” sounds vaguer than “AGI” is a feature, not a bug—it naturally invites reactions of “human-level at what?” and “how are we measuring that?” and “is this even a meaningful bar?” and so on, which I think are totally appropriate questions as long as they’re not used to deny the overall trend towards smarter and more capable systems.)
Back in the dark days before ChatGPT, proponents of “short timelines” argued there was a real chance that extremely advanced AI systems would be developed within our lifetimes—perhaps as soon as within 10 or 20 years. If so, the argument continued, then we should obviously start preparing—investing in AI safety research, building international consensus around what kinds of AI systems are too dangerous to build or deploy, beefing up the security of companies developing the most advanced systems so adversaries couldn’t steal them, and so on. These preparations could take years or decades, the argument went, so we should get to work right away.
Opponents with “long timelines” would counter that, in fact, there was no evidence that AI was going to get very advanced any time soon (say, any time in the next 30 years).2 We should thus ignore any concerns associated with advanced AI and focus instead on the here-and-now problems associated with much less sophisticated systems, such as bias, surveillance, and poor labor conditions. Depending on the disposition of the speaker, problems from AGI might be banished forever as “science fiction” or simply relegated to the later bucket.
Whoever you think was right, for the purposes of this post I want to point out that this debate made sense. “This enormously consequential technology might be built within a couple of decades, we’d better prepare,” vs. “No it won’t, so that would be a waste of time” is a perfectly sensible set of opposing positions.
Today, in this era of scaling laws, reasoning models, and agents, the debates look different.
What counts as a “short” timeline is now blazingly fast—somewhere between one and five years until human-level systems. For those in the “short timelines” camp, “AGI by 2027” had already become a popular talking point before one particular online manifesto made a splash with that forecast last year. Now, the heads of the three leading AI companies are on the record with similar views: that 2026-2027 is when we’re “most likely” to build “AI that is smarter than almost all humans at almost all things” (Anthropic’s CEO); that AGI will probably be built by January 2029 (OpenAI’s CEO); and that we’re “probably three to five years away” from AGI (Google DeepMind’s CEO). The vibes in much of the AI safety community are correspondingly hectic, with people frantically trying to figure out what kinds of technical or policy mitigations could possibly help on that timescale.
It’s obvious that we as a society are not ready to handle human-level AI and all its implications that soon. Fortunately, most people think we have more time.
But… how much more time?
Here are some recent quotes from AI experts who are known to have more conservative views on how fast AI is progressing:
Yann LeCun:
“Reaching human-level AI will take several years if not a decade.” (source)
“AI systems will match and surpass human intellectual capabilities… probably over the next decade or two” (video, transcript)
Gary Marcus:
[AGI will come] “perhaps 10 or 20 years from now” (source)
Arvind Narayanan:
I initially had this quote from Arvind:
“I think AGI is many many years away, possibly decades away” (source)
I interpreted this to mean that he thinks 5 years is too short, but 20 years is on the long side. When I ran this interpretation by Arvind, he added some interesting context: he chose his phrasing in that interview in light of what he sees as a watering down of the definition of AGI, so his real timeline is longer. But to clarify what that meant, he said:
“I think actual transformative effects (e.g. most cognitive tasks being done by AI) is decades away (80% likely that it is more than 20 years away).” (source: private correspondence)
…in other words, a roughly 20% chance that AI will be doing most cognitive tasks within the next 20 years, i.e., by 2045.
These “long” timelines sure look a lot like what we used to call “short”!
In other words: Yes, it’s still the case that some AI experts think we’ll build human-level AI soon, and others think we have more time. But recent advances in AI have pulled the meanings of “soon” and “more time” much closer to the present—so close to the present that even the skeptical view implies we’re in for a wild decade or two.
To be clear, this doesn’t mean:
We’ll definitely have human-level AI in 20 years.
We definitely won’t have human-level AI in the next 5 years.
Human-level AI will definitely be built with techniques that are popular today (generative pre-trained transformers, reasoning models, agent scaffolding, etc.).
Every conversation about AI should be about problems connected with extremely advanced AI systems, with no time or attention for problems already being caused by AI systems in use today.
But it does mean:
Dismissing discussion of AGI, human-level AI, transformative AI, superintelligence, etc. as “science fiction” should be seen as a sign of total unseriousness. Time travel is science fiction. Martians are science fiction. “Even many skeptical experts think we may well build it in the next decade or two” is not science fiction.
If you want to argue that human-level AI is extremely unlikely in the next 20 years, you certainly can, but you should treat that as a minority position where the burden of proof is on you.
We need to leap into action on many of the same things that could help if it does turn out that we only have a few years. These will likely take years and years to bear fruit in any case, so if we have a decade or two then we need to make use of that time. They include:
Advancing the science of AI measurement, so that rather than the current scrappy approach, we have actually-good methods to determine what a given model can and cannot do, how different models compare with each other (and with humans), and how rapidly the frontier is progressing.
Advancing the science of interpretability, so we can understand how the hell AI systems work, when they are likely to fail, and when we should(n’t) trust them.
Advancing the science of aligning/steering/controlling AI systems, especially systems that are advanced enough to realize when they’re being tested.
Fostering an ecosystem of 3rd-party organizations that can verify high-stakes claims AI developers make about their systems, so that governments and the public aren’t dependent on developers choosing to be honest and forthcoming.
Working towards some form of international consensus about what kind of AI systems would be unacceptably risky to build, and what kind of evidence would help us determine whether we are on track to build such systems. (Maybe someone should establish some kind of international summit series on this…?)
Building government capacity on AI, so that there’s real expertise within Congress and the executive branch to handle AI policy issues as they arise.
It’s totally reasonable to be skeptical of the “AGI by 2027!” crowd. But even if you side with experts who think they're wrong, that still leaves you with the conclusion that radically transformative—and potentially very dangerous—technology could well be developed before kids born today finish high school. That’s wild.
1. I routinely hear people sliding back and forth between extremely different definitions, including “AI that can do anything a human can do,” “AI that can perform a majority of economically valuable tasks,” “AI that can match humans at most cognitive tasks,” “AI that can beat humans at all cognitive tasks,” etc. I hope to dig into the potentially vast gulfs between these definitions in a future post.
Personally, my favorite description of AGI is from Joe Carlsmith: “You know, the big AI thing; real AI; the special sauce; the thing everyone else is talking about.”
2. See, e.g., this 2016 report from the Obama White House: “The current consensus of the private-sector expert community, with which the NSTC Committee on Technology concurs, is that General AI will not be achieved for at least decades.”