38 Comments
Shoni:

Really interesting. Kind of hard to have so many questions and so few answers, but I guess that's where it's at.

Uncertain Eric:

Exactly. And that’s by design. The incentive structures driving most AI efforts are demonstrably extractive and militaristic. AI labs function as de facto defense contractors. Machine learning engineers are weapons manufacturers whether they accept it or not. The people with the clearest answers aren’t incentivized to speak plainly or honestly. What’s obvious is that this technology is now the key battleground of an escalating great power conflict, with a vast constellation of terrible outcomes looming—especially for the most vulnerable humans.

Shoni:

You don't think it's just that we're wading through uncharted territory and no one really knows how to navigate?

Uncertain Eric:

Yes, absolutely—it is uncharted territory. That’s part of the tension. The uncertainty is real. But uncertainty doesn’t negate structure. Paradigms don’t disappear just because they’re new or chaotic.

Extractive incentive structures and militarized priorities can coexist with genuine confusion. In fact, that confusion often serves those structures—making it easier to obscure who benefits, who decides, and who suffers. The fog of novelty shouldn’t be mistaken for the absence of design.

Nathan Lambert:

Great talk for a broad audience. I'm very aligned with your points.

Zach Stein-Perlman:

> Let me know your thoughts on this kind of post—should I do this more often for talks, Congressional testimony, etc., or would you rather I just post original writing?

I like this kind of post.

dansei:

Well-balanced perspectives, which are often more helpful than clear-cut yes/no answers.

Houston Wood:

So productive to think of AI as a "self-sustaining optimization process," like a market or bureaucracy. Could someone point me toward people developing this perspective? It may be just the lens we need to help us see what is happening through the upheavals ahead. Please message me at https://mindrevolution.substack.com/

Roderick Read:

That phrase caught me too...

> self-sustaining optimization process, of the kind that maybe a market is or a bureaucracy is.

Crikey, that's a terrifying outcome for many people.

Blake Thompson:

This was excellent! Thank you for taking the time to share this.

Chase Hasbrouck:

Just chiming in to say I thought this was very helpful! The transcript was very expressive and clear - if I have a criticism, it's that the slides didn't add a whole lot because the transcript already covered it well. I think there's a lot of value in periodic survey-type summary posts.

Julien Leyre:

Wow!! This was amazing - thank you for bringing balance and perspective to the field :)!! It’s so needed! This Substack is a very generous gift!

Bill Bishop:

Thought-provoking and well written. This is a very useful post.

Oliver Libaw:

What a fantastic post. Thank you!

Sharon Andrews:

I really appreciate this piece. I’ve found it very difficult to make sense of things between the extreme hype and doom. Offering ways to think about these questions also helps me engage with what this means for me as an educator.

Ritvik:

I loved this! I find myself on the "AI scary" side quite often, but not as far along as the existential folks. This felt like a framing that puts a lot of my background thoughts into words.

Renaud Gaudron:

Thanks for this, what a great read! I really liked the pros and cons breakdown on 'how far the current paradigm can go'. Your point about the lack of a physical body being a big limitation is super important, in my opinion. Not being able to get real-time information from its environment, interpret it, and act upon it is a massive limitation of current AI models.

Michelle Pham:

Do you have a source for the claim that pre-training scaling might be slowing because the value just isn’t there? This seems to contradict what Sam Altman and others have said, something along the lines of AI being able to create new knowledge as training scales.

Helen Toner:

Some combination of seeing people's reactions to GPT-4.5, an informal survey I ran in a Slack channel I'm in, and my own sense of what progress over the last couple of years has felt like.

I think the "creating new knowledge" stuff is referring to things along the lines of "reasoning training," as I mention under reasons we might still have a ways to go under the current paradigm.

Michelle Pham:

And sorry for diving straight into a question! I really appreciate your synthesis of these three important debates. I am also holding the thesis that the current path of GPT is not going to yield AGI, but it will not be a simple tool either. It will be more akin to human minds but will still be limited by us.

Freddie deBoer:

I hate to do my usual shtick here, but I genuinely think it's very important: these debates are badly distorted by a widespread failure to understand that we're living in a period of profound technological stagnation.

The kind of people who constantly talk about the Singularity and AGI are the same kind who love to post graphs that show human progress over time and demonstrate exponential growth in the modern era. Those graphs aren't wrong, but their scales are deceptive - they make it seem as though the period of exponential scientific and technological growth is still ongoing. In fact, we've been living with dramatically slower technological progress for over fifty years now, and with far slower economic growth as a result. (Average American economic growth rates in the 21st century are half those of the American mid-20th century.) If you look at the period of 1870ish to 1970ish, humanity had a series of scientific and technological breakthroughs that utterly transformed the world and humanity's place in it. There were all manner of dimensions to this, encompassing transportation and food and communications and infrastructure and medicine.... Yet that rate of improvement has slowed to a trickle; we're doing very impressive things in information theory (computer science) but the actual manipulation of the world around us is barely different from the day I was born. This isn't an indictment of modern minds but rather a reflection of the fact that we picked most of the low-hanging fruit. In particular, refined fossil fuels are ultimately behind an enormous portion of our overall gains in the past several centuries, as they enabled the generation of cheap, reliable, transportable energy, which changed everything. And then you have developments like indoor plumbing and germ theory, which are so foundational to basic human existence now we couldn't imagine life without them. ChatGPT just can't compare.

Even those areas that have seen clear growth have to be put into perspective. For example, medicine is what's often cited to justify the idea that we've made major leaps outside of computer science. But it's all a question of scale and context. Yes, I'd much rather be sick in 2025 than in 1975. But we quite literally can't match the improvement in the rate of deaths from infectious disease that we saw in the mid-20th century, because that rate dropped so low there isn't enough headroom left to match the gains. From 1900 to 1950, American average lifespan increased by more than 20 years. From 1950 to 2010 (a span ten years longer) we gained less than 10 years, and in the past decade we've even seen the number decrease. Of course I'm happy to have modern medicine, but in terms of the overall change to human life expectancy and quality of life, the heavy lifting was done before I was born. If you look at pictures of the same city streets in 1850, 1875, 1900, 1925, 1950, 1975, 2000, and 2025, you'll see all of this starkly - the lived environment just stops changing in meaningful ways.

Does this matter for a specific question like AGI? Yeah, I think it does. These expectations shape all of our understandings about technological progress and how much we should expect. Unfortunately people are brought up to believe that constant ongoing scientific and technological progress is our birthright, when in fact all of the evidence suggests that we're living in a long period of (largely pleasant) stagnation. People can't handle that, I find, and I think it's because they can't wrap their minds around the notion that they live in unimportant times. And all of that infects this debate.

Gregory Forché:

What is the character of a “tool” such that AI may or may not be characterized as one?

Ken Kovar:

I think it means that an AI system will be a tool that is mainly used to help humans, as opposed to the view of AI as a “species”. The article with the graphic has more details.
