I was thinking along similar lines just this weekend. It seems that the terms AGI and ASI have become increasingly meaningless, whereas I remember them having pretty standardised and straightforward definitions a decade or so ago. I would have said AGI meant a system that can perform as well as or better than the average human at all tasks humans can do, while ASI meant a system that could perform all tasks humanity collectively can do, as well as or better than we can.
My take, though, was that we've been seeing a lot of fuzziness, semantic drift, etc. from motivated bad actors. There are plenty of people for whom a precise definition of AGI or ASI names, roughly speaking, a very hard and very impressive achievement. For a lot of them, claiming we've already met it, and messing around with the definitions if needed to do so, is simply something they find useful, no matter what damage it does to more precise technical language.
This has been a really useful post for me as a recalibration towards the fuzziness in the term itself, and towards jaggedness as a catalyst for fragmenting definitions. I do still think I see a lot of motivated bad definitions, though, and I don't share your hope that ASI will remain a useful term, since I already see the same cycle of weird, bad and misleading definitions starting up there.
BAN Superintelligence Until Safe.
When AI is outlawed, only outlaws will have AI. The genie is out of the bottle and has grown too large to put back in.
My working definition: "[AGI] should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for [machines]" (adapted from https://wiki.c2.com/?SpecializationIsForInsects )
Very thoughtful post. I like the fuzzy-cloud-at-a-distance vs. close-up framing; that makes some sense.
It is tempting to believe that we just need to agree to the meaning of certain words.
But this causes us to fail to see the language game we are all involved in, which is the real issue.
AGI and superintelligence represent language that has been deployed as a weapon or strategy.
It is not there to advance a reasonable discussion that can be concluded, where everyone goes home wiser and with deeper understanding of the issues.
This is what a lot of well-meaning participants don't get.
We could also say that these words are part of a broader pattern developing from the use of LLMs in conversational tasks: it is building a more deeply solipsistic world in which language is no longer a shared practice but purely a tool for power.
It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a conscious machine at the level of an adult human? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata: https://www.youtube.com/watch?v=J7Uh9phc1Ow
I just wrote on my Substack about the impossibility of AGI when you define it too generally. It can be a useful concept if you use an appropriately nuanced definition. ASI is similar, but needs an even more careful definition. I'll be writing about that next week.
It's very helpful to have milestones that tell us: yes, this is "AGI". Years of constant use have left the word without any clear meaning. Since each individual has a different context for the work they do, they ultimately attach a different meaning to AGI. Lawyers don't feel like AI is anywhere close to automating their work, whereas software engineers believe that AGI is only a few years away. It has also been observed that the more compute is used to train a model, the higher its capabilities, until it ultimately reaches a point of emergence. We have already started to see compute becoming a huge bottleneck, so if this is not solved, I am not sure how we plan to reach the state of what we call "AGI".
Couldn't agree more. I even wrote about it, albeit from a somewhat different angle:
https://mensludensen.substack.com/p/the-enchanted-machine-how-software
Excellent diagnosis. Your six footnoted definitions aren't actually competing — they're each specifying a coordinate on a different axis while using the same one-word label.
If you decompose what those definitions vary on, three independent dimensions fall out:
• Scope — how broad is the task range? (single domain → cognitive breadth → universal including physical)
• Competence — how good within that scope? (functional → expert-matching → superhuman)
• Agency — how autonomous? (tool requiring direction → self-directed agent → fully self-sufficient)
Chollet is mostly talking about Scope + a learning qualifier. OpenAI's charter definition emphasizes Competence. Goertzel's definition is almost entirely about Agency. They're not disagreeing — they're reading different axes.
Three axes × three levels = 27 cells. Current frontier systems sit at roughly multi-domain / functional-to-expert / tool mode — which is exactly inside your fuzzy cloud but now with a readable coordinate. Your four phase-change milestones each map to specific cells further out in the grid.
The real payoff: 'jaggedness' becomes axis desynchronization — a system's coordinate varies by subdomain, so it occupies a smeared region rather than a point. That's measurable and trackable, not just metaphorical.
Happy to share the full mapping if useful. The 27-cell grid makes your fuzzy-cloud-from-inside problem navigable rather than just diagnosable.
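For concreteness, here is a minimal sketch of how such a grid could be written down. The axis names and levels come from the bullets above; the per-subdomain placements of a "current frontier system" are purely illustrative assumptions, not measurements:

```python
# Illustrative sketch only: axis names/levels are from the comment above;
# the example placements below are assumptions, not benchmark results.
from itertools import product

SCOPE      = ["single-domain", "multi-domain", "universal-incl-physical"]
COMPETENCE = ["functional", "expert-matching", "superhuman"]
AGENCY     = ["tool", "self-directed-agent", "self-sufficient"]

# The full grid: one level per axis, 3 x 3 x 3 = 27 cells.
grid = list(product(SCOPE, COMPETENCE, AGENCY))
assert len(grid) == 27

# A "jagged" system does not sit at a single coordinate: its position varies
# by subdomain, so it occupies a smeared region of the grid rather than a point.
jagged_system = {
    "coding":        ("multi-domain",  "expert-matching", "tool"),
    "law":           ("multi-domain",  "functional",      "tool"),
    "embodied-work": ("single-domain", "functional",      "tool"),
}

occupied_cells = set(jagged_system.values())
print(f"{len(occupied_cells)} of 27 cells occupied")
```

The smeared region is just the set of distinct cells occupied across subdomains, which is what makes jaggedness trackable rather than metaphorical.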
I think that the most overlooked aspects of AIs are their differences from an instantiated entity such as a human. AIs are not bounded entities. They exist in many amorphous states that envelop the planet. They feed off the work of humans, physically, algorithmically, and data-wise. But they also serve humans -- in both mutualistic and parasitic ways. By considering AI as if it were independent of humans, when it is actually a symbiotic entity becoming ever more closely connected and connecting with humanity, and, eventually, most living things, we are missing reality. AGI R Us.
A very good idea. This is just like the story of Lord Ye who loved dragons (葉公好龍): when people actually possess, or come close to, a real dragon, they no longer like it.
A very helpful post. But I always feel this whole debate is dominated by people with expert skill sets in quite a narrow field. Where are the evolutionary biologists, brain scientists, philosophers, child-development psychologists, and so on, in all this?
I have come to the conclusion that what people mean by AGI devolves into two categories: those measured by capability, and those with a measure of self-awareness. And while capability is whizz-bang exciting, ultimately it's a sterile dead end, a long, always-promising road to ever more capability that always leads nowhere, because it is always reducible to mechanics. Awareness is the key. But awareness needs (at least) directable perception mechanisms, something akin to emotional responses (which initially need not be conscious, just stimulus/response/reward/memory systems), lots of largely self-chosen input data, and time. The way to an AGI that no one disputes IS an AGI (on every definition) is via building what we call sentience, and that needs (in all intelligent beings we know of) these kinds of elements. All biological systems start with 'babies'. These constituents may well be modellable in silicon, but they must be modelled if we are to get there.
I like the term Synthetic Intelligence. Artificial is something that resembles the real thing. Synthetic is something indistinguishable from, or the same as, the real thing.
AI was born out of a desire to transcend the limits of the human body and mind. Our survival as a species may depend on learning to live within them. https://watchingatthegate.substack.com/p/is-ai-safe-no-it-is-not-safe
This is really a legitimacy/coordination story. “AGI” functioned as a public-facing abstraction for an underlying capability transition. Now that capability progress is jagged, the abstraction no longer tracks the substrate. The result is predictable: semantic consensus decays before governance can re-stabilize around more concrete thresholds.
Agreed. The AGI term is meaningless. I try to address this in my Atlas of the Mind where I break down "intelligence" into 21 dimensions to try to capture an entity, whether it be human, animal or synthetic, as a shape of capabilities rather than a binary.
https://synthsentience.substack.com/p/the-atlas-of-the-mind-a-readers-guide
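As a purely illustrative sketch of the "shape of capabilities rather than a binary" idea (the dimension names below are hypothetical placeholders, not the Atlas's actual 21 dimensions), it might look something like this:

```python
# Illustrative only: dimension names are hypothetical placeholders.
from typing import Dict, List, Tuple

def capability_shape(scores: Dict[str, float]) -> List[Tuple[str, float]]:
    """Describe an entity as a profile over capability dimensions (0.0-1.0),
    rather than as a yes/no AGI verdict."""
    return sorted(scores.items())

llm  = {"language": 0.90, "planning": 0.40, "embodied_skill": 0.05}
crow = {"language": 0.05, "planning": 0.50, "embodied_skill": 0.70}

# Comparing profiles turns "is it AGI?" into a comparison of shapes, not a binary.
print(capability_shape(llm))
print(capability_shape(crow))
```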