Discussion about this post

Konrad Seifert:

Thanks for writing this up succinctly! The Future and Its Enemies has been on our coffee table for a while, since Chiara finished it. Time I have a look.

I've been thinking about the Fractal Altruism frame a lot these last few weeks, and I think it neatly captures some of the things you're pointing at here. It also fits my model of complex systems very well. As problems become higher-dimensional, autonomous units have to get smaller, not larger, to make sense of the additional variables. Units then develop heuristics to compress their local understanding and communicate with others. If your unit has reason to trust another unit's compression algorithm, you can exchange local insights as abstractions, i.e. coordinate. Units that should coordinate therefore need mutually legible signatures. Yet most of the time, as units get smaller, their coordination budgets shrink rather than grow. So my main recommendation is: shrink autonomous unit size while increasing each unit's share of coordinators.

This also maps very well onto David Manheim's critique of Bostrom's Vulnerable World Hypothesis: our world isn't vulnerable, it's fragile. And with increasing fragility, the trick is not to increase global pressure (i.e. the panopticon approach) but to enable lots of local adaptation. https://philarchive.org/rec/MANSFA-3

TC:

I really like this framing. I loved Postrel's book when I initially read it long ago, but never thought to revisit her ideas here. Thank you!

13 more comments...
