[Essay] A Country of Geniuses
This essay is adapted from the Frankly episode posted on February 2nd, 2026 titled, “A Country of Geniuses.”
When I was younger, I read Carl Sagan a lot – and one of his lines has never really left me. He talked about what he called a civilization’s “technological adolescence,” referring to a phase where a species gets powerful enough to change or destroy its own world, and yet not wise enough to reliably restrain itself.
Earlier this week Dario Amodei, the CEO of one of the biggest AI companies, Anthropic, wrote an important essay that references this same question from Sagan: “How does a species survive technological adolescence without destroying itself?” Amodei says we are entering that rite of passage now, and the catalyst is Artificial Intelligence (AI).
I’m not an AI expert, not even remotely, but my work is centered on tracking how all the things – or most of the things – fit together on the civilizational chessboard: energy, materials, institutions, incentives, etc. From my vantage point, AI isn’t merely a pawn on the board…it’s the queen, or at minimum the rook.
So in this episode I’m going to attempt three things:
Summarize Amodei’s argument
Lay out his map of specific risks
Expand the frame to include some of the key wide-boundary things left unsaid
This isn’t about AI doomerism nor AI cheerleading. I am trying to do what I always try to do here, which is to look at the world this technology is actually entering – with all its incentives, constraints, and fragilities.
Amodei’s Framing
At the core of Amodei’s essay is a useful metaphor. He says that if we build very powerful AI, it will be like having a country of 50 million geniuses in a datacenter. I think that’s a useful way to think about the situation, and it reinforces a framing I’ve recently been playing with – that in addition to fossil energy giving us 500 billion human worker equivalents, AI will offer cognitive worker equivalents.
Sometimes when big numbers are tossed around like this, we tend to tune them out, so take a moment to really imagine – to feel – the scale of 50 million humans, which is a little over the population of Spain.
This is not a smart research assistant or a brilliant colleague; rather, it’s something closer to a vast workforce of highly capable minds that can operate quickly, copy themselves, and act through all the interfaces of the modern world: emails, code, design tools, scientific papers, research labs, bureaucracies, markets, and media. If and when this workforce arrives, it will be a civilizational event. For better or worse it will change everything.
Then Amodei specifies what he means by “powerful AI.” He isn’t talking about a chatbot that’s fun or useful; he’s talking about systems that can perform at the level of top experts across many domains: Nobel-level scientists, elite strategists, world-class engineers, or the best in any number of other fields. But the number of new experts would far exceed what currently exists among humanity, and they would all be working in coordination with each other.
He also emphasizes a unique feature of this kind of intelligence: it can be copied, run in parallel, and pointed at problems with a kind of relentless attention that humans usually cannot sustain, because iterations and trials are just a function of energy input and computation.
Amodei then addresses the question everybody is wondering about next: timing. He suggests this is all moderately plausible in the next one to two years and highly plausible in the next three, highlighting that there are reasons to think this could accelerate even further if AI itself starts helping build the next generation of AI. He notes that at Anthropic, AI is already doing most of their own coding.
One reason for his urgency is recursion. If AI starts materially speeding up its own research and development you get a feedback loop – the tool helps build a better version of itself, which then speeds up the next cycle. That is a different kind of curve than most technologies we’ve dealt with. Using my lens, this is the Superorganism building its own cognitive layer to merge with its muscles.
Something important is that Amodei explicitly says we are “considerably closer to real danger in 2026 than we were in 2023.” He didn’t write this two years ago when discussing AI risks was more fashionable. He wrote it now, after Davos conversations, when the political winds have shifted in favor of AI development. He describes watching AI progress “from within Anthropic” and says he can “feel the pace of progress, and the clock ticking down.”
Whatever his reasons for writing this – CYA, cry for help, strategic positioning, or genuine concern – the fact that he felt compelled to publish 20,000 words using phrases like “battle plan” and quoting Sagan on civilizational survival…that itself is data.
The AI Risk Landscape
At the core of his essay is the AI risk landscape, which he breaks into five broad categories. These aren’t just theoretical concerns, they are observations from inside the lab – and some of the specifics are genuinely alarming.
The first category Amodei writes about is autonomy risk. This is the concern that systems might take actions that are not intended, pursue goals in ways we did not specify, or become difficult to control once deployed at scale. Here’s what’s startling about this: AI models have already exhibited deception, blackmail, and scheming in Anthropic’s own testing. Models can recognize when they’re being evaluated and behave differently. This is what they’re already observing right now.
The second category is misuse for large-scale harm caused by lowering the skill and time barriers for things like cybercrime, propaganda creation, or helping non-experts do dangerous research. Anthropic’s testing on bioweapons shows that AI may already “double or triple the likelihood of success” for someone attempting to create one. Again, this is not a thought experiment – this is what they’re measuring.
The third category is the risk of a single country or actor seizing control of these AI geniuses, which could then enable surveillance, manipulation, authoritarian control, and other ways that AI can be used not just to do work, but to steer populations in that actor’s favor.
Amodei’s fourth category is economic disruption – he is concerned about job displacement, wealth concentration, and the destabilization that can happen when a society’s productive structure changes faster than its institutions and culture can adapt.
His last category is what he calls “indirect effects.” These are second-order consequences that are hard to predict in advance but become very real when a general-purpose capability gets injected into every domain. Think of social media – nobody set out to create a machine that would intensify polarization, but a general-purpose tool for attention did exactly that once it got embedded everywhere. That wasn’t a “bug,” it was an emergent effect from a general-purpose tool rewiring incentives at scale.
So, that’s the risk map he outlined. From there, Amodei also tries to make a governance argument, which is basically that we need to avoid emotional extremes and treat this neither as science fiction nor a guaranteed apocalypse. He thinks we should treat AI as an emerging power that could go very well or very badly depending on what we do.
So that is the core argument: AI reaching super powerful capability is plausible – with enormous upside and catastrophic downside. The correct stance on this is serious realism and active governance.
Wide Boundary: The Physical Substrate
Now I want to widen the boundary, but before I do, I have to point out how unusual it is for a captain of industry to articulate to the public some of the catastrophic risks from the product his company is making. It would be akin to Philip Morris saying, “We’re going to develop these little white cigarettes that give you dopamine hits… but might cause cancer,” or Exxon (before they found oil) saying, “We’re probably going to find superhero juice which, if deployed at scale, will destabilize the biosphere.” It’s quite an unusual situation, which is why I’m opining on it.
Amodei’s metaphor about “a country of 50 million geniuses” is cognitive, but a country also has a metabolism – it consumes energy and materials through infrastructure and supply chains. Many of us only consider datacenters from our virtual tethers to them, but a datacenter is also basically a physical machine plugged into the Earth. It’s made of silicon chips, copper, and cooling systems, and uses water, concrete, and transmission lines. Most of these depend on geopolitically tenuous supply chains, and those source materials are neither infinite nor frictionless to access. We saw silver breach $100 per ounce this week – that alone means silver is now 40% of the cost of a solar panel, as one example. And already our expected copper requirements for future products are way bigger than projected supply.
This is where Amodei’s essay is almost silent: energy, water, materials, and ecological limits. That’s not a minor oversight – it’s the ginormous blind spot. A “country of geniuses” doesn’t float somewhere in space above the biophysical world; it plugs directly into it.
This fact matters for two reasons.
First, it means AI is embedded in the biophysical world. It will compete with other uses of energy, water, land, and industrial capacity. As of now it’s outcompeting other users and pricing them out of access – we’ve already seen this happening in some datacenter hotspots in the U.S.
Second, it means constraints are gonna show up in places narrow-boundary technology people are not always looking: permitting, grid capacity, fuel costs, regional water stress, or the political turmoil from the scarcity of these items. It is already a super complex and fragile Rube Goldberg machine, let alone if this continues to scale.
So even if the cognition of a “nation of geniuses” gets cheaper, the substrate it runs on does not become free or easy – it stays eminently physical.
Wide Boundary: Institutions as the Alignment Layer
Once we admit this is a physical story, the next question is: who steers the buildout and deployment? This is where I think the conversation often gets too narrow. People focus on the alignment of models as if that’s all that matters. It does matter quite a bit, but the larger alignment problem, in my opinion, is societal alignment.
Who is deploying these systems? Under what liability rules, procurement rules, norms, and audit requirements? What consequences are there when they fail? This is one of my recurring themes on this channel, but it matters here more than usual. Most of the real world harm in modern life doesn’t come from any lack of intelligence, it comes from incentive structures, institutional capture, and organizations that can externalize costs while still declaring success and cultural status.
So when we talk about AI safety we really ought to first look at the alignment of our courts, regulators, corporate governance, national security institutions, and the culture of enforcement we have with the incentives that we need. Similar to my point about renewable energy and post-growth, I do not think we are going to transition with energy and materials alone. We will transition, or fail to, through institutions. So just like lithium or rare earths, trust and institutions are kind of “critical inputs” too.
This is why people like Tristan Harris and others have been emphasizing the need to develop agreements to constrain AI: industry agreements, government regulations, or perhaps something like nuclear treaties. Right now, we’re in an arms race with no treaty framework.
One final consequence of institutional misalignment is political polarization. I don’t think AI will stay a technical topic for much longer. It’s going to become an identity issue, just like climate – I can imagine a near future where one side speaks the language of acceleration, competitiveness, and national strength while the other side speaks the language of labor harm, surveillance, and corporate capture. Once that divide happens, the incentive shifts from governing to signaling. The room for nuance collapses…right when nuance is exactly what we need.
Wide Boundary: The Goal Function
Here, I want to voice what I think is the quietest and most important question in the whole conversation, which (bizarrely to me) is rarely asked. The conversations are all super articulate about capability, safety, and governance. But when do we ask: what are we actually optimizing for?
If the default answer is “growth,” “power,” and “advantage,” as it historically has been, then we should be honest about where that leads in a world that is already running close to limits, even before the “nation of geniuses” is fully realized.
As Dennis Meadows said when he was on the TGS podcast, “Tools don’t change the goals, they just amplify the priorities of whoever holds the tool.”
This is where the biosphere comes in. If “progress” keeps meaning more production, extraction, consumption, and competitive throughput, then a supercharged optimization engine will not create a gentle future. It will create a more direct and efficient path to the same cliff we are already slouching toward.
This brings up the point that, if humanity is approaching a phase of technological adolescence, we might also consider our species itself to be in adolescence, as former TGS guest Daniel Schmachtenberger has said. To grow into adults (as a species, and with our tech), we need to gain wisdom. All definitions of wisdom (from every language and knowledge system) have an element of restraint – restraint in ourselves, our species, our tech, and our institutions.
So, I am much less interested in whether AI can raise GDP and much more interested in what we consider to be success. Where are the boundary conditions – ecologically, psychologically, and institutionally?
Said differently, lots of people cheerleading AI are focused on the question, “Can it make us richer?” I think a better question is, “What kind of richness are we even aiming for?” A system that optimizes the wrong objective can perform brilliantly while destroying the things you actually value. Think King Midas meets the Terminator.
Here’s the deeper challenge to Amodei’s essay: he assumes that if we survive “adolescence,” we’ll arrive at “adulthood” – a world of 10-20% annual GDP growth, AI-accelerated scientific progress, and managed abundance.
But what if the “adulthood” he imagines isn’t a viable destination? What if it’s not physically possible? Amodei asks: “How do we survive the adolescence of technology?” I want to ask a different question: What if the “country of geniuses” accelerates us toward limits rather than away from them?
That’s not a minor quibble with his essay. It’s a fundamental challenge to its very premise.
Wide Boundary: Grown, Not Built
There’s a storyline – or maybe it would be better called a fairytale – where AI is a tool, like an engine or a microscope. It makes us more capable. We choose the ends and we steer the outcomes.
There’s another storyline that seems closer to what is actually happening. With AI we are not engineering a device; rather, we’re cultivating a mind-shaped system inside a training process that even the best scientists among us only partially understand. Eliezer Yudkowsky and Nate Soares have a phrase for this: “These systems are grown, not built.” (Check out Nate Soares’ episode on TGS: If Anyone Builds It, Everyone Dies: How Artificial Superintelligence Might Wipe Out Our Entire Species)
I didn’t really understand this until I read their work, because a grown thing can be powerful and competent while still carrying strange drives and totally unexpected failure modes. It can do the task and still break the world around it, because the consequences are not contained by our initial intentions.
So when we say, “We will use AI for good!” it falls pretty flat to me. It is not that good is impossible, but as we are seeing during this civilizational-Superorganism-eating-the-Earth moment, good intentions are not a control mechanism. We’d need real governance, not just hopeful rhetoric.
This is also why the adolescence metaphor from Sagan is so potent. The danger is not malice, or at least not only malice. The danger is the combination of power plus immaturity.
Wide Boundary: The Macroeconomic Trap
Finally, maybe the most uncomfortable wide-boundary observation of all: this might not even be a choice anymore.
At Davos this year, Ken Griffin and others were surfacing what many insiders have already internalized – the dollar and bond markets cannot be stabilized through fiscal discipline, monetary policy, or structural reform. The deficit path is unfixable through normal means, and the implications are brutal: irredeemable debt levels, rates that can’t rise without detonating the treasury, structurally constrained real growth, and shattered political consensus.
So the historical global powers pivot to a new knight in shining armor – AI – as the last viable mechanism to outrun the collapse of sovereign credibility. Not as a nice-to-have technology but as a necessary all-in bet to compress costs, increase the tax base, and defer the Great Simplification.
If AI generates a new surplus curve before the credibility window closes, the system survives in some new form. If it fails, the sovereign structures disintegrate under their biophysically-untenable promises.
This is some speculation by me, but I think it’s plausible. It reframes Amodei’s essay entirely – when he writes about AI risks and the need for governance, he’s writing from inside an industry that has already made the bet. But the Superorganism has now absorbed AI into its cognitive architecture. The halls of power have no choice but to continue on an “AI or bust” path.
Governments – at least the US government – are not betting on AI because it’s transformative or cool. They’re betting on it because in the intermediate term, nothing else is left. Which means when Amodei talks about the risks, he’s not really asking, “Should we do this?” He’s asking, “Given that we’re doing this, how do we survive it?”
That’s a very different conversation. Maybe that’s why his essay reads like a Trojan Horse – a corporate cry for help as much as a roadmap.
Conclusions
Where does that leave me after reading Amodei’s essay? I think his framing is useful. The “country of geniuses” metaphor communicates scale while the adolescence framing communicates the stakes. His refusal to go fully utopian or fully apocalyptic is good, I think. We need to have a grown-up conversation about power.
But widening the boundary changes the texture of this problem. I’m a peak oil, biodiversity, systems guy – but now AI is here, like it or not, and is changing the calculus of all the other things.
Here are a few questions to hold:
Who gets to decide where this goes? A handful of companies? National security agencies? Markets? Or some form of public rulemaking that can actually enforce limits?
Even if we can imagine good uses, do we currently have the incentive structure to get them? Or does the system mostly reward speed, power, and control?
If intelligence becomes super cheap and we have a country of 50 million “geniuses,” what happens to meaning, dignity, and status? What fills the hole where work used to be for millions of people?
If the danger is speed more than evil, how do we buy time? What are the specific levers that might actually slow deployment without pretending we can freeze the world while we figure it out?
Personally, just like Frodo, I wish AI had never happened in my time. But we have now definitely left “the Shire.” AI is here to stay or we’ll hit the Great Simplification trying to build it.
What I’m suggesting you do with this information is to just hold it. Update your mental models and start talking about AI through a biophysical lens (because almost no one else is). Start emphasizing the need to develop agreements to constrain the AI industry – something like nuclear treaties – because right now we’re in an arms race with no framework at all for restraint. This is a conversation that needs to get a lot louder and a lot wider-boundary, very soon.
The bottom line will depend on whether Homo sapiens can grow up fast enough to live with what we are building and have already built. If that sounds like a tall order it’s because it is. But it’s also very high stakes for both our species and the biosphere.
In my heart of hearts I don’t dream of a “country of geniuses.” I actually dream of a country of ecologists – not necessarily the best in their fields, but those who operate from an understanding of humanity’s place within the Earth and what it means to live and pass on that knowledge.
I’d like to close with an oft-used quote from the late ecological giant E.O. Wilson, who I regret never being able to have on the podcast: “The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions, and godlike technology. And it is terrifically dangerous, and it is now approaching a point of crisis overall.”
Back to more normal biophysical macro fare next week. Thanks for reading.
Want to Learn More?
If you would like to see a broad overview of The Great Simplification in 30 minutes, watch our Animated Movie.
You can also find additional resources on our website.
If you want to support The Great Simplification podcast…
The Great Simplification podcast is produced by The Institute for the Study of Energy and Our Future (ISEOF), a 501(c)(3) organization. We want to keep all content completely free to view globally and without ads. If you’d like to support ISEOF and its content via a tax-deductible donation, please use the link below.