The Gospel According to Andreessen
Silicon Valley has embraced a quasi-religious narrative of technological determinism, complete with its own eschatology, moral framework, and conversion imperative.
In his recent essay “Why AI Will Save the World,” venture capitalist Marc Andreessen delivers what might be the tech industry’s most unabashed sermon on artificial intelligence to date. With billions of his firm’s dollars already invested in AI startups, Andreessen doesn’t merely suggest that AI might improve certain aspects of human existence; he declares it our salvation from humanity’s greatest scourges: climate change, disease, poverty, and economic stagnation.
What follows is a remarkable exercise in Silicon Valley mythmaking that warrants not just scrutiny but dismantling. For beneath its glossy techno-optimism lies a troubling worldview: one that positions legitimate concerns about AI development as moral failings, recasts democratic oversight as elitist oppression, and transforms complex societal questions into simplistic battles between “progress” and “stagnation.” It is a masterclass in false dichotomies delivered with evangelical certainty—and a perfect distillation of everything wrong with Silicon Valley’s approach to technology governance.
There is something distinctly American in Andreessen’s techno-optimism: a fusion of Enlightenment faith in reason with frontier mythology and Silicon Valley’s particular brand of libertarian capitalism. His essay belongs to a long tradition of technological solutionism that has animated American culture since the nineteenth century. From the telegraph to the Internet, each new invention has arrived bearing similar promises of democratization, abundance, and human liberation. Each has delivered genuine benefits alongside unanticipated complications, often entrenching existing power structures rather than disrupting them.
Is AI Skepticism Elitist?
Andreessen’s framing of AI skepticism as primarily an elite phenomenon represents a curious inversion of reality. The venture capital offices of Sand Hill Road, where a small cohort of elite investors determine which technologies merit development, hardly represent a populist counterweight to institutional power. Indeed, they constitute one of the most concentrated forms of private influence in modern society. Silicon Valley’s attempt to position itself as champion of the common person against regulatory overreach requires a historical amnesia about the industry’s own evolution from countercultural outsiders to central pillars of contemporary capitalism.
The essay’s rhetorical architecture relies heavily on false dichotomies. One either embraces AI’s accelerating development without meaningful oversight, or one opposes human progress itself. This Manichaean framing leaves no room for the nuanced middle ground where most serious AI researchers actually operate: cautiously optimistic about potential benefits while mindful of legitimate risks that warrant thoughtful governance. Andreessen’s characterization of AI concerns as “doomerism” conveniently ignores crucial distinctions between apocalyptic scenarios and reasonable considerations about how powerful technologies should be developed, deployed, and governed.
Is AI Regulation Anti-Democratic?
Perhaps most revealing is the essay’s treatment of AI governance. Andreessen presents regulation as fundamentally anti-democratic, an imposition of elite preferences on the general public. This framing conveniently ignores that democratic governance of technology might actually represent the public interest rather than thwart it. The European Union’s AI Act, for instance, focuses primarily on ensuring transparency, accountability, and human oversight in high-risk AI applications—goals that hardly seem aligned with preventing technological progress. But in Andreessen’s framework, any constraint on technological development, regardless of its democratic legitimacy or ethical justification, represents an attack on innovation itself.
The contradiction at the heart of Andreessen’s position becomes apparent when examining his simultaneous claims that AI is revolutionary enough to “save the world” yet somehow not disruptive enough to warrant serious oversight. This paradox reveals what I, and likely many others, see as his essay’s true purpose: not to engage sincerely with the complex questions surrounding AI development but to erect an ideological shield against any resistance to increasingly powerful technologies. Let’s face it: this billionaire wants the money to keep flowing unimpeded, and he wouldn’t want something as pesky as ethics, prudence, or regulation getting in the way.
Are AI Concerns Coming from a ‘Tiny Vocal Minority’?
Andreessen’s dismissal of AI concerns as coming from a “tiny vocal minority” demonstrates a significant empathy gap, an inability or unwillingness to engage with the lived experiences of those most affected by “technological disruption.” Content creators whose work trains AI systems without compensation, knowledge workers facing displacement, and communities dealing with algorithmic discrimination have legitimate concerns that cannot be dismissed as mere resistance to progress. His essay claims concern for the global poor—an absurd proposition—while showing remarkable indifference toward those currently bearing the costs of technological transformation.
What remains conspicuously absent from Andreessen’s vision is any acknowledgment that technologies reflect the values, biases, and power structures of the societies—and the individuals themselves—that create them. Technologies aren’t neutral tools dropped from the heavens; they embody specific perspectives and priorities. The question isn’t whether to develop AI but how to develop it in ways that enhance human flourishing and dignity. By presenting technological development as an autonomous force separate from social, economic, and political contexts, Andreessen obscures the human decisions, value judgments, and power dynamics that shape how technologies are developed and deployed.
Will AI Really ‘Save’ Education?
The history of technological innovation offers a more complex picture than Andreessen’s unalloyed optimism suggests. Previous technologies haven’t eliminated poverty or fundamentally “transformed power structures”; they’ve been absorbed into existing systems, often reinforcing rather than disrupting socioeconomic hierarchies. The Internet promised decentralization and democratization—I recall believing that myself in the 1990s, when I worked as a pioneer in the field (it was also the topic of my very first published article, back in 1995)—but it has evolved toward unprecedented concentration of power and surveillance capability. Social media promised connection but has fragmented public discourse while extracting unprecedented value from user attention. Why would AI naturally follow a different trajectory? Could it be that Andreessen simply doesn’t care about any of these unintended consequences? Might he, in fact, see panopticon surveillance, geopolitical instability, and fragmented discourse as goods?
The irony of Silicon Valley’s current AI discourse is that it recycles tropes from previous technological eras while insisting on AI’s unprecedented novelty. The same promises of abundance, democratization, and liberation that accompanied the personal computer revolution (I was there) and the rise of the Internet (I was there, too) now adorn artificial intelligence. Yet the industry’s track record suggests that new technologies tend to amplify existing power disparities rather than diminish them. They tend to be harnessed by those entities that see ethical considerations as serious inconveniences.
Perhaps what’s most concerning about Andreessen’s manifesto is not its optimism but its certainty, its unwillingness to acknowledge technology’s inherent ambiguity and the necessity of deliberation about its development. Technology’s dual capacity to both liberate and constrain, connect and alienate, solve problems and create new ones requires a more nuanced approach than categorical pronouncements about salvation or doom.
This technological determinism becomes particularly troubling when considered in the context of education. As I have previously argued, the adoption of AI systems in K-12 schools, colleges, and universities is not an inevitability, despite what Silicon Valley prophets might proclaim. The introduction of AI tools in learning environments carries profound implications for cognitive development, student-teacher relationships, and academic integrity. Just because we have developed the technological capability does not mean we are obligated—or even wise—to deploy it throughout our educational institutions. Each application demands careful evaluation of its educational merit, its impact on the teacher-student relationship, and its long-term effects on how young people learn to think, create, and engage with knowledge. But such deliberative processes find no place in Andreessen’s vision, where resistance to technological adoption in any sphere represents not reasoned caution but moral failure. The possibility that educational institutions might serve as spaces where technology is selectively and thoughtfully incorporated rather than uncritically embraced appears entirely outside his conceptual framework.
The question that Andreessen’s essay never satisfactorily addresses is not whether AI will change the world—it undoubtedly will—but who will decide how it changes the world, to what ends, and for whose benefit. These are not technical questions but political ones, concerning power, values, and the kind of society we wish to create. By framing AI development as beyond politics, as a simple matter of acceleration versus stagnation, Silicon Valley’s prophets attempt to place these crucial decisions beyond democratic reach, a maneuver that itself represents a profoundly political act.
In the end, Andreessen’s essay reveals more about Silicon Valley’s self-conception than about artificial intelligence. It demonstrates how thoroughly the industry has embraced a quasi-religious narrative of technological determinism, complete with its own eschatology, moral framework, and conversion imperative. For all its forward-looking rhetoric, this worldview remains surprisingly traditional in its structure: a faith-based system that promises salvation through surrender to forces beyond ordinary human control or understanding. What it offers is not so much a vision of the future as a mythology of the present, one that casts Silicon Valley’s accumulated power not as a historical contingency to be examined but as the natural and inevitable order of things.
Michael S. Rose, a leader in the classical education movement, is the author of The Art of Being Human, Ugly As Sin, and other books. His articles have appeared in dozens of publications, including The Wall Street Journal, Epoch Times, New York Newsday, National Review, and The Dallas Morning News.
When I hear someone claim this or that system is our salvation, I immediately question his motives or his insight.
Thanks, Michael, for a much-needed breath of fresh air. There’s a lot of hyperventilation these days. One of your implicit ideas – that there are only tradeoffs, not solutions – shows the folly of Andreessen’s assertions. In the end, SCIENCE™ doesn’t answer big questions; it only details the tradeoffs. And even that it does only in part.
In this post I’ll steer clear of Paul Kingsnorth’s “Machine” musings of the past three years. While I find them compelling, to the point that I believe them to be fundamentally correct, I’ll not go down that road today. Suffice it to say, That Hideous Strength wisely anticipated today’s situation. Lewis shrewdly gives us neat examples of the different types of characters who are drawn to such ideas.
Please forgive my stream-of-inanity comments. Too many thoughts competing for limited “computing resources.”
As I’ve noted here before, human beings are no more computers than they are front-end loaders. Yes, we compute. And we dig, too. What people like Andreessen call AI is simply (!) exceptionally complex computing. Dangerously, it is so complex that it can become unpredictable in harmful ways. People like Andreessen adopt an I’m-just-much-smarter-than-you-so-shut-up posture, but a posture is all it is. They gin up ideas like “the coming singularity” to intimidate the masses. They may even believe those ideas themselves (see below).
First, AI is not a thing. It’s an umbrella term. Like “cancer,” it covers an extraordinary range of different things. We’d be far better served by a much better typology to describe each distinct manifestation. But the world’s Andreessens wouldn’t be. There’s a great EconTalk podcast from six or seven years ago with Rodney Brooks, who directed MIT’s AI Lab. He deconstructs much of the hype.
Just as a robot need not be a humanoid machine, AI has all sorts of manifestations. Using this blanket term betrays a low-resolution understanding. It lets those of us who know only a little opine with seeming intelligence. It also lets those like Andreessen obfuscate and browbeat their critics.
This “AI-and-Singularity-ing” is akin to the wholly unprovable “Many Worlds Hypothesis.” It’s an agree-or-shut-up ploy.
Second, while we already “benefit” from AI, much of what is touted remains further away than we’re led to believe. When I attended college in the ’70s, we all knew that fusion power was only a few decades away from commercial use. And where are those self-driving cars? They’ll arrive, but not on the timetables touted ten years ago.
Third, let’s turn to Andreessen’s optimism about managing the societal implications of AI. Since I’ve mentioned robots and science fiction, let’s consider vintage ’50s and ’60s sci-fi: Isaac Asimov’s I, Robot. Asimov’s robots were indelibly programmed to follow the Three Laws of Robotics. Much of his writing highlighted how even these three simple rules created extremely disturbing problems and unresolvable contradictions.
I wouldn’t be surprised if we find that lawyers prove to be among the greatest impediments to Andreessen’s Utopia.
Finally, while I share your concerns about the centralization of power, I retain hope. I was listening to a podcast a month or so ago. The guest noted that Gutenberg’s printing press was quickly adopted by “the ruling class” to exercise greater control over society. That worked for decades. Eventually, however, it became the tool of widespread rebellion against those rulers. See, for example, Martin Luther. As a Roman Catholic I may have mixed feelings about the metaphor, but…