A new fault line is running through American life, and it has nothing to do with the usual categories of race, class, or political affiliation — though it is beginning to absorb all of them. It is the divide between those who believe artificial intelligence is the most transformative tool since the printing press and those who think the whole enterprise is somewhere between overblown and genuinely dangerous.
Axios recently put a name to what many have been sensing: AI is sorting the country into three distinct camps — power users, doubters, and resistors. What the headline missed is the deeper story. This is not really a debate about technology. It is a debate about who gets to define reality, who benefits from disruption, and whether the people running the AI revolution have any accountability to the rest of us.
The technical community has its own vocabulary for this divide. Andrej Karpathy — the former Tesla AI director and OpenAI founding member who coined the term “vibe coding” — recently posted on X that AI’s power users and skeptics are “speaking past each other.” His diagnosis is astute: a person who briefly tried a free, outdated version of ChatGPT and found it unreliable is operating in an entirely different experiential universe from an engineer paying for Claude Code or OpenAI’s Codex and watching it solve in minutes what used to take days.
“The thing is that these free and old/deprecated models don’t reflect the capability in the latest round of state of the art agentic models of this year,” Karpathy wrote.
The gap, he explained, is partly technical — AI models have improved most dramatically in coding and mathematics, domains with verifiable right answers, rather than in writing or general search, which are the tasks most ordinary users associate with AI. The result is a bifurcated public: one group laughing at chatbot hallucinations, the other watching in something close to awe as machines solve PhD-level problems autonomously.
But Karpathy’s framing, while technically accurate, contains a buried assumption worth examining. It treats the power-user perspective as epistemically superior — as though the skeptics simply lack information and would come around if they could only afford the premium subscription. This is the classic Silicon Valley condescension dressed in empirical clothing. The ordinary American who watches a company replace customer service workers with a chatbot is not suffering from ignorance. He is suffering from proximity. He has seen the application. He knows who benefited and who didn’t. His skepticism is not a measurement error; it is a data point that the techno-optimists prefer not to model.
The polling confirms this at scale. A recent Fox News survey found that 67% of Americans have serious concerns about AI’s consequences. FGS Global, a polling firm that surveyed 20,000 people across the U.S., U.K., Canada, the European Union, and Japan, found the same pattern in every country: elites are more optimistic about AI and more hostile to regulating it, while non-elites favor oversight and fear job displacement.
Young people in every country surveyed said they see their economic futures as threatened by the technology. A Pew Research Center report found that 56% of AI experts expect positive long-term outcomes for the country — while only 17% of the general public agrees. When the people building a technology and the people living with it disagree by 39 percentage points, that is not a communication problem. That is a legitimacy problem.
What makes this cultural moment particularly revealing is the way the elite AI class has begun to respond to the skeptics. Former AI czar David Sacks — a venture capitalist turned White House official — has been blunt: “The Doomer narratives were wrong.”
Senior policy advisor Sriram Krishnan echoed him, calling the notion of imminent catastrophic AI risk “a distraction and harmful and now effectively proven wrong.” The “doomers” they are dismissing were, in many cases, the researchers and ethicists who raised questions about accountability, job displacement, and the concentration of power in a handful of private labs. Whatever one thinks of their apocalyptic framing, their underlying concerns — about who controls AI, who profits from it, and what happens to workers in its wake — remain entirely unanswered. Declaring them wrong on the existential timeline is not the same as answering those questions.
Meanwhile, OpenAI itself has quietly acknowledged the scale of the disruption ahead. The company released a policy paper it calls “Industrial Policy for the Intelligence Age,” which proposes — among other things — a national wealth fund, a shift of more of the tax burden from labor to capital, and a broad expansion of the social safety net.
Axios, reporting on the document, noted that its proposals resemble Progressive Era and New Deal thinking, and would only become politically viable if AI disruption proved severe enough to scramble existing political coalitions. Read that sentence again. The company building the technology is planning for a future disruption so severe that it might require a political revolution to manage. And the same company is arguing, publicly, that government should not slow it down. The internal contradiction is breathtaking — and almost no one is saying so directly.
The Scripture speaks with precision to this kind of moment. In the book of James, chapter 5, the wealthy are warned: “Ye have heaped treasure together for the last days. Behold, the hire of the labourers who have reaped down your fields, which is of you kept back by fraud, crieth: and the cries of them which have reaped are entered into the ears of the Lord of sabaoth.”
The specific mechanism changes across centuries — field labor, factory work, white-collar employment — but the structure remains the same: productivity gains flow to those at the top, the workers bear the cost of transition, and the powerful spend considerable energy explaining why this is actually good for everyone. AI is not a new story. It is an old story told with new vocabulary.
None of this means artificial intelligence is inherently malevolent or that the skeptics are right about everything. Karpathy is correct that there is a genuine capability gap between what casual users have seen and what professional power users are experiencing. The technology will likely do real good in medicine, scientific research, and fields where human attention is the bottleneck. The question is never whether a technology has benefits. The question is always who captures those benefits, who absorbs the costs, and whether the people making the decisions can be held accountable when they get it wrong.
On all three counts, the current AI moment offers troubling answers. The benefits are flowing to a narrow class of investors and engineers. The costs — job displacement, wage stagnation, the psychological toll of a world reorganizing itself faster than human institutions can adapt — are being distributed broadly and borne disproportionately by those with the fewest options. And accountability is, to put it politely, not the industry’s strong suit.
What Axios captured in its three-camp framework is real, but it is incomplete. This is not simply a story about different levels of familiarity with technology. It is a story about power — about who gets to define what counts as progress, who decides the pace of change, and whether the rest of society has any say in those decisions. The AI elites are not wrong that the technology is transformative. They are wrong to treat that transformation as self-justifying. History has never vindicated the argument that disruption, simply because it is technologically impressive, is therefore good. The workers who built the railroads did not automatically share in the wealth of the Gilded Age. The factory hands of the Industrial Revolution did not thrive because the machines were remarkable. Progress without accountability is just power with better marketing.
The divide Axios describes will not be closed by better onboarding tutorials or cheaper subscriptions. It will only close — if it closes — when the people steering this technology are required to answer to someone other than their investors. Until then, the three camps will keep drifting apart: the power users marveling at what the machines can do, the skeptics watching what the machines are doing, and the resistors concluding that no one at the top is asking the right questions. On that last point, at least, the resistors may have the clearest view of all.