A friend sent me Nate Silver’s recent post “It’s time to come to grips with AI” and asked my thoughts (thanks man, you know who you are, glad you’re interested in what I think!)

Briefly, Silver, riffing on the Village/River dichotomy in his recent book On The Edge,1 thinks that his tribe, the River people, have a better eye for the vast potential upside and downside of AI than does a group he variously identifies as the “Hipster Skeptics,” the Village, and the Left. His general conclusion is that everyone should accept the possibility that AI has vast upside and downside risk, so that we can have serious, pluralist conversations about how to respond; I don’t really disagree with that. But I think Silver has the wrong idea of what that conversation ought to look like.

a robot betwixt river and village

There are a few issues with Silver’s analysis. First, some of them are downstream (sorry, river pun) of Silver not having an encyclopaedic understanding of who’s writing on the left about tech. This is not to fault Silver—the “Left” is broad, politically marginalized, and fractured. Many of its members don’t spend much time in Silver’s haunts, Twitter and Substack, and the ones that do are a non-random selection. And I wouldn’t claim to have an encyclopaedic understanding of what left thinking on AI looks like either. Related to this problem is the over-broad generalization inherent in Silver’s River/Village dichotomy. I think he’d grant that it’s only a rough model, and that’s fine, but if you’re going to use a rough model you should have some understanding of its limitations. The worst problem, though, is a failure of imagination about what potential political responses to AI could be. If Silver wants pluralism, it helps to have some sense of what’s already there.

Missing the Left for the Hipsters

Most of Silver’s vitriol is directed at a particular brand of leftist (and it’s important that it’s only a particular brand of leftist) that dismisses AI as both harmful and useless. He calls these the Hipster Skeptics, which is a nice characterization. Hipster Skeptics think they are too cool for AI and the nerds building it, that it’s useless and a scam, and that it will eventually collapse under its own weight. They’re the kind of people who were posting delightedly today about a stock market crash in the wake of DeepSeek’s new models, never mind that, while, yes, the over-valued NVDA lost about 17% of its market cap, the S&P 500 lost only about a point and a half percent, and even the tech-heavy Nasdaq Composite was down only about 3 percent — more of a sorely needed correction than a crash (if the markets continue to plummet, I may eat my words, but the Hipster Skeptics have no better crystal balls than I do).

I suspect, though I’m not going to spend time verifying this theory, that Silver sees a lot of this particular brand of leftist because he spends a lot of time on Substack and Twitter, which (1) have been abandoned by many other brands of leftist since Elon Musk bought Twitter and Substack caught flak for not only platforming but also revenue-sharing with Nazis2 and (2) amplify the kinds of contrarian, smarter-than-thou takes that the Hipster Skeptics have.

I think that this kind of critique has a valuable insight at its core — there are going to be some sort of material limits on the capabilities of AI, these limits are often ignored by AI marketers and VCs if not the researchers themselves, its boosters do have a penchant for acting more like stage magicians than scientists, and much of the marketing of AI has harmful cultural effects that are worth countering with shitposts. But we don’t know where those material limits are, or how far AI companies, or the AIs themselves, if and when they can be said to be independently agentic, will go to overcome those limits by eliminating human competitors. So the Hipster Skeptic angle on AI could be qualitatively 100% correct, but miss the mark quantitatively, and we’re still facing paperclip maximizer or Skynet or gray goo scenarios. I’m not going to put all my eggs in that basket, and I don’t fault Silver for not wanting to either.

Where Silver goes wrong is in treating this, implicitly at least, as the entirety of the Left’s/the Village’s thoughts on AI. This is weird, because the Left and the Village are overlapping but not coterminous. The CHIPS Act, which takes seriously a lot of arguments that I think Silver would agree with,3 had a lot of River backers — practically the entire Democratic party, for instance. The academic administrators and tech folks at my university—Villagers if ever there were, I’d wager—have signed a contract giving us all privacy-preserving access to Copilot (most students either don’t know, or prefer handing their data over to ChatGPT for some reason). And I know of plenty of folks on the left who are experimenting with AI in just the ways that Silver wants us to—sometimes as part of their political project (everything from using ChatGPT to write copy for flyers to making Anarchist GPT (“I’ve read The Anarchist’s Cookbook”)). Now, the left is, as stated above, a rather broad designation. (DeepSeek censoring information on Tiananmen Square and a self-professed anarchist posting Anarchist GPT could possibly both call themselves “Left,” but those are, to put it mildly, quite distinct uses of even one AI technology, LLMs.)

So where is the left on AI?

But this doesn’t begin to approach the variety of leftist thinking and grappling with AI. Even just considering left to be “left of Liberal” rather than the broader “left of center”:

  • Paris Marx, for instance, takes very seriously the potential for AI to remake our economy, but his response, like any good Luddite’s, is to look for political solutions outside of AI that undermine its material foundations and cultural power.
  • If you cringed at my positive use of Luddite there, you might be interested in Blood in the Machine, by Brian Merchant, both a blog and a book that just might give you a new appreciation for the sophistication and prescience of the Luddites’ analysis.
  • Rob Horning writes deeply and thoughtfully about the cultural distortions not only of LLMs and image generators but also of algorithmic recommendation and any number of technological systems that might’ve been called AI years ago, but aren’t now.
  • Dave Karpf, whose review of On The Edge I recommend (unedited on Bluesky or edited in Foreign Policy), has been writing materialist, historically informed critiques of technology for quite some time.
  • Last but emphatically not least, no recommendation I could give to read LM Sacasas, especially his recommendations here, would be strong enough:

If you were to ask me something like “What’s the most urgent task before us?” or “What counsel do you have to offer in this cultural moment?” I would say this:

Resist the enclosure of the human psyche.

Many of these folks are on Substack! So maybe I’m wrong: maybe this isn’t a problem of platform-level sorting so much as of Silver reading the wrong Substackers (or only the most hipster takes from the right Substackers), while the actual political responses the left proposes to AI are systematically excluded from the conversation.

A common thread in many of these analyses is that AI is continuous with other capitalist attempts to alienate us from our labor by rearranging the conditions of work. They see the Nigerian clickworker performing RLHF and the teacher laid off because her boss (wrongly) believes machines can do it better as continuous with each other, and continuous also with both the smith made redundant by 17th-century automation and the machinist who ran it.

Many see this as something to be resisted everywhere and anywhere it’s found. Consumer-focused chatbots use our responses for training, and aim to train us to rely on them, the detrimental effects of which countless teaching professionals have already seen. This makes it hard for anyone who genuinely believes in fighting this to use these tools in good faith. And it’s true, this probably does end up meaning that they’re less familiar with the capabilities and uses of the models as they exist today. But over-familiarity can be a problem too. Remember, these are tools built to imitate a helpful, intelligent assistant chatting with you — would it really be that surprising if people who use them a lot find them more helpful and intelligent than they actually are? (To the extent that this is even measurable.)

I don’t think it’s particularly helpful to refuse to engage with the actual capabilities of AI. But a truly pluralist discussion on our AI future would understand that much of this discussion is at least somewhat dependent on matters outside of the capabilities of the models themselves.

As Vox’s Kelsey Piper, hardly a Hipster Skeptic and probably not a Villager at all, wrote last week:

Mass automation has happened before, at the start of the Industrial Revolution, and some people sincerely expect that in the long run it’ll be a good thing for society. (My take: that really, really depends on whether we have a plan to maintain democratic accountability and adequate oversight, and to share the benefits of the alarming new sci-fi world. Right now, we absolutely don’t have that, so I’m not cheering the prospect of being automated.)

Note that Piper isn’t saying “Don’t deploy AI until everyone understands what the models are doing”; she’s saying we need work happening outside the models to prepare for them. That’s not a conversation that only AI power-users can participate in. As the writers of AI Snake Oil remind us, “AI safety is not a model property.”

Pluralism beyond model properties

For many on the left, democratic accountability and benefit sharing can—must—be achieved through the same things they’ve been asking for for years. Communal control of the means of production (whether that means worker-, state-, co-op-like ownership, or even just state regulation), redistribution of wealth, etc. Making that possible, in this view, would require puncturing both the material power of the AI oligarchs and the cultural power that their aura of inevitability gives them. (This even explains the hipsters that Silver finds so frustrating—shitposting against the telos of AI can be framed as a necessary part of this fight, especially if you’re defending essential cultural production that AI is telling us should be automated). Silver may not like this, he may think it sounds like the same old tired critiques the left has been trotting out for centuries. The left would agree—but for them, that’s evidence that they’re right. Correct. Whatever.

In short, the kind of AI pluralism Silver is calling for requires not only that the Left not reduce itself to hipster skepticism of AI, but also that all critiques of the cultural and economic transformations that AI heralds, and potential political responses—including those from the left—remain on the table.

The Recipe

Baked Tofu With Peanut Sauce and Coconut-Lime Rice

For new readers: I include a vegan recipe on every post. You can browse past posts by recipe here.


This recipe, from Yewande Komolafe at the NY Times, is a bit of a liar—it takes far more than 25 minutes. But it’s well worth it, a consistent crowd-pleaser.


This is Some Preliminary Thoughts, Bennett McIntosh’s blog. You can sign up for updates via email or RSS, or unsubscribe here.

  1. Out last year from Penguin. I recommend this review by Dave Karpf on Bluesky or in Foreign Policy. 

  2. Substack says they don’t want to moderate content, and that they’re a platform, not a publisher, but they do, or at least did, some curation in the form of inviting a select group of writers to write for a guaranteed income when they were starting up. Every so-called platform moderates content, what we’re haggling about is how much. You can decide how you feel about that. I’m posting this on Substack, but the full post is elsewhere, so take from that what you will. 

  3. The world-changing potential of AI, the need to keep that potential under the control of market democracies…