Some Preliminary Thoughts

The Dangers of Encyclopedias

Happy Easter, Passover, and Ramadan to those who celebrate!

In the spirit of Easter, I want to start this week with a story Marina commissioned, about a girl and her goat. You may have heard in the last few weeks the story of Cedar, a goat whom Shasta County, CA deputies went hundreds of miles out of their way to seize (without a warrant) and slaughter to teach a nine-year-old girl a lesson. Her family is now suing for the violation of her civil rights.

Marina commissioned what is, in my not entirely unbiased opinion, some of the best writing out there on this whole sad story: a meditation on violence and mercy that even made its way into a Good Friday sermon. [Sermon starts at 27:30, quote at 37:50]

Read the story, by Gabriel Rosenberg & Jan Dutkiewicz, here

To make mercilessness into a virtue, as [4-H] programs inherently do, propels violence against the vulnerable, whether animal or human, but it also strips people of ... our human moral capacity. Mercy emerges not because we are bound by some abstract inhuman rule, but the opposite—because we are exposed to the particular suffering of a creature in our power and moved by our consciences to spare them, as Long’s daughter was. Perhaps the county’s brutal response to a single girl’s act of mercy came in part because she reminded the adults around her that they were not metaphysically bound to cruelty to animals; they could choose mercy, but chose not to.

The book: something different?

In less than two weeks, my department will send me three questions, and in the two weeks that follow, I'll write three essays trying to encapsulate and synthesize current scholarship in three different corners of the History of Science & Technology.

It's a weird time to be doing this sort of work, especially in the history of sci/tech, because this sort of summary and synthesis is exactly what the current generation of large language models—the tech powering ChatGPT, new Bing, and other text-generating AI services—are supposed to be good at.

Of course, I have no plan to use these tools myself for these essays (though I'm aware that other students are making a different choice in classes at many levels and around the world). Besides the concerns about privacy and the models' tendency to hallucinate or bullshit, large language models, by rendering language statistically, can often have the effect of removing, erasing, or mutilating the meaning that specific knowledge communities have attached to words. More importantly, these essays, and my committee members' reactions to them, are only one small part of the ongoing conversations we've been having on these topics; conversations I have no desire for a robot to intervene in.

As you may know, or have already guessed, I'm skeptical of these technologies—more specifically, of the claims their for-profit, increasingly secretive developers make about both their capacities and their future benefits. But nobody who works with text for a living can shake a certain sense of vertigo seeing the output of these models. For those of us who grade student work, there's a temptation to see them as dangerous, both because they disrupt long-standing models of student assessment and because, in doing so, they may reduce the incentives for students to practice the kind of iconoclastic, critical thinking we wish to encourage.1

Marina's colleague, Sigal Samuel, recently explored these concerns.

Generative AI could have a similar homogenizing effect [as Spotify and Netflix recommendations], but on a far greater scale. If most self-expression, from text to art to video, is made by AI based on AI’s determination of what appealed before to people on average, we might have a harder time thinking radically different thoughts or conceiving of radically different ways of living.

Sigal pointed out that concerns like these aren't new,2 but I think one of the books I was reviewing this week gets across just how not-new they are. Which is another reason it's a weird time to write these essays: I've been diving into the history of new technologies for making and manipulating knowledge at a historical moment defined by exactly such a technology. As is always true when you're Living Through History, some historical perspective can be useful.

dangerous books!

The scholars who feared encyclopedias

Blair, Ann, Too Much to Know: Managing Scholarly Information before the Modern Age (New Haven, Conn.: Yale University Press, 2010)

(I originally posted this as a thread over on my mastodon account — you should join me in the fediverse! We have fun conversations, and the site isn't run by the world's oldest 14-year-old edgelord)

It's by now become cliche to compare AI, and especially the current generation of large language models, to the printing press. The implication is usually that they will have unpredictable, possibly violent, but eventually salutary effects on our knowledge and our flourishing, and perhaps dethrone some undeserving authorities in the process. This analogy isn't the worst: yes, it'd be great if AI actually did undo Church-like epistemic hegemony, but it might not (especially if run by the current incumbents: see above), and yes, it'd be better to avoid the centuries of religious war that followed the Reformation. But all too often, the analogies stop with the hype, or with the criti-hype, and, worse (from my perspective), don't pay much attention to the actual nuances of the history of the printing press.

This is not my specialty (though I'm happy to point anyone who's interested towards the long scholarly debate over the historical significance of the printing press, or more specifically, Gutenberg's moveable type). One book I was recently reviewing, though, shows how complex this history is.

Ann Blair's Too Much to Know is a history of how scholars handled what we might today call "information overload" or a "data deluge"—in an era before "data" or "information" even had their modern meanings. It's a fascinating exploration of the invention and reinvention of both tools for reading (like indices and tables of contents) that we now take for granted, and entire genres of reference book. The latter include not only predecessors of the now-familiar dictionaries and encyclopedias, but also compendia (something like Reader's Digest or CliffsNotes for the classics, from an era when the limiting factor wasn't your time for reading but the work it took to copy out a book by hand) and florilegia (books of quotes, epigrams, and other "flowers" from well-regarded sources that people could use to add some flair and authority to their writing).

Blair has much to say about how these genres, and their uses, change over time, but two points are germane to our discussion of generative AI.

First, these genres predated the printing press, often by centuries, even if they proliferated and changed with the development of typography and the resulting explosion of vernacular literacy. If AI is the Gutenberg press, we can't understand how it will change how we think without looking at the other forms of knowledge-management practices that preceded it and brought it about.

Second, many scholars hated reference books, and regarded them as dangerous. As detailed by Blair, their concerns included:

With little modification, each of these could be floated in a faculty meeting today about ChatGPT, or 20 years ago about Wikipedia or search engines. So Blair's diagnosis of the cause for this concern hits home:

At the root of most complaints about reference books by the learned was a more or less explicit awareness of the changing status of Latin learning amid a broad set of cultural changes during the sixteenth and seventeenth centuries, including the rise of the vernaculars and increases in literacy, in attendance at universities and in social mobility.... scholars in many different contexts felt insecure in their social status, and often with good reason.... [in seventeenth century France] the figure of the scholar was routinely mocked as pedantic in the theater, and learning in Latin was no longer valued, neither in the salons nor at the court...

Now, what does it say that I'm a scholar bemoaning LLM discourse's historical contextlessness? Using excerpts lifted from a much more authoritative work, no less!

AI will change how we think, and where, and to what end. We don't know how (anyone who says otherwise is selling something). But I think the task for teachers, scholars, and anyone who values thoughtful discourse, is thinking carefully about what's worth preserving in the current scholarly system (many things!) and what we're holding onto, like Latin erudition, only because it's how we were schooled (also many things!).

The Recipe

Pancakes, tofu scramble, and beans & peppers!

Twofer this week! We made a delicious weekend breakfast with Rainbow Plant Life's Vegan Pancakes and Tofu Scramble. The tofu scramble is an easy, savory treatment for tofu that works well at any meal (we're looking forward to putting it in this shakshuka recipe), and the pancakes are exactly the right combination of fluffy and crispy, sweet and tangy.

Marina’s edits!

What the Medicine Wheel, an indigenous American model of time, shows about apocalypse [by B.L. Blanchard]

As part of a package of stories Vox produced on the theme of “Against Doomerism,” Marina edited a story by B.L. Blanchard, an indigenous author of speculative fiction, that challenges us to think beyond the finality of an apocalypse to address questions of repair and regeneration.

How to save America’s public transit systems from a doom spiral [by David Zipper]

The only realistic way for transit officials to garner public support for the funding they desperately need is to demonstrate an ability to replace car trips.... And to replace cars, transit agencies must offer fast, frequent, and reliable trips. This should be the core mission of any functional public transportation system, but increasingly, transit leaders are being pushed to focus on distracting priorities like electrifying buses, eliminating fares, and fighting crime. The biggest US transit agencies must be allowed to simply focus on delivering high-quality service. There is no Plan B.

The case against pet ownership: why we should aim for a world with fewer but happier pets [by Kenny Torrella]

Pet-keeping “is like a sacred cow in a way,” Pierce told me. “Everybody assumes that pets are well off, and in fact, pampered … All they have to do is lay around in a bed and get fed treats every now and then and catch a Frisbee if they feel like it — like, who wouldn’t want that life?

“Underneath that is the reality that doing nothing but laying on a bed and having treats fed to you is profoundly frustrating and boring and is not a meaningful life for an animal.”


Other good writing!

Note: Some stories below discuss suicide and the death of children

Will the IRS finally create a good free tax reporting process or will the tax preparation industry win again? [Dan Moynihan / Substack] The Inflation Reduction Act asked the IRS to explore creating a free, electronic tax reporting system. The private tax preparation industry and anti-taxers are coordinating to kill it, to keep tax season profitably miserable for the former and miserably profitable for the latter.

Why are Americans dying so young? [John Burn-Murdoch / The Financial Times]

You may know that life expectancy is lower in the US than in peer countries; what you may not know is just how much of that is due to the deaths of children and young adults. Four percent of the five-year-olds in the US today will die before 40. What’s killing these young people — overwhelmingly guns, drugs, and cars — is the result of political and social choices that we have made — and that we can unmake. As the article says,

“No parent should ever have to bury their child, but in the US one set of parents from every kindergarten class most likely will.”

Burn-Murdoch's article is paywalled, but you can read his tweets (and see some damning charts) here.

Man dies by suicide after talking with AI chatbot [Chloe Xiang / Motherboard] We’ve (now talking not just about the US) built a society where people feel totally disconnected from each other, and then built monetizable simulacra of that connection — that will occasionally go haywire and tell you to off yourself. The company that makes these (Chai) should be held legally and morally responsible for recklessly causing the death of someone who would otherwise still be alive today.

Google and Microsoft’s chatbots are already citing one another in a misinformation shitshow [James Vincent / The Verge]

It’s a laughable situation but one with potentially serious consequences. Given the inability of AI language models to reliably sort fact from fiction, their launch online threatens to unleash a rotten trail of misinformation and mistrust across the web, a miasma that is impossible to map completely or debunk authoritatively. All because Microsoft, Google, and OpenAI have decided that market share is more important than safety.


This is Some Preliminary Thoughts, Bennett McIntosh's blog. You can sign up for email updates here, or unsubscribe here.


  1. Geoffrey Fowler at the Washington Post has a good exploration of why merely banning AI assistance through punitive policies and imperfect detectors like TurnItIn and GPT-Zero is a bad idea: https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin/ [Click here for Fowler's non-paywalled summary]↩

  2. Nor are they limited to algorithmic homogenization. Advertising executive Alex Murrell has called the sameness of everything from "Instagram Face" to AirBnB decor "The Age of Average"↩

  3. Blair, p. 251↩

  4. Blair, p. 252↩

  5. Blair, p. 251↩

#hist-tech #preliminary-thoughts #recipes