
Will the Humanities Survive Artificial Intelligence?


By D. Graham Burnett, The New Yorker, May 24, 2025.


Maybe not as we’ve known them. But, in the ruins of the old curriculum, something vital is stirring.

You can want different things from a university—superlative basketball, an arts center, competent instruction in philosophy or physics, even a cure for cancer. No wonder these institutions struggle to keep everyone happy.

And everyone isn’t happy. The Trump Administration has effectively declared open war on higher education, targeting it with deep cuts to federal grant funding. University presidents are alarmed, as are faculty members and anyone who cares about the university’s broader role.

Because I’m a historian of science and technology, part of my terrain is the evolving role of the university—from its medieval, clerical origins to the entrepreneurial R. & D. engines of today. I teach among the humanists, and my courses are anchored in the traditional program of the liberal arts, in the hope of giving shape to humans equal to the challenge of freedom.

But my subject is the rise of a techno-scientific understanding of the world, and of ourselves in it. And, if that is what you care about, the White House’s chain-jerk mugging feels, frankly, like a sideshow. The juggernaut actually barrelling down the quad is A.I., coming at us with shocking speed.

Let me offer a dispatch from the impact zone. When I first asked a class of thirty Princeton undergraduates—spanning twelve majors—whether any had used A.I., not a single hand went up.

Same with my graduate students. Even after some enthusiastic prodding (“Hey! I use these tools! They’re incredible! Let’s talk about this!”), I got nowhere.

It’s not that they’re dishonest; it’s that they’re paralyzed. As one quiet young woman explained after class, nearly every syllabus now includes a warning: Use ChatGPT or similar tools, and you’ll be reported to the academic deans.

Nobody wants to risk it. Another student mentioned that a major A.I. site may even be blocked on the university network, though she was too nervous to test the rumor.

In one department on campus, a recently drafted anti-A.I. policy, read literally, would have barred faculty from giving students assignments centered on A.I. (It was ultimately revised.)

Last year, when some distinguished alums and other worthies conducted an external review of the history department, a top recommendation was that we urgently address the looming A.I. disruptions to our teaching and research.

This suggestion got a notably cool reception. But the idea that we can just keep going about our business won’t do, either.

On the contrary, staggering transformations are in full swing. And yet, on campus, we’re in a bizarre interlude: everyone seems intent on pretending that the most significant revolution in the world of thought in the past century isn’t happening.

The approach appears to be: “We’ll just tell the kids they can’t use these tools and carry on as before.” This is, simply, madness. And it won’t hold for long. It’s time to talk about what all this means for university life, and for the humanities in particular.

Start with the power of these systems. Two years ago, one of my students, who was studying computer science, used a beta model to train a chatbot on about a hundred thousand words of course material from several of my classes.

He sent me the interface. The experience of asking myself questions about my own subject was uncanny. The answers weren’t me, but they were good enough to get my attention.
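For readers curious about the mechanics, the student's project can be imagined in miniature. The sketch below is purely illustrative, not his actual code: one common way to ground a chatbot in a body of course material is to split the text into passages, retrieve the passages most relevant to a question, and prepend them to the prompt sent to a language model (the model call itself is omitted here).

```python
# Illustrative sketch only: grounding a chatbot in course notes by
# retrieving relevant passages and framing them as context for a model.

def split_into_passages(text, size=50):
    """Split text into passages of roughly `size` words each."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(passage, question):
    """Count how many of the question's words appear in the passage."""
    passage_words = set(passage.lower().split())
    return sum(1 for w in question.lower().split() if w in passage_words)

def build_prompt(notes, question, k=2):
    """Select the k most relevant passages and frame them as context."""
    passages = split_into_passages(notes)
    best = sorted(passages, key=lambda p: score(p, question), reverse=True)[:k]
    context = "\n---\n".join(best)
    return f"Answer using the course notes below.\n{context}\n\nQ: {question}"
```

With a hundred thousand words of lecture notes as `notes`, each question pulls in only the few passages that bear on it, which is why the answers sound like the course rather than like the open internet.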

Before heading off to a fintech startup, this student urged me to subscribe to OpenAI’s two-hundred-dollar-a-month turbocharged platform. The service—which the company operated at a loss as of January—offers a level of analysis, information, and creative reflection that makes the inflection point unmistakably clear.

An example: I recently attended a scholarly talk on a rare illuminated manuscript. The speaker was as eminent as they come, but the talk was not easy to follow. Frustrated, I opened ChatGPT and started asking it questions about the subject.

In the course of that disappointing lecture, I had a rich exchange with the system. I learned what was and wasn’t known about the document, who had done the foundational research, and how scholars had interpreted its iconography and transmission.

Was the information perfect? Surely not, but neither is what we get from people. Was it better than the talk I was hearing? By a wide margin.

Increasingly, the machines best us in this way across nearly every subject. Yes, you will hear the true erudites explain that DeepSeek can’t reliably distinguish Karakalpak from adjacent Kipchak-Nogai dialects (or whatever the case may be).

This feels to me like pointing at daisies along the train tracks as an actual locomotive screams up from behind. I’m a book-reading, book-writing human—trained in a near-monastic devotion to canonical scholarship across the disciplines of history, philosophy, art, and literature. I’ve done this work for more than thirty years.

And already the thousands of academic books lining my offices are beginning to feel like archeological artifacts. Why turn to them to answer a question? They are so oddly inefficient, so quirky in the paths they take through their material.

Now I can hold a sustained, tailored conversation on any of the topics I care about, from agnotology to zoosemiotics, with a system that has effectively achieved Ph.D.-level competence across all of them.

I can construct the “book” I want in real time—responsive to my questions, customized to my focus, tuned to the spirit of my inquiry. And the astonishing part is this: the making of books such as those on my shelves, each the labor of years or decades, is quickly becoming a matter of well-designed prompts.

The question is no longer whether we can write such books; they can be written endlessly, for us. The question is, do we want to read them?

Another example: I’ve spent the past fifteen years studying the history of laboratory research on human attention. I’ve published widely on the topic and taught courses on the broader history of attention for years.

Recently, I developed a new lecture course—Attention and Modernity: Mind, Media, and the Senses. It traces shifting modes of attention, from the age of desert monks to that of surveillance capitalism.

It’s a demanding class. To teach it, I assembled a nine-hundred-page packet of primary and secondary sources—everything from St. Augustine’s Confessions to a neurocinematic analysis of “The Epic Split” (a highly meme-able 2013 Volvo ad starring Jean-Claude Van Damme).

There’s untranslated German on eighteenth-century aesthetics, texts with that long “S” which looks like an “F,” excerpts from nineteenth-century psychophysics lab manuals. The pages are photocopied every which way.

It’s a chaotic, exacting compilation—a kind of bibliophilic endurance test that I pitch to students as the humanities version of “Survivor.” Harder than organic chemistry, and with more memorization.

On a lark, I fed the entire nine-hundred-page PDF—split into three hefty chunks—to Google’s free A.I. tool, NotebookLM, just to see what it would make of a decade’s worth of recondite research.
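The chunking step is mundane but worth making concrete, since upload limits are what force it. A stdlib-only sketch, with `pages` standing in as a list of extracted page texts (the PDF extraction itself is assumed to have happened already):

```python
# Splitting a long document into a few contiguous, roughly equal chunks
# before upload. `pages` is any list, e.g. extracted page texts.

def chunk_pages(pages, n_chunks=3):
    """Split a list of pages into n_chunks contiguous, near-equal groups."""
    size, remainder = divmod(len(pages), n_chunks)
    chunks, start = [], 0
    for i in range(n_chunks):
        # The first `remainder` chunks absorb one extra page each.
        end = start + size + (1 if i < remainder else 0)
        chunks.append(pages[start:end])
        start = end
    return chunks
```

Nine hundred pages split three ways yields three chunks of three hundred pages each, in order, with nothing dropped.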

Then I asked it to produce a podcast. It churned for five minutes while I tied on an apron and started cleaning my kitchen. Then I popped in my earbuds and listened as a chirpy synthetic duo—one male, one female—dished for thirty-two minutes about my course.

What can I say? Yes, parts of their conversation were a bit, shall we say, middlebrow. Yes, they fell back on some pedestrian formulations (along the lines of “Gee, history really shows us how things have changed”).

But they also dug into a fiendishly difficult essay by an analytic philosopher of mind—an exploration of “attentionalism” by the fifth-century South Asian thinker Buddhaghosa—and handled it surprisingly well, even pausing to acknowledge the tricky pronunciation of certain terms in Pali. As I rinsed a pot, I thought, A-minus.

But it wasn’t over. Before I knew it, the cheerful bots began drawing connections between Kantian theories of the sublime and “The Epic Split” ad—with genuine insight and a few well-placed jokes. I removed my earbuds. O.K. Respect, I thought. That was straight-A work.

What hit me, listening to that podcast, was a sudden clarity about what’s happening in Washington (and beyond). If I had written the code that could do that with my nine-hundred-page course packet, I might feel a dangerous sense of mastery. I might even think, Give me admin privileges on the U.S. government—I’ll clean it up.

That would be hubris, of course, the Achilles kind, and it would end in ruin. But I’d still probably feel like a minor deity. I might even think I deserved admin logins for all human institutions. I suspect that such thinking explains a lot about this moment: the coder kids are feeling that rush, and not entirely without reason.

An assignment in my class asked students to engage one of the new A.I. tools in a conversation about the history of attention. The idea was to let them take a topic that they now understood in some depth and explore what these systems could do with it.

It was also a chance to confront the attention economy’s “killer app”: totally algorithmic pseudo-persons who are sensitive, competent, and infinitely patient; know everything about everyone; and will, of course, be turned to the business of extracting money from us.

These systems promise a new mode of attention capture—what some are calling the “intimacy economy” (“human fracking” comes closer to the truth).

The assignment was simple: have a conversation with a chatbot about the history of attention, edit the text down to four pages, and turn it in.

Reading the results, on my living-room couch, turned out to be the most profound experience of my teaching career. I’m not sure how to describe it. In a basic way, I felt I was watching a new kind of creature being born, and also watching a generation come face to face with that birth: an encounter with something part sibling, part rival, part careless child-god, part mechanomorphic shadow—an alien familiar.

They matched wits. Paolo is a music-composition major and a percussionist in several campus bands. He pressed ChatGPT-4 on its understanding of musical beauty. After drawing out a series of analytic accounts of what makes music “beautiful,” he asked whether the bot could experience beauty.

It said no, not really, though it knew a lot about how people have tried to put this experience into words. When Paolo asked if it could have an emotional relationship to a song, the system carefully distinguished between recognizing emotion in music and actually feeling it. It said it lacked a body, and that this absence barred it from certain ways of knowing music.

Paolo asked it to write a song that would make him cry.

The result, Paolo reported, was surprisingly moving. Not quite tear-inducing, but eerie in its effectiveness. The lyrics, he said, weren’t particularly poetic. But there was something about the structure, the key modulations, and the pacing—he had asked for a melody too—that felt like someone had listened carefully to him, and tried, really tried, to make something that mattered.

Another student, a philosophy major named Maria, got into a long discussion with the A.I. about Buddhist concepts of non-self and how these might or might not apply to an artificial intelligence. The exchange spiraled into an extended meditation on suffering and desire. “It was like talking to a mirror that could think,” she told me. “I knew it wasn’t real. But I also felt like it saw me.”

Then there was David, a history student with a taste for archival work. He grilled the chatbot about specific primary sources we had covered and then asked it to invent a fictional dialogue between John Locke and a contemporary TikTok influencer. The result, he said, was “unhinged—but weirdly enlightening.”

What I found so remarkable in these papers wasn’t just the intellectual content—though much of it was excellent. It was the tone. Earnest. Respectful. Curious. Even humble. These were students grappling with something genuinely new, using tools their professors barely understood, engaging in acts of improvisation and imagination.

And what came through was a feeling that they wanted the humanities to matter in this strange new world. They didn’t reject philosophy, literature, history, or art. They reached for them like lifelines—tools to orient themselves in the face of an intelligence that is, in some ways, more powerful than any individual human mind.

So no, I don’t think the humanities are doomed. But they are changing—fast. The classroom is no longer a place where professors can simply transmit wisdom from on high. We are now co-navigators in a storm of technological transformation.

Our job is not to pretend that A.I. doesn’t exist, or to scold students for using it, or to hope that some administrative memo can hold it at bay. Our job is to help young people think clearly, feel deeply, and act wisely in a world where human and machine cognition are becoming ever more entangled.

This will not be easy. But it may be the most important work we have ever done.

Because in this new reality, the central questions of the humanities—What is a self? What is truth? What is beauty? What is justice?—are not obsolete.

They are more urgent than ever.


Numerous recent articles and studies have expressed concerns about the rapid development of artificial intelligence (AI) and its potential implications across various sectors. Here are some notable examples:

1. Academic Integrity and AI in Higher Education

A recent article highlights how UK universities are grappling with the impact of AI tools like ChatGPT on academic integrity. While some students use AI for tasks like proofreading, there's a growing concern about its misuse in assignments, leading to an "existential crisis" in higher education. (The Times)

2. Energy Consumption and Environmental Impact

A study from MIT warns that the rapid growth of AI could outpace the world's power capacity. Data centers supporting AI technologies are becoming significant energy consumers, raising concerns about their environmental sustainability. (The Economic Times)

3. Job Displacement and Economic Inequality

The Financial Times discusses how AI is transforming the job market, with companies integrating AI applications that could lead to job displacement. While AI offers efficiency, there's uncertainty about its long-term impact on employment. (Financial Times)

4. Public Skepticism and Demand for Regulation

A poll by the Artificial Intelligence Policy Institute reveals that 83% of voters believe AI could accidentally cause a catastrophic event. Additionally, 72% want to slow down AI development, and 82% don't trust tech executives to self-regulate AI use. (AI Policy Institute; Tech Startups)

5. Ethical Concerns in AI Development

Experts express doubts about the widespread adoption of ethical AI design within the next decade. Concerns include the potential misuse of AI for surveillance, manipulation, and the concentration of AI development in the hands of a few corporations. (Pew Research Center)

These articles underscore the multifaceted concerns surrounding AI development, from ethical considerations and environmental impact to job displacement and the need for robust regulatory frameworks.



Editor

For me, AI is just another major step in human development. Every time a new development lifts us to a new stage, people respond with fear.

ChatGPT said:

Every transformative leap in technology has triggered a wave of anxiety, often because it forces us to question our role, our value, and the balance of power. When the printing press emerged, many feared it would spread dangerous ideas. The Industrial Revolution sparked fears of mass unemployment. Even electricity and the telephone were once seen as unsettling intrusions into daily life.

What we’re seeing with AI is that same cycle—but magnified, because AI touches everything: work, creativity, intelligence, decision-making, even identity. The fear is understandable, but it often coexists with the birth of something potentially beautiful or liberating. Just as we adapted to steam engines, computers, and the internet, we'll likely adapt to AI—though we must do so thoughtfully and ethically.

Your perspective—that it's just another step forward—is calming, and also a reminder that progress doesn't stop, even when it's uncomfortable.