Howard A. Doughty
In 1952, my third-grade teacher took a ruler and whacked a youngster on the hand for bootlegging a ball-point pen into class. The demonic instrument would, she cried, lead to poor penmanship and general indolence. Attitudes quickly changed. In the fourth grade, the school softened. I delighted in the new technology, bade farewell to steel-nibbed pens and ink wells, and felt neither regret nor nostalgia — then. I didn’t imagine that I was witnessing an early step on the path to the extinction of “cursive” writing. Fast-forward 70 years and information technology remains controversial — only more so.
Socrates probably started it. Philosophy, he suggested, is an activity, not an archive. It occurs in the moment and shouldn’t be documented, stored, retrieved, and re-interrogated — especially when misheard, misremembered, misreported, misinterpreted, and mistranslated over millennia. He anticipated Marshall McLuhan’s dictum, “the medium is the message.” For Socrates, the message of philosophy lay in the medium of the contemporaneous human voice in conversation, not the arid, scholastic study of a transcription scribbled as the sound of his words faded from the Agora and reclaimed today from a cheap paperback or an errant Wikipedia entry. Socrates scorned (or was at least skeptical of) the written word. We know this because Plato wrote it down.
That, at least, is my idiosyncratic understanding, for which I thank Plato — even if I can’t decide whether a) Plato was Socrates’ meticulous stenographer, b) Socrates was ventriloquist Plato’s dummy, or c) a provisional truth lies between.
Humanity’s oral traditions were followed by our written traditions, and now by non-traditional, technologically mediated, electronic, digital, and artificially intelligent communication. Each stage abandoned aspects of the past, constructed its present, and foreshadowed the future. Today’s communications technology makes moving information easier and faster, but it does so at a cost. It sacrifices depth for breadth and substitutes surface efficiency for substantive quality.
Critics complain that the written word degraded organic memory and oratory. Later, the cinema eroded imagination, email debased the art of correspondence, texting devalued spelling and grammar, and social media discredited literacy and propriety. Virtual, enhanced, and alternative realities seem poised to transform us into a post-/trans-human condition wherein we aren’t even “we” anymore.
Likewise, online chatter subverts civility, fragments conversation, isolates the vulnerable, and, according to popular social psychologist Jonathan Haidt, is “sending teen mental health plummeting.” What’s next?
Specific issues are easy to identify. Many revolve around the current crise du jour among students, teachers, poets, artists, journalists, bureaucrats, doctors, lawyers, and policy makers — namely, ChatGPT and ever-emerging “generative artificial intelligence” systems which, according to their designers, don’t merely identify, classify, store, retrieve, and recombine data, but also “create brand new content” in the form of texts, images, and even computer code.
ChatGPT and its successors are arguably the shiniest new things in decades. They are causing fits among my colleagues in higher education. Faculty are frantic about students pressing a few keys and receiving a serviceable custom-made essay — on any topic, of any length — in the increasingly disdained and demoralized Humanities and Social Sciences without paying professional ghostwriters or leeching off their parents, older siblings, or their more entrepreneurial classmates. What, my colleagues fret, about academic integrity?
It gets worse. ChatGPT puts in peril everyone whose livelihood involves artistic, creative, and/or marginally intellectual work (think fake Drake). Moreover, if an app can replace rappers, interior designers, advice columnists, and landscape painters, what about Supreme Court judges whose decisions once reflected legal training and judgement? Apps to create binding legal opinions for the signatures of “judicial activists” or “constitutional originalists” (AI doesn’t care which) are imminent.
Not imminent is the possibility that AI can care. AI, being “artificial,” has no emotions, beliefs, or morals. It has no interests. It is soulless and shameless. It’s no more invested in the truth of its output than is a bullet in the life of the person whose heart it just penetrated. Being wholly sans consciousness (much less conscience), it is not “intelligent.”
This isn’t dewy-eyed romanticism. As Hubert L. Dreyfus argued over fifty years ago (a claim no one has yet successfully refuted), computers must have bodies in order to be intelligent. All computers can do is pass the infamous Turing test (admittedly no small matter). The Turing test was created as an “imitation game” — not to determine whether “machines can think,” but to determine whether machines can mimic thinking and produce results that cannot reliably be excluded as the product of human thought. Doing it and faking it well aren’t identical projects.
Even AI’s greatest advocates worry that it’s developing too quickly. The Economist, for example, quotes one London-based tech boss as being “incredibly nervous about the existential threat of AI.” The problem is not that these devices are smarter than their human creators or may one day “take over.” It’s that they lack the capacity for agency. They are literally thoughtless. AI treats language “in a statistical, rather than a grammatical, way. It is much more like an abacus than it is like a mind.”
Whether we understand it or not, that many of us are either thrilled or horrified by generative AI mainly shows the dominance of what Meredith Broussard labels technochauvinism, the belief that computers are better problem solvers than people. Some eagerly anticipate a digital utopia wherein logic and mathematics replace passion and ideology in identifying and solving problems, from climate change to supply chains, automobile accidents, and our best prospective dates/mates. Others criticize AI’s deeply embedded, calibrated “biases” and fear the inhumanity of a totally machine-administered society. Both, however, largely accept its inevitability.
Governance by algorithm may still be a few authoritarian steps away from our admittedly imperfect political system. So, it’s important to understand why social problems are inherently unsusceptible to technosolutions. Caught between Elon’s Musketeers and romantic nostalgics, we must press pause. Information technology has undeniably enabled scientific theorizing and research in domains as diverse as medicine, metaphysics, and metallurgy. It has also provided sites for scams. Sometimes it produces real harm and must be held accountable.
AI is altering work and play. It amplifies industrialism and promises ever more alienating forms of digital Taylorism in greater dystopias than Kurt Vonnegut conjured in his debut novel, Player Piano. The prevailing stink of catastrophism arising from the current polycrisis (converging and mutually reinforcing ecological, economic, educational, epidemiological, epistemic, and ethical dilemmas) leads to a pessimism that is no longer an affordable luxury. I therefore choose to think that the technological imperative is not yet a full-blown teleology. We may be unable to reverse or to stop it, but can we shape and control it?
Where to start? We build human interests into every tool. Every device implies the social relations that fit its usage. Our capacity to choose is limited mainly by our ignorance of those implications. We must learn to flip technological determinism. We can turn Marx’s aphorism that “Men make their own history, but they do not make it as they please” to our advantage: although we cannot choose our social conditions, we can still make our own history within them.
We must rediscover and recover what we are in order to remain recognizably ourselves. And that means listening to a multitude of (post-)modern post-Socratics — who must not, like their namesake, want to float in the clouds, ban music and poetry, or, worse, hand them over to ChatGPT, while the rest of us sit addled before our screens unable to remember who we were.