Howard A. Doughty
In 1952, my third-grade teacher took a ruler and whacked a youngster on the hand for bootlegging a ball-point pen into class. The demonic instrument would, she cried, lead to poor penmanship and general indolence. Attitudes quickly changed. In the fourth grade, the school softened. I delighted in the new technology, bade farewell to steel-nibbed pens and ink wells, and felt neither regret nor nostalgia — then. I didn’t imagine that I was witnessing an early step on the path to the extinction of “cursive” writing. Fast-forward 70 years and information technology remains controversial — only more so.
Socrates probably started it. Philosophy, he suggested, is an activity, not an archive. It occurs in the moment and shouldn’t be documented, stored, retrieved, and re-interrogated — especially when misheard, misremembered, misreported, misinterpreted, and mistranslated over millennia. He anticipated Marshall McLuhan’s dictum, “the medium is the message.” For Socrates, the message of philosophy lay in the medium of the contemporaneous human voice in conversation, not the arid, scholastic study of a transcription scribbled as the sound of his words faded from the Agora and reclaimed today from a cheap paperback or an errant Wikipedia entry. Socrates scorned (or was at least skeptical of) the written word. We know this because Plato wrote it down.
That, at least, is my idiosyncratic understanding, for which I thank Plato — even if I can’t decide whether a) Plato was Socrates’ meticulous stenographer, b) Socrates was ventriloquist Plato’s dummy, or c) a provisional truth lies between.
Humanity’s oral traditions were followed by our written traditions, and now by non-traditional, technologically mediated, electronic, digital, and artificially intelligent communication. Each stage abandoned aspects of the past, constructed its present, and foreshadowed the future. Today’s communications technology makes moving information easier and faster, but it does so at a cost. It sacrifices depth for breadth and substitutes surface efficiency for substantive quality.
Critics complain that the written word degraded organic memory and oratory. Later, the cinema eroded imagination, email debased the art of correspondence, texting devalued spelling and grammar, and social media discredited literacy and propriety. Virtual, enhanced, and alternative realities seem poised to transform us into a post-/trans-human condition wherein we aren’t even “we” anymore.
Likewise, online chatter subverts civility, fragments conversation, isolates the vulnerable, and, according to popular social psychologist Jonathan Haidt, is “sending teen mental health plummeting.” What’s next?
Specific issues are easy to identify. Many revolve around the current crise du jour among students, teachers, poets, artists, journalists, bureaucrats, doctors, lawyers, and policy makers — namely, ChatGPT and ever-emerging “generative artificial intelligence” systems which, according to their designers, don’t merely identify, classify, store, retrieve, and recombine data, but also “create brand new content” in the form of texts, images, and even computer code.
ChatGPT and its successors are arguably the shiniest new things in decades. They are causing fits among my colleagues in higher education. Faculty are frantic about students pressing a few keys and receiving a serviceable custom-made essay — on any topic, of any length — in the increasingly disdained and demoralized Humanities and Social Sciences without paying professional ghostwriters or leeching off their parents, older siblings, or their more entrepreneurial classmates. What, my colleagues fret, about academic integrity?
It gets worse. ChatGPT puts all people whose livelihood involves artistic, creative, and/or marginally intellectual work in peril (think fake Drake). Moreover, if an app can replace rappers, interior designers, advice columnists, and landscape painters, what about Supreme Court judges whose decisions once reflected legal training and judgement? Apps to create binding legal opinions for the signatures of “judicial activists” or “constitutional originalists” (AI doesn’t care which) are imminent.
Not imminent is the possibility that AI can care. AI, being “artificial,” has no emotions, beliefs, or morals. It has no interests. It is soulless and shameless. It’s no more invested in the truth of its output than is a bullet in the life of the person whose heart it just penetrated. Being wholly sans consciousness (much less conscience), it is not “intelligent.”
This isn’t dewy-eyed romanticism. As Hubert L. Dreyfus argued over fifty years ago (a claim no one has yet successfully refuted), computers must have bodies in order to be intelligent. All computers can do is pass the infamous Turing test (admittedly no small matter). The Turing test was created as an “imitation game” — not to determine whether “machines can think,” but to determine whether machines can mimic thinking and produce results that cannot reliably be excluded as the product of human thought. Doing it and faking it well aren’t identical projects.
Even AI’s greatest advocates worry that it’s developing too quickly. The Economist, for example, quotes one London-based tech boss as being “incredibly nervous about the existential threat of AI.” The problem is not that these devices are smarter than their human creators or may one day “take over.” It’s that they lack the capacity for agency. They are literally thoughtless. AI treats language “in a statistical, rather than a grammatical, way. It is much more like an abacus than it is like a mind.”
Whether we understand it or not, the fact that many of us are either thrilled or horrified by generative AI mainly shows the dominance of what Meredith Broussard labels technochauvinism, the belief that computers are better problem solvers than people. Some eagerly anticipate a digital Utopia wherein logic and mathematics replace passion and ideology in identifying and solving problems, from climate change to supply chains, automobile accidents, and our best prospective dates/mates. Others criticize AI’s deeply embedded, calibrated “biases” and fear the inhumanity of a totally machine-administered society. Both, however, largely accept its inevitability.
Governance by algorithm may still be a few authoritarian steps away from our admittedly imperfect political system. So, it’s important to understand why social problems are inherently unsusceptible to technosolutions. Caught between Elon’s Musketeers and romantic nostalgics, we must press pause. Information technology has undeniably enabled scientific theorizing and research in domains as diverse as medicine, metaphysics, and metallurgy. It has also provided sites for scams. Sometimes it produces real harm and must be held accountable.
AI is altering work and play. It amplifies industrialism and promises ever more alienating forms of digital Taylorism in greater dystopias than Kurt Vonnegut conjured in his debut novel, Player Piano. The prevailing stink of catastrophism arising from the current polycrisis (converging and mutually reinforcing ecological, economic, educational, epidemiological, epistemic, and ethical dilemmas) leads to a pessimism that is no longer an affordable luxury. I therefore choose to think that the technological imperative is not yet a full-blown teleology. We may be unable to reverse or to stop it, but can we shape and control it?
Where to start? We build human interests into every tool. Every device implies the social relations that fit its usage. Our capacity to choose is limited mainly by our ignorance of those implications. We must learn to flip technological determinism. We can invert the emphasis of Marx’s aphorism that “Men make their own history, but they do not make it as they please,” and say that although we cannot choose our social conditions, we can still make our own history within them.
We must rediscover and recover what we are in order to remain recognizably ourselves. And that means listening to a multitude of (post-)modern post-Socratics — who must not, like their namesake, want to float in the clouds, ban music and poetry, or, worse, hand them over to ChatGPT, while the rest of us sit addled before our screens unable to remember who we were.
“The Turing test was created as an ‘imitation game’ — not to determine whether ‘machines can think,’ but to determine whether machines can mimic thinking and produce results that cannot reliably be excluded as the product of human thought.”
Dreyfus aside, this is a contentious remark. I won’t rehearse the literature on the Turing test, but a note for those who have only seen the movie of the quoted title: as something of a Turing scholar, I was so outraged by the trailer’s historical inaccuracies that I have yet to see the film itself, so admittedly I may be missing something. 🙂 For those who are interested, start with Andrew Hodges’s wonderful biography.
Note that the slip from “need a body” to “therefore the Turing test is a sham” is fallacious; what’s the connection? (I think Turing himself would have said one needs a body to pass the test!) Also, Dreyfus seems to think that one has to *program* computers to do anything; the whole point of ANNs (artificial neural networks) and other non-propositional approaches is that they are not programmed but trained. The host for the learned virtual machine is programmed, but that’s a cost saving: imagine doing it all in hardware! There are some hints on how to do the latter, by the way, in the little book _Turing’s Connectionism_ (it was the author’s MSc thesis).
I thank Mr. Douglas for his remarks.
Just a couple of minor reactions/cautions:
1. I saw both the trailer and the movie itself, which, I can assure you, was even more troubling. I was likewise appalled, for pretty much the same reasons you cite, but I did not rely on the film for my concerns about AI; on the other hand, I have never confused motion picture entertainment with actual history or legitimate documentary films;
2. No one (certainly not I) said “need body … therefore the TT is a sham”; the point is that being able to mimic “intelligence” and “thinking” and so on is just not what some people seem to think it is … it is a simulation, a simulacrum, a clever computational “as if” (which is not nothing, and can be fabulously helpful in prompting all sorts of scientific research … especially in medicine);
3. As for “Dreyfus aside,” I don’t think the essence of his remarks can be so easily finessed. Alas, Dreyfus passed away a little over six years ago and cannot speak for himself.
I enjoyed Howard Doughty’s article. I wonder if ChatGPT, like most other new technology, will reach a saturation point where the damage it does far outweighs the good. Once the pain is felt by the public, ChatGPT might be defeated, and we will all go back to writing with pen nibs dipped in inkwells.
Although I do not shrink from the label “Luddite,” I also have no desire to return to nibbed pens and inkwells … nor do I suspect that ChatGPT and its successors will be “defeated.” Rarely are technological innovations that enjoy great popular/commercial success wholly reversed (though they may be redirected and/or transformed into something even more creative/disruptive).
Perhaps we will see the spark of “consciousness” come when computers pass the “Bateson Test.” My old mentor, Gregory Bateson (1904-1980), once imagined a man asking a computer: “Do you compute that you will ever think like a human being?” To which the computer responded: “That reminds me of a story …” [Gregory Bateson. (1979). “Mind and Nature: A Necessary Unity.” New York: E. P. Dutton, p. 13.]
Of course, the Bateson Test is no longer any good, because AI can simply search the pertinent literature and recycle the “correct” answer.
Incidentally, Arthur Kroker has made some useful comments on such matters. See, for example, Arthur & Marilouise Kroker. (1996). “Hacking the Future: Stories for the Flesh-Eating 90s.” Montreal: New World Perspectives.
Or consider Max Weber, who in 1904 had already summed up the 20th century as one of “mechanized petrification, embellished with a sort of convulsive self-importance … ‘specialists without spirit, sensualists without heart, this nullity imagines it has attained a level of civilization never before achieved.’” (“The Protestant Ethic and the Spirit of Capitalism.” New York: Scribners, p. 181.)
I digress …