Sandra Dunham
Recently, Elon Musk expressed grave concerns about artificial intelligence (AI). He claims that AI could put everyone out of a job, become “the most disruptive force in history,” and prove more dangerous than nuclear weapons. He has joined the call to pause further development of AI.
Jim Balsillie (formerly of Research in Motion) agrees that AI should be regulated. He reminds us that automobiles can kill people, but we don’t stop using them; we improve the rules about how they are driven. He also notes that there is much to be gained from AI. And he suggests that Musk might have an ulterior motive for raising alarm bells about AI: as the head of X (formerly Twitter), Musk may be using this issue to distract from the harm caused by social media platforms.
AI has been around for decades, but recent advances and accessibility have sparked new dialogue, and the what-ifs are creating concern. For many average Canadians, AI is a scary prospect, and the doomsday predictions about it are terrifying. Our knowledge about the functioning and potential of AI is limited. Government is responding to these concerns with legislation that may not be carefully thought out and risks throwing the proverbial baby out with the bathwater.
As with so many advances, we have little choice but to trust in what experts tell us. However, good critical thinkers might also ask what the motivation of these “experts” is. Are they gaslighting to distract us from other, bigger issues? In this case, is Musk creating terror so that he doesn’t have to give up his profitable algorithms that cause significant mental health issues?
There could be many other reasons for tech giants to sign on to an open letter calling for a pause in AI development. While some of the signatories may truly have concerns about AI, other reasons to sign could be as simple as not wanting to be left off the list of notable signatories or believing that caution is always the correct choice.
My limited understanding of AI does not allow me to know whether it is the biggest threat to humanity. My critical thinking skills tell me that, as with most things in life, AI will have both positive and negative impacts on humanity. I’m just happy to see at least one notable tech expert, Jim Balsillie, caution us against a knee-jerk reaction.

How do we know that this piece wasn’t written by ChatGPT?
By some measures I am a minor expert in a relevant field: my MS work was on understanding the (de)merits of non-standard models of computation, in particular neural networks, and I now work in cyber security after several years in software development. You can judge my credentials yourselves; part of the problem with dealing with expertise is akin to what Plato says in the Meno about learning: if you don’t already know, how will you recognize what you have found?
With that introductory stuff out of the way: the reason *I* am alarmed is that the traditional approaches to cyber security, for example, become largely to completely IMPOSSIBLE once artificial neural networks are in play. This includes even such seemingly simple matters as conventional approaches to software and systems testing. Do we really want to commit to this? Yes, a machine chess champion that plays like no human is neat, and there are many other interesting and useful results, but the trade-off is *really* hard to negotiate. My Conundrums column for this newsletter goes into these matters in more detail over three issues.
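To make the testing point concrete, here is a minimal sketch (my own illustration, not anything from the column; the function names and the toy “model” are hypothetical). A conventional function has a specification we can assert against case by case; a model fit to data only admits aggregate, statistical claims about its behaviour:

```python
# Conventional code: the specification is explicit, so exact assertions work.
def is_valid_port(n: int) -> bool:
    """Deterministic rule: we can state (and test) its full specification."""
    return 0 < n < 65536

assert is_valid_port(443) is True
assert is_valid_port(0) is False
assert is_valid_port(70000) is False

# A "model" that was fit to data rather than written to a spec.
# (A hard-coded linear scorer stands in here for a trained neural network.)
weights = [0.9, -1.3, 0.4]

def model_flags_traffic(features: list[float]) -> bool:
    score = sum(w * x for w, x in zip(weights, features))
    return score > 0.0

# There is no per-input "right answer" to assert against; the best we can do
# is measure aggregate behaviour on labelled samples and accept an error rate,
# which is a statistical claim about past data, not a proof about future inputs.
labelled_samples = [
    ([1.0, 0.2, 0.1], True),
    ([0.1, 0.9, 0.0], False),
    ([0.8, 0.1, 0.5], True),
    ([0.2, 0.7, 0.3], False),
    ([0.3, 0.1, 0.0], False),  # the toy model gets this one wrong
]
accuracy = sum(model_flags_traffic(x) == y for x, y in labelled_samples) / len(labelled_samples)
print(f"accuracy on this tiny sample: {accuracy:.2f}")
```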
That said, yes, some folks might have ulterior motives, but are their arguments plausible anyway? Hypocrites, liars, etc. can *still be correct*. Also, AI may well be the biggest threat to humanity and also show great promise; I sometimes believe this. During the pandemic, before the vaccines were developed, I was very worried that they would have a rather unfavourable profile (quite effective but also quite risky). Thank goodness (sensu Dennett!) they turned out better than that. Part of what I did during the pandemic was read up on what psychologists say about risk perception, knowing it would be professionally relevant to the pandemic response, and because I like reading about human thinking. The upshot, which I already knew informally from my work in cyber security, is that risk perception is incredibly variable. Because people have different risk appetites, imposing a solution may prove very difficult. I agree with the vaccine mandates, more or less (I err on the side of making them more required), but if the vaccines had been far more of a gamble (suppose they lowered transmission by 50% but also killed 1 in 10,000 recipients!) would we have been as willing to impose them? This cuts both ways, and, given that (unlike physicians and pharmacists) computing people are not professionalized, there is not even a weak-sauce equivalent of the Hippocratic Oath for us computing specialists. And so I raise the alarm (here, at work, and in other ways).
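To see why that hypothetical bites, here is a back-of-the-envelope calculation. The 50% reduction and the 1-in-10,000 fatality figures are from the scenario above; the baseline infection risk and infection fatality rate are purely illustrative assumptions, not real epidemiology:

```python
# Back-of-the-envelope sketch of the hypothetical vaccine trade-off.
cohort = 10_000                      # people considering vaccination
baseline_infection_risk = 0.30       # ASSUMED chance of infection over the period
infection_fatality_rate = 0.005      # ASSUMED deaths per infection
vaccine_infection_reduction = 0.50   # from the hypothetical above
vaccine_fatality_risk = 1 / 10_000   # from the hypothetical above

deaths_unvaccinated = cohort * baseline_infection_risk * infection_fatality_rate
deaths_vaccinated = (cohort * baseline_infection_risk * (1 - vaccine_infection_reduction)
                     * infection_fatality_rate
                     + cohort * vaccine_fatality_risk)

print(f"expected deaths without the vaccine: {deaths_unvaccinated:.1f}")  # ~15
print(f"expected deaths with the vaccine:    {deaths_vaccinated:.1f}")    # ~8.5
```

Under these made-up numbers the mandate still wins on expected deaths, but one of the remaining deaths is directly caused rather than merely allowed, and that asymmetry is exactly where risk appetites diverge.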
AI, like any technology from the pre-historic “digging stick” to the modern earth mover (never mind cryptocurrency and algorithmic decision-making), is created with a purpose, and that purpose is almost always the pursuit of control over nature or human nature, i.e., over the external environment or over our fellow humans.
So, strictly speaking, technology is never entirely “value-neutral.”
We would do well, then, to ask: what is the purpose of AI? Or, perhaps better: what problem is AI designed to solve?
We would also do well to ask what AI “is” and what it “does.”
If we inquire even a single step “beneath the surface,” we may begin to understand that AI is something more than we expected in terms of efficiency and utility, but something less in terms of its potential to mimic human discourse and to transform that derivative capacity into the ability to gain consciousness, form intentions, experience emotions, display empathy, and achieve qualitative equality with our species.
I am still waiting for someone in the AI or any other “community” to satisfactorily refute the statement that formed the title of Hubert L. Dreyfus’s article, “Why Computers Must Have Bodies in Order to Be Intelligent” (Review of Metaphysics, September 1967).
Until then, we should regard it as one of the most sophisticated technologies our species has created, but we should not imagine it to be something more (or less) than it is.