Centre for Inquiry Canada


Your humanist community for scientific, skeptical, secular, and rational inquiry


Keith’s Conundrums: “Super” AI

Posted on September 24, 2025 by Critical Links

Keith Douglas

Recap

Last time, we discussed what could be involved in the notion of “generality” in “artificial general intelligence”. Alex wrote to say he enjoyed the article. Thank you, Alex.

This time…

…I propose to discuss the "super" in the terminology "artificial superintelligence". As with "general," there are several distinct possibilities people could, and likely do, have in mind. I address several below.


Speed

This is the most obvious to many people, since traditional computer systems are already very fast at many tasks that might be regarded as requiring intelligence. One of these is arithmetic. Oddly, however, the current approaches to AGI do not expose the parts of computers responsible for arithmetic in ways that allow them to be used effectively. Douglas Hofstadter warned of this, in a way, many years ago, when he coined the saying, "Cogito, ergo I do not have access to the level at which I sum". He meant this both ways: we don't have access to the way in which we are composed of primitive computational elements, and we also don't have access to "our being" (the "sum" in the Latin device of Descartes he is alluding to), meaning precisely how we work – without a lot of investigation, anyhow. The suggestion of the so-called computational theory of mind, and in particular its application to artificial intelligence, is that these two uses are related.
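
A quick illustration of the point about arithmetic (mine, not the author's): a conventional CPU sums millions of integers exactly, in a fraction of a second, something current LLM-style systems cannot do natively without calling out to such hardware.

```python
# Illustrative sketch: exact arithmetic at machine speed.
import time

n = 10_000_000
start = time.perf_counter()
total = sum(range(n))           # ten million exact integer additions
elapsed = time.perf_counter() - start

# Gauss's closed form confirms the result is exact, not approximate.
assert total == n * (n - 1) // 2
print(f"summed {n:,} integers in {elapsed:.3f} s")
```

The contrast is with systems that must predict the digits of a sum token by token, where errors creep in precisely because the arithmetic hardware is not "exposed".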


But pure speed, as such, does not result in overall improvements in skills and abilities, as we noted last time. One needs some other form of organization: traditionally a computer program, but it can instead be a learned sequence of state transitions, i.e., an artificial neural network or one of several other models that acquire their functioning rather than having it imposed. It is important to realize that this sheer speed has been true almost from the beginning of electronic computers.

Task Competence

This seems plausible in some domains; traditionally programmed chess programs are more task-competent than humans. (They play in excess of grandmaster-level chess.) But note that even there, humans can still beat them from time to time. It is unclear to me, at least, why this is; it is, however, also true in the human case – the world chess champion (who for the moment is always a human) does not win even an overwhelming majority of games against other grandmasters.

But it is very unclear what it means to claim superiority in "all domains". Take the intellectual-skills problem we mentioned. A world-renowned chess engine would be absolutely "rubbish" (as the British say) as a hockey goaltender. It is then that we get restrictions to a narrower sense of intellect, one that does not require limbs (at least some – there's sledge hockey for paraplegics, who are of course as intelligent as any human in general). Embodiment is often dismissed as "incidental".

To see that this is hard, imagine an artificial intelligence claimed to be super-intelligent at everything including, say, chemistry. This is where the tacit-knowledge arguments against artificial intelligence apply (e.g., that of Dreyfus). While I do think one can train an artificial intelligence into existence, it appears very likely that we cannot create one from scratch, as we cannot represent the muscle movements, etc., that are needed for, say, laboratory operations. When I was at McGill, one could take some of the introductory chemistry courses without their laboratory component; in some cases it was a distinct course to register for. It always struck me as foolish: one misses competence in chemistry by not watching reactions and encountering certain sorts of mishaps and failures. Granted, one could learn a lot about what those could be "from the book" (I have no desire to encourage synthesis of dioxygen difluoride by rampaging robots). But if the goal is to become as task-competent as humans and then go beyond, it seems one has to (a) make robots in the embodied sense and (b) send them to learn in the lab, too. My father told me that his three years of required industrial arts in high school helped him later in his studies and work in chemistry. Note the timescale here: learning how to do chemistry could (by my lights) only be given a 3x (say) speedup – if one assumes a machine that doesn't sleep or eat and hence works 24 hours a day rather than 8.
I leave as a question: how much of each discipline is subject to this sort of constraint?

Even a discipline like pure mathematics is subject to this, if the great Euler can be believed: he claimed that some of his skill was in his pencil. There is a kinesthetic aspect even to mathematics! (There is one in my professional activity as a penetration tester, too: it is often like exploring a cave or forest, and results in remembering how to "look".)


Parallelism

This is related to speedup, but is not necessarily the same thing. Designing good parallelism is a task that many software developers find awkward, despite the fact that nervous systems are (arguably) parallel systems. (Presumably this is related to Hofstadter's point, mentioned above.) A super-intelligence might be one that can multitask, so to speak, more effectively than we can. To determine whether this is so, we need a way to individuate tasks. For example, take your heart. If we remove your nervous system, we can keep your heart beating, but regulation of its rate is lost (unless we add an artificial pacemaker). So is this a task (or several?) that we do in parallel? I think yes, at least one. From the other end, a large cloud provider like Microsoft's Azure supplies large amounts of computing power to its customers (for a price, of course). Are the tasks run by all those customers somehow parallel to each other? One needs a notion of what counts as the same computer to deal with this. I know of no such metaphysics of computation, though I have contributed in minor ways to this almost non-existent subject. I do believe this is the area that holds the most promise. But as the earlier warning that design matters suggests, it is not sufficient merely to offer more processing (or more of anything else – networking, storage).

“Computes More Functions”

This is the least likely of the four we have addressed so far. There are notional models of computing devices that compute more functions than those computed by Turing machines (or the equivalent models). I studied the analogue neural networks proposed by Hava Siegelmann in her 1997 book; these are, in my view, the most plausible of an implausible bunch. Nobody has any idea how to isolate a system from noise in the way needed to avoid losing all the putative power (and then some); nobody knows how to read off the results once available; etc. (All of this and more is in my MS thesis and the paper I wrote later for the Alan Turing Year.)
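
For context on what "more functions than Turing machines" means, here is the classic diagonal argument (a standard result, not specific to the article): no ordinary program can compute even the halting function, which is exactly the kind of limit hypercomputation proposals aim to exceed.

```python
# Sketch of Turing's diagonal argument: given any claimed halting
# decider, we can build a program that defeats it.
def make_adversary(halts):
    """Return a program that does the opposite of what `halts` predicts."""
    def adversary(prog):
        if halts(prog, prog):    # decider says prog halts on itself...
            while True:          # ...so loop forever instead
                pass
        return "halted"          # decider said "loops", so halt instead
    return adversary

# Any concrete candidate decider is refuted. For example, one that
# always answers "does not halt" is contradicted when the adversary halts:
def never_halts(prog, inp):
    return False

adv = make_adversary(never_halts)
print(adv(adv))  # prints "halted", contradicting never_halts' prediction
```

Hypercomputational models such as Siegelmann's are interesting precisely because they would, if physically realizable, decide questions like this one.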


Super-programmed?

Some people think it possible that the currently hyped systems will become able to modify the program underlying the neural network (for example, the implementation of its learning mechanisms) or to adopt a symbolic subsystem (like the one Dennett thinks we have). To me this is correct insofar as it is "not impossible" (whereas the previous possibility likely is impossible), but it is still unclear how one would do it. One does need speed, for sure – think of the 4 billion years (or more, if you count prebiotic chemistry) of evolution that brought us (or even a simple single-celled intelligence) into existence. (Note that this is a different use of speed than the previous one.) However, "speed" is likely misleading here, as evolution by natural selection is directionless. "Making more intelligent" is a goal; hence velocity, not speed, is the better understanding, in my view. How would this work?


Super-learned?

It is also sometimes "piously" hoped that we can simply get better results through more and more learning on the part of these systems, and that somehow an appropriate sort of emergence will occur. I have suggested that some of what has to be learned is not in books as such, and even the stuff in the written word is going to be interesting to sift through. One example that should be familiar to us Canadians is multilingualism. As children we are often told, say, that this is the French section of the library. But how will mere distinctions between languages be picked up on? I have bilingual volumes, too – notably some facing-page editions of Greek-English and German-English texts (and a few others). I managed to learn (somewhat – my German and Greek are lousy, to say the least) how to pick up the context here, without being explicitly told – nobody said "the Loeb editions have Greek on one side, English on the other". How? This context switch must be done in order to have more intelligence than me when it comes to reading (say) philosophy. What is it about me that allows me to do this? And it has to be done at scale; lots of academics manage reading levels far above mine in such languages and can switch effortlessly. ChatGPT (for example) can be told which language to use when composing replies.

However, I would encourage you to explore (i.e., analyze what is needed to find it funny) a joke I was told years ago by a friend. We were going by Metro to La Cordée, a famous Montreal outdoorsy store, as her parents had given her money for cross-country skis for her birthday. It was late January and I asked: "So, when is your birthday, anyway?" She answered, "February". "What day?" "What do they grow a lot of in Saskatchewan?" Does an "artificial super-intelligence" have to get that joke?


Conclusion

What other meanings of “super” are there? Have I missed any? Next time, I will not address an AI topic. Send me something else, if you can – in the comments (or if you have another way to get a hold of me, use that).



Comments (2) on “Keith’s Conundrums: “Super” AI”

  1. Alex Berljawsky says:
    September 30, 2025 at 11:14 pm

    I would guess that what they grow a lot of in Saskatchewan is wheat, but not being an “artificial super-intelligence”, I have to admit not getting the joke.

    1. Seanna says:
      October 1, 2025 at 12:12 am

      Alex: try saying the answer with a bit of a French accent.

      That joke fits my personal preference for humor, but certainly multilingual puns are not everyone’s cup of tea, and it’s perhaps interesting to consider that a person can “get” a joke like this without enjoying it/finding it funny. I think we are probably a long way from having AIs finding things funny, but it should be possible to evaluate if an AI can “get” a joke.

      (See also this recent paper on AI humour: https://arxiv.org/html/2502.07981v1)

