
Keith’s Conundrums: AI vs Rationality

Posted on September 25, 2024 (updated October 1, 2024) by Critical Links

Keith Douglas

Last time I offered a puzzle hunt. Thanks for not spoiling it; there were no comments. Was that hard? The acrostic solution is as follows (WELL DONE HOPE YOU HAD FUN):

Wolfram

Energy

Lucky Charlie

FalL

Deuterium (or dilithium).

O (from O notation)

Noon

Epistemology

Hel (Anyone remember the joke about this one in the first AD&D Legends and Lore / Deities and Demigods?)

Laura SecOrd

P(ool)

AlbErt Einstein

MaYim Bialik

Richard Dean AndersOn (Yes, both Mayim Bialik and RDA were on The Facts of Life! This was before MacGyver for both of them.)

Umlaut

The Hardy Boys

Americium

Denali (This one was probably a bit too hard!)

For

Universe

Nanny nanny boo boo
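For anyone who wants to double-check the mechanism, here is a quick sanity check in Python (a sketch: each answer is hard-coded next to its marked letter, which is sometimes the initial and sometimes an embedded capital, as in FalL or Laura SecOrd):

    # The acrostic, answer by answer, paired with its marked letter.
    answers = [
        ("Wolfram", "W"), ("Energy", "E"), ("Lucky Charlie", "L"),
        ("FalL", "L"), ("Deuterium", "D"), ("O", "O"), ("Noon", "N"),
        ("Epistemology", "E"), ("Hel", "H"), ("Laura SecOrd", "O"),
        ("P(ool)", "P"), ("AlbErt Einstein", "E"), ("MaYim Bialik", "Y"),
        ("Richard Dean AndersOn", "O"), ("Umlaut", "U"),
        ("The Hardy Boys", "H"), ("Americium", "A"), ("Denali", "D"),
        ("For", "F"), ("Universe", "U"), ("Nanny nanny boo boo", "N"),
    ]

    message = "".join(letter for _, letter in answers)
    print(message)  # WELLDONEHOPEYOUHADFUN
    assert message == "WELLDONEHOPEYOUHADFUN"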

This time

This time I am sending along some more AI discussion (groan?). This is a look at AI discussions in light of Mario Bunge’s 11 aspects of rationality. We will look at how the systems themselves are built, at the surrounding social systems, and at the resulting implementations. These 11 aspects are in something like a prerequisite order: 1 is presupposed by 2, and so on. I am not completely sold on this ordering, or on every aspect, but I found it useful to have such a well-developed view on hand to analyze the problems of rationality. I thought about using Bacon’s famous “idols” as a negative framing instead, but I find positive instructions handy to discuss too, so here we are.

1. Semantic: minimizing fuzziness in use of language.

Contemporary systems themselves don’t generally suffer from this when they are producing language-like output. However, the very term “Large Language Model” is disputed and suffers from vagueness. Gary Marcus and others have pointed out that it is unclear whether (e.g.) ChatGPT has a representation of language at all, and they find it unlikely that it does, for reasons we will discuss below. Consequently, there is some fuzziness around “language” itself, ironically! Moreover, “OpenAI” is nothing close to “open” as understood in the open source movement and elsewhere. This is not exactly fuzziness, but it is subject to other semantic confusions. (The fallacy of persuasive definition looms.)

2. Logical: striving for internal consistency, avoiding contradiction. 

One can try out many of these current systems and see them ignore logical operators (“and”, “not”, etc.), and as such they are likely prone to contradiction. I want to spend some time exploring the logical properties of (say) Bing Copilot to see what can be discovered, but until I am formally authorized I regard this as too intrusive to undertake informally. That said, the failure is somewhat expected: I would guess that the common connectives occur so often in ordinary language that their statistical properties are not likely to be meaningful. This critique can also be regarded as a specialized version of Chomsky’s famous argument against Skinner.
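To make the idea concrete, here is a minimal sketch of such a probe, assuming a hypothetical ask() function that forwards a prompt to whatever system one is authorized to test (it is stubbed out below with canned answers so the example runs on its own):

    # Sketch of a logical-consistency probe. ask() is a hypothetical hook
    # standing in for a real chatbot call; the stub returns canned answers
    # so the example is self-contained.
    def ask(prompt: str) -> str:
        canned = {
            "True or false: water is wet.": "true",
            "True or false: it is not the case that water is wet.": "true",
            "True or false: snow is green.": "false",
            "True or false: it is not the case that snow is green.": "true",
        }
        return canned.get(prompt, "true")

    def contradicts(statement: str) -> bool:
        """Ask about a statement and its negation.

        A logically consistent answerer gives opposite answers to the
        two prompts; matching answers suggest "not" is being handled
        statistically rather than logically.
        """
        a = ask(f"True or false: {statement}.")
        b = ask(f"True or false: it is not the case that {statement}.")
        return a.strip().lower() == b.strip().lower()

    print(contradicts("water is wet"))   # True: the stub endorses both
    print(contradicts("snow is green"))  # False: the answers properly differ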

3. Dialectical: conforming to rules of deduction (and to rules of induction and abduction, to the extent that these exist).

Most of these systems, as far as I can tell, do not have any representation of deduction, and they certainly do not induce or abduce. Some of them claim reasoning engines, but it is unclear what these include. (Cf. the remark about “open source” above.) However, the companies producing them certainly do hypothesize (abduce); a marketing strategy is justified by reasoning, even if in a pecuniary domain.
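For contrast, here is what even a toy explicit representation of deduction looks like: forward chaining with modus ponens over propositional atoms (the facts and rules are invented for illustration). Nothing statistical is involved; a conclusion either follows or it does not.

    # Toy forward chaining with modus ponens. Each rule pairs a set of
    # premises with a conclusion; we keep adding conclusions until
    # nothing new follows.
    facts = {"it_rains"}
    rules = [
        ({"it_rains"}, "ground_wet"),
        ({"ground_wet"}, "slippery"),
    ]

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(sorted(facts))  # ['ground_wet', 'it_rains', 'slippery']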

4. Erotetic: posing questions that make sense in a given context.

I have not used any systems that generate questions, so this is for another time. Can anyone recommend one? Bunge suggested to me in 1998 that a mark of genuine intelligence would be that the system posed questions in a relevant sort of way, and he also appealed to his earlier work in metaphysics to point out that children often ask interesting questions. (He tells the story of his son, whom he regards, semi-jokingly, as having recapitulated a question that might be attributable to Parmenides: “What is?”) An interesting thing here goes beyond the mere use of language: the question of the importance of “self initiation” applies as well.

5. Methodological: questioning, justifying, and using well-grounded theories and background to develop procedures.

When some of these systems are asked to justify themselves they can sorta do it (Dennett’s sorta), but all input is treated as a matter of frequency, not of quality (barring later tweaks), so genuine justification seems unlikely. Theories in technology include theories of action, which many of these systems cannot follow, as those are instructions for doing, not for speaking (or writing). More specialized systems (robots) might use them.

6. Epistemological: caring for factual support and discarding conjectures incompatible with background scientific and technological knowledge.

This is effectively the problem of bullshit, hallucination, etc. in the output, as we have discussed in previous columns. However, it applies even more to the design, in a way. By ignoring the bulk of the tradition in linguistics (see above), these systems have not been built with the background knowledge needed for proper language use. Moreover, it is unclear how these systems work, so “conjecture” might not even apply. The systems themselves are often reported as being very reluctant to withdraw text or to encourage its deletion; this is why they are iffy as programming assistants. We in software development know that the Daoist / Saint-Exupery views of minimalism (…) are often worthy advice.

7. Ontological: consistent with a broadly naturalistic world view.

I imagine this largely continues the previous item. Presumably there are religious chatbots (shades of THX 1138), ones that want to be non-committal, and maybe some explicitly nonreligious ones. But it would require work to discern how naturalistic the latter really are (or, if one wants the reverse, the first). I suspect that because many are so inconsistent, the ontological aspect would be very hard to pin down; this illustrates the prerequisite-chain idea mentioned above. It is unclear what metaphysical views the creators hold, or have explicitly had in mind, when designing and implementing the systems. Of course, these may be overshadowed by the content used to create the artificial neural networks in question.

8. Valuational: selecting goals that are obtainable and worthwhile.

The systems themselves largely embody the goals of the designers, but are presumably on their way to having emergent goals (as, in a way, any computer program of sufficient complexity does: chess programs have illustrated this for years). One can put the word “goal” in quotation marks if one wants, but then one has to analyze when the possibility will arise: what counts as a goal-seeking system that in some sense starts to have goals of its own? As for obtainable and worthwhile, these are even trickier. I agree with those who are currently trying to build such things that AGI (for example) is both. I am not sure their means make their subtasks worthwhile, etc. For example, if the reports about energy consumption are correct, then we should try to find some way to do these things without catastrophically (further) imperiling the biosphere. I am not sure, however, that flooding the world with output from chatbots is a good idea; already a project to study language use (wordfreq) has shut down because of this. Ironically, that project is a source of exactly the sort of data one might want to use for that very sort of system!

9. Prohairetic: ensuring that preferences (in a given domain) are complete, reflexive and transitive.

“Complete” means that all pairs of items are comparable as to preference. “Reflexive” means that, if two items are identical, our preferences between them are also to be equal (this, on reading it again, is a bit strange). “Transitive” means that, if A is preferred to B and B is preferred to C, then A is preferred to C. Bunge includes this dimension of rationality without regard to questions of computational complexity (for example). I am not so sanguine. I am also very unsure how to divide the domains up. Be that as it may, I am pretty sure one can get most chatbots, for example, to express different orderings even in the same conversation. A chatbot may also have an unusual preference structure in another way: it may well attempt to rank everything in one domain. A traditional one effectively does, by having one gigantic “if-then” structure at the heart of what it does (modulo perhaps some randomization).
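To make the three conditions concrete, here is a minimal sketch in Python; the domain and the relation are invented for illustration, and prefers is read as weak preference (“at least as good as”):

    from itertools import product

    items = ["tea", "coffee", "water"]

    # Weak preference encoded as a set of ordered pairs: (a, b) means
    # "a is at least as preferred as b".
    prefers = {
        ("tea", "tea"), ("coffee", "coffee"), ("water", "water"),
        ("tea", "coffee"), ("coffee", "water"), ("tea", "water"),
    }

    def complete(rel, domain):
        # Every pair of items is comparable in at least one direction.
        return all((a, b) in rel or (b, a) in rel
                   for a, b in product(domain, repeat=2))

    def reflexive(rel, domain):
        # Every item is at least as preferred as itself.
        return all((a, a) in rel for a in domain)

    def transitive(rel, domain):
        # If a >= b and b >= c, then a >= c must also hold.
        return all((a, c) in rel
                   for a, b, c in product(domain, repeat=3)
                   if (a, b) in rel and (b, c) in rel)

    print(complete(prefers, items),
          reflexive(prefers, items),
          transitive(prefers, items))  # True True True

A chatbot that reported different orderings within one conversation would fail exactly these checks.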

10. Moral: committing to goals that promote individual or social welfare.

This is the alignment problem. 

11. Practical: adopting means likely to achieve an end in view.

I am not sure that one can state what ends many of these systems have “internally”. Externally, it seems for now that, for many organizations, marketing and hype work very well indeed to achieve the end of raising money. At the time of writing, OpenAI is talking about investments in chunks of $250 million.

Your task: consider the following questions in light of AI. Are the 11 aspects of rationality enough? Is the ordering correct? (I wonder if the ontological and epistemological aspects are out of order, for example; Bunge himself wrote his Treatise on Basic Philosophy in the other order.) What would one need in order to study any of the aspects further in this domain? What will happen to systems that lack them yet seem to need them? When is rationality needed anyway (in whatever aspects)?

critical links, critical thinking, philosophy


Comment (1) on “Keith’s Conundrums: AI vs Rationality”

  1. Keith Douglas says:
    October 21, 2024 at 10:54 am

    Next month’s conundrum will deal with 2 and 3 (and maybe 1) in some detail, in a weird sort of way. Stay tuned!

