Centre for Inquiry Canada
Your humanist community for scientific, skeptical, secular, and rational inquiry


Keith’s Conundrums: Puzzle Hunt Solutions and ChatGPT

Posted on April 21, 2023 (updated March 31, 2024) by Critical Links

The solution to the puzzle hunt from the April column is the title of Bunge’s 2004 book: Emergence and Convergence. For those who want to know the specifics:

Answer Key

  1. Ethics
  2. Metaphysics
  3. Ecological economics
  4. Realism
  5. Geology
  6. Empiricism
  7. Negations
  8. Culture
  9. Educational psychology
  10. Anthropology
  11. Neurochemist
  12. Demography
  13. Cell biology
  14. Oncology
  15. Nematology
  16. Verification
  17. Evidence
  18. Rapprochement
  19. Geography
  20. Environmental studies
  21. Nephrology
  22. Conundrum
  23. Equivalence

I hope everyone was able to overlook the typo in the last item: it should have read “be a(n) _____.” And I hope that you all at least got the second-to-last item!

On Recent AI Developments

I am, frankly, worried. Twenty years ago, if you had asked me what my worries were about this level of AI, I would not have included several of the aspects I now find greatly concerning. I will phrase the concerns as critical questions and impressionistic remarks. Although I had given some thought to a few of the aspects below as a student, all of this is post-formal-higher-education stuff for me.

The conundrum around which these all revolve is simply this: What on Earth are we to do? I hope someone can fill in some of the details I omit, though I’d be happy to expand if people ask clarifying questions.

1. Paul Churchland has argued for decades (a good summary is in his 2012 Plato’s Camera) that one advantage of artificial neural networks (ANNs) for the philosophy of mind is that they show, to some degree, that a non-propositional cognitive architecture is nomologically possible. We are now at the point where such systems play chess like alien grandmasters, pass various professional examinations, and are available to millions of humans. Does it matter that they are incomprehensible in how they function? Note also that “explainability,” as the term is sometimes used, does not solve this problem: it is insufficient to merely show that a binary classifier, for example, has certain subspaces that correspond in well-defined ways to the two output states.
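The point can be made concrete with even the simplest network. Below is a minimal sketch (my own illustration, not from Churchland): a one-neuron perceptron trained on logical AND. The learned weights classify perfectly, yet they are just three numbers; nothing in them resembles a propositional rule like “output 1 iff both inputs are 1.”

```python
# A perceptron trained on AND. Its "knowledge" is three opaque numbers,
# not anything sentence-like -- a toy instance of the interpretability worry.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron learning rule on labelled (x1, x2, label) samples."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for x1, x2, label in samples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = label - pred            # -1, 0, or +1
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

AND = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
w1, w2, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

# The classifier is correct on all four cases, but the weights themselves
# are merely coordinates of a separating hyperplane -- the two "subspaces"
# mentioned above -- with no propositional content.
for x1, x2, label in AND:
    assert predict(x1, x2) == label
```

Even here, where we can state the decision boundary exactly, the geometric description does not amount to an explanation in the propositional sense; scaled up to billions of weights, the gap only widens.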

2. Eliezer Yudkowsky has pointed out that if we take an uninterpretable optimizer and train it based on our saying “yay” or “nay” to its classifications, it may, unbeknownst to us, optimize for strange things: e.g., making plausible but false statements about its own limitations or restrictions. How do we train systems in matters where the function to be optimized is discovered as part of the process, rather than created by the implementer? Does it matter that a function is determined solely by its input-output pattern?

This seems parallel to the problem of meritocracy in the political domain: What merit should you optimize? What are the failure modes in easily accessible adjacent state-space trajectories? In the ANN case, and likely also in some of the political cases, we may not know the answer. The ANN case is worse, I think, because the adjacent spaces will likely again run afoul of interpretability problems.

3. Judea Pearl has written (in The Book of Why) that he does not expect to see general AI until we focus on Bayesian networks. These are more explicit about what they are doing and why, and they are useful for understanding something that, since the time of Piaget, has been important in our self-understanding: causation. Is he right? Does that make one more worried about the currently fashionable approaches, or less? Consider that one of the reasons to equip AI with tools for causal modeling is so that it can better act in the world.
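To see what “more explicit” means, here is a minimal three-node Bayesian network of the textbook rain/sprinkler/wet-grass variety (the numbers are illustrative, not from Pearl’s book). Every probability in the model is written down, so the inference is inspectable in a way ANN weights are not.

```python
# A tiny Bayesian network: Rain -> Sprinkler, and (Rain, Sprinkler) -> WetGrass.
# All conditional probabilities are explicit; inference is brute-force enumeration.
import itertools

P_rain = 0.2
P_sprinkler = {True: 0.01, False: 0.4}             # P(sprinkler on | rain)
P_wet = {(True, True): 0.99, (True, False): 0.9,   # P(wet | rain, sprinkler)
         (False, True): 0.9, (False, False): 0.0}

def joint(rain, sprinkler, wet):
    """Joint probability factored along the network's arrows."""
    p = P_rain if rain else 1 - P_rain
    p *= P_sprinkler[rain] if sprinkler else 1 - P_sprinkler[rain]
    p *= P_wet[(rain, sprinkler)] if wet else 1 - P_wet[(rain, sprinkler)]
    return p

# P(rain | grass is wet), by summing out the sprinkler variable.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in itertools.product((True, False), repeat=2))
posterior = num / den
```

Unlike the perceptron’s weights, every step of this computation can be read off the model: which variable influences which, and by how much.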

4. I am currently a cybersecurity professional, and I have seen software problems (in training, in industry discussions, and directly) that make me lose sleep. What happens at the intersection of these hair-raising moments and AI systems? You may want to consider the implications of AI (a) helping proliferate malware, (b) encouraging software developers to adopt sloppy solutions, (c) encouraging students to cheat their way through courses, (d) being exploited itself in various ways, and (e) flooding the industry with scammers, snake oil peddlers, and fraudsters.

5. I was able to get ChatGPT (GPT-3) to espouse, and simultaneously deny, that it is a dialetheist. This raises an interesting question in the philosophy of logic. These large language models (LLMs) have supposedly learned their logic strictly empirically, by generalization from examples in ordinary language. Is this how humans learn theirs?

When logic instructors teach the elementary truth tables of classical logic, they often appeal to ordinary language. They also know that “if” does not correspond directly to the conditional connective, nor conversely. When a human instructor discusses these considerations, does the instructor assume some sort of innateness? Is this just language? Or something else?
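The mismatch the instructor has in mind is easy to exhibit. Here is a short sketch of the classical truth table for the material conditional, whose “paradoxes” are exactly where it parts ways with the ordinary-language “if”:

```python
# The material conditional p -> q of classical logic, defined truth-functionally.
from itertools import product

def implies(p, q):
    """Material conditional: false only when p is true and q is false."""
    return (not p) or q

for p, q in product((True, False), repeat=2):
    print(f"{p!s:5} {q!s:5} {implies(p, q)}")

# The so-called paradoxes of material implication: any conditional with a
# false antecedent comes out true, e.g. "if the moon is cheese, then 2+2=5" --
# a reading ordinary English speakers rarely accept for "if".
assert implies(False, False) is True
```

An LLM trained purely on ordinary usage would, presumably, absorb the everyday “if” rather than this truth-functional one, which sharpens the question of where human students get the classical reading from.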

6. The Sapir-Whorf hypothesis is likely false in its strongest form. (Steven Pinker’s 1994 classic The Language Instinct and his later The Stuff of Thought, from 2007, are good resources on this.) How likely is it that LLMs share the same general principles we do, and hence have enough shared background that we do wind up speaking the same language? What if, instead, we accidentally or maliciously create a multilevel system where the system is really an actor: internally (and in some “off stage” contexts) an alien intelligence play-acting as a human speaker?

I am reminded of Plato’s worry about drama: that actors would become the persons they portray. In modern terms, I think one can put this as a question about virtual machines. Dan Dennett, the philosopher who has done the most to explore the virtual machine idea for understanding ourselves and other potentially intelligent systems, has a classic paper about what we now call dissociative identity disorder (DID; formerly “multiple personality disorder”).

Dennett regards DID as an extreme form of what we are all like: our “selves” are partial in every case. But a virtual machine can, in principle, run another complete machine, not just a partial one, at least if we pretend that all systems are Turing complete (which they really are not). This item is not so much a specific question as an area where I am still just at the Tintin-esque “!?” stage.

Categories: critical thinking, philosophy. Tags: artificial intelligence, critical thinking

