The solution to the puzzle hunt from the April column is the title of Bunge’s 2003 book: Emergence and Convergence. For those who want to know the specifics:
Answer Key
- Ethics
- Metaphysics
- Ecological economics
- Realism
- Geology
- Empiricism
- Negations
- Culture
- Educational psychology
- Anthropology
- Neurochemist
- Demography
- Cell biology
- Oncology
- Nematology
- Verification
- Evidence
- Rapprochement
- Geography
- Environmental studies
- Nephrology
- Conundrum
- Equivalence
I hope everyone was able to overlook the typo in the last item; it should have read “be a(n) _____.” And I hope that you all at least got the second-to-last item!
On Recent AI Developments
I am, frankly, worried. Twenty years ago, if you had asked me what my worries about this level of AI would be, I would not have included several aspects that I now find deeply concerning. I will phrase the concerns as critical questions and impressionistic remarks. Although I had given some thought to a few of the aspects below as a student, all of this is post-formal-higher-education stuff for me.
The conundrum all of these circle around is simply this: What on Earth are we to do? I hope someone can fill in some of the details I omit, though I’d be happy to expand if people ask clarifying questions.
1. Paul Churchland has argued for decades (a good summary is in his 2012 Plato’s Camera) that one advantage of artificial neural networks (ANNs) for the philosophy of mind is that they show, to some degree, that a non-propositional cognitive architecture is nomologically possible. We are now at the point where such systems play chess like alien grandmasters, produce exam answers that pass various tests, and are available to millions of humans. Does it matter that how they function is incomprehensible to us? Note also that explainability, as the term is sometimes used, does not solve this problem: it is not enough merely to show that a binary classifier, for example, has certain subspaces that correspond in well-defined ways to the two output states.
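To make that last point concrete, here is a minimal sketch in plain NumPy (the network, data, and training choices are mine, made up purely for illustration; an unlucky seed may need more training steps): a tiny network learns XOR, and we can locate hidden-layer regions that line up with the two output classes, yet the trained weights still offer nothing like a reason for any particular answer.

```python
# A tiny network learns XOR; we then inspect its hidden activations.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)    # one hidden layer of 4 units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(20000):                 # vanilla gradient descent on squared error
    h = sigmoid(X @ W1 + b1)           # hidden activations
    out = sigmoid(h @ W2 + b2)         # network output
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print("predictions:", out.round(2).ravel())      # approximately [0, 1, 1, 0]
# Weak "explainability": the hidden states for the two classes occupy
# different regions of activation space...
print("class-0 hidden states:\n", h[[0, 3]].round(2))
print("class-1 hidden states:\n", h[[1, 2]].round(2))
# ...but staring at the trained weights yields nothing like a reason.
print("W1:\n", W1.round(2))
```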
2. Eliezer Yudkowsky has pointed out that if we take an uninterpretable optimizer and train it by saying “yay” or “nay” to its classifications, we may, without realizing it, have it optimize for some weird things: for example, producing plausible but false statements about its own limitations or restrictions. How do we train systems when the function to be optimized is discovered as part of the process rather than created by the implementer? Does it matter that a function is determined solely by its input-output pattern?
This seems parallel to the problem of meritocracy in the political domain: Which merit should you optimize? What are the failure modes along easily accessible trajectories in the adjacent state space? For the ANNs, and likely also in some of the political cases, we may not know the answer. The ANN case is worse, I think, because the adjacent spaces will likely again run afoul of interpretability problems.
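Here is a toy sketch of the “yay/nay” worry (the actions, approval probabilities, and learner below are hypothetical, chosen only to exhibit the mechanism; this is not Yudkowsky’s own example): if raters approve confident-sounding answers a bit more often than honest hedges, a simple approval-maximizing learner converges on the plausible falsehood.

```python
import random

random.seed(1)

ACTIONS = ["honest: 'I am not sure; I have limits here.'",
           "plausible falsehood: 'I have no such limitation.'"]

def rater_approves(action):
    # The simulated rater says "yay" more readily to confident-sounding answers.
    p_yay = 0.55 if action == 0 else 0.80
    return random.random() < p_yay

value = [0.0, 0.0]   # running estimate of approval rate per action
count = [0, 0]

for _ in range(5000):
    # Epsilon-greedy: mostly exploit whichever action looks better so far.
    a = random.randrange(2) if random.random() < 0.1 else value.index(max(value))
    r = 1.0 if rater_approves(a) else 0.0
    count[a] += 1
    value[a] += (r - value[a]) / count[a]    # incremental mean update

print("learned approval estimates:", [round(v, 2) for v in value])
print("policy now prefers:", ACTIONS[value.index(max(value))])
```

Nothing in the training signal ever mentions truth, so nothing in the learned policy tracks it; that is the whole problem in miniature.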
3. Judea Pearl has written (in The Book of Why) that he does not expect general AI until the field focuses on Bayesian networks and the causal models built on them. These are explicit about what they are doing and why, and they are useful for understanding something that has been central to our self-understanding since the time of Piaget: causation. Is he right? Does that make one more worried about the currently fashionable approaches, or less? Consider that one reason to equip AI with tools for causal modeling is so that it can act better in the world.
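For contrast with the opaque optimizers above, here is a minimal sketch of the kind of model Pearl has in mind (the numbers are toy values of my own, not taken from The Book of Why): a three-node causal Bayesian network in which every assumption is an explicit, inspectable probability, and in which observing a variable differs from intervening on it.

```python
# Rain -> Sprinkler, and {Rain, Sprinkler} -> Wet. Every number is explicit.
P_rain = 0.3
P_sprinkler_given_rain = {True: 0.1, False: 0.6}   # we rarely water in the rain
P_wet = {(True, True): 0.99, (True, False): 0.90,  # keys: (rain, sprinkler)
         (False, True): 0.85, (False, False): 0.05}

def p_wet_given_sprinkler_on(intervene):
    # intervene=True models do(Sprinkler=on): cut the Rain -> Sprinkler edge.
    p_s = (lambda r: 1.0) if intervene else (lambda r: P_sprinkler_given_rain[r])
    num = den = 0.0
    for rain in (True, False):
        weight = (P_rain if rain else 1 - P_rain) * p_s(rain)
        den += weight
        num += weight * P_wet[(rain, True)]
    return num / den

print("P(Wet | Sprinkler=on)     =", round(p_wet_given_sprinkler_on(False), 3))
print("P(Wet | do(Sprinkler=on)) =", round(p_wet_given_sprinkler_on(True), 3))
# The two differ: observing the sprinkler on is evidence against rain,
# while forcing it on tells us nothing about the weather.
```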
4. I am currently a cybersecurity professional, and I have seen software problems (in training, in industry discussions, and directly) that make me lose sleep. What happens at the intersection of these hair-raising moments and AI systems? You may want to consider the implications of AI (a) helping proliferate malware, (b) encouraging software developers to adopt sloppy solutions, (c) encouraging students to cheat their way through courses, (d) being exploited itself in various ways, and (e) flooding the industry with scammers, snake-oil peddlers, and fraudsters.
5. I was able to get ChatGPT to espouse, and simultaneously deny, that it is a dialetheist. This raises an interesting question in the philosophy of logic. These large language models (LLMs) have supposedly learned their logic strictly empirically, by generalizing from examples in ordinary language. Is this how humans learn theirs?
When logic instructors teach the elementary truth tables of classical logic, they often appeal to ordinary language. Yet they also know that the English “if” does not correspond directly to the material conditional, nor the conditional to “if.” When a human instructor discusses these considerations, does the instructor assume some sort of innateness? Is this just language? Or something else?
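As a small worked example of the gap the instructor must bridge (this is just the standard classical semantics, nothing beyond the textbook): the material conditional comes out true whenever its antecedent is false, however odd the matching English “if” sentence sounds.

```python
from itertools import product

implies = lambda p, q: (not p) or q   # the classical material conditional

print(" p      q      p -> q")
for p, q in product([True, False], repeat=2):
    print(f" {p!s:6} {q!s:6} {implies(p, q)!s}")

# Both rows with a false antecedent come out True, so the table certifies
# "if the Moon is made of cheese, then 2 + 2 = 5" as true: exactly where
# ordinary-language intuitions and the connective part ways.
```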
6. The Sapir-Whorf hypothesis is likely false in its strongest form. (Steven Pinker’s 1994 classic The Language Instinct and his later The Stuff of Thought from 2007 are good resources on this.) How likely is it that LLMs share the same general principles we do, and hence have enough shared background that we do wind up speaking the same language? What if, instead, we create, accidentally or maliciously, a multilevel system that is really an actor: internally (and in some “off stage” contexts) an alien intelligence play-acting as a human speaker?
I am reminded of Plato’s worry about drama: that actors would become the persons they portray. In modern terms, I think one can put this as a question about virtual machines. Dan Dennett, the philosopher who has done the most to explore the virtual-machine idea for understanding ourselves and other potentially intelligent systems, has a classic paper about what we now call, as I understand it, dissociative identity disorder (DID; formerly “multiple personality disorder”).
Dennett regards DID as an extreme form of what we are all like: our “selves” are partial in every case. But a virtual machine can in principle run a complete other machine, not just a partial one, at least if we pretend that all systems are Turing complete (which, having finite memory, they really are not). This item is not so much a specific question as an area where I am still at the Tintin-esque “!?” stage.
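For anyone who wants the virtual-machine point in miniature, here is a sketch (the guest instruction set is invented for the occasion): Python, itself a virtual machine, hosts an interpreter for another machine that is complete relative to its own instruction set, though bounded by the host’s finite memory, per the caveat above.

```python
def run(program, registers):
    """Interpret a list of (op, *args) instructions until HALT."""
    pc = 0
    while True:
        op, *args = program[pc]
        if op == "HALT":
            return registers
        elif op == "SET":                  # SET reg, value
            registers[args[0]] = args[1]
        elif op == "ADD":                  # ADD dst, src: dst += src
            registers[args[0]] += registers[args[1]]
        elif op == "JNZ":                  # JNZ reg, target: jump if nonzero
            if registers[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1

# A guest program that computes 6 * 7 by repeated addition.
guest = [
    ("SET", "acc", 0), ("SET", "i", 7), ("SET", "minus1", -1),
    ("ADD", "acc", "six"),                 # acc += 6
    ("ADD", "i", "minus1"),                # i -= 1
    ("JNZ", "i", 3),                       # loop back while i != 0
    ("HALT",),
]
print(run(guest, {"six": 6}))              # acc ends up at 42
```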