Keith Douglas
Recap
Last time I wrote about various “nothings”, and did not get any response. How appropriate, sadly.
General Intelligence
There’s a lot of hype still in the news about a race towards an artificial general intelligence. I would like to address “general” in this context. If we read (or listen) a bit further, we hear that this is supposed to mean an intelligence like that found in most human beings, one capable of performing intellectual tasks with the same range and depth as humans. Let’s take a look.
The first matter to realize is that humans have a very wide variety of skills; there’s an intellectual component in everything from hockey goaltending to gardening, history to horn music. But I have never met someone (though I have been fortunate to know a few who came very close indeed) who was competent across all of these domains. I had a friend in high school who described his goals at the time as winning an Olympic medal in judo and a Nobel Prize in medicine. He’s done very well, but I dare say neither of those dreams was realized. So, for general intelligence, are we expecting the capabilities of the species or of any one person? I’ve also known relatively less intelligent people and, more crucially, those whose intellectual standing is unclear. Pinker reminds us that there are no statistical variations by race or sex; we are all of one species as far as IQ is concerned. Incidentally, my sister (a clinical psychologist who does have to use intelligence testing) would remind us that to measure here we need certain sorts of observation of the subject and appropriate training. Aptitude tests are not intelligence tests. Consequently, it is at best a parlor game of sorts to claim that an IQ test has been given to this or that chatbot. Not being typical is not the limitation (IQ tests can be administered even to the non-verbal); lacking, as some say, “embodiment” is.
Another form of generality occurs in the theory of what are called Turing degrees. Those of you who have encountered elementary discussions of computability theory may not have encountered the idea of asking: “If we grant the ability to solve this problem, what else can you then do?” There’s a whole hierarchy; the unsolvability of the halting problem for Turing-machine-equivalent systems characterizes the first (or rather the zeroth – see my previous column) level of this hierarchy; a sketch of the classic argument appears below. Since every machine we know how to build is at this level, this (and other considerations) gives reason to suspect that we too are at this level. Even there we have generality in one sense – roughly, the ability to simulate all other machines of the same class – but not in others. In fact, once one moves from computability theory to computational complexity theory, matters are different. Many machine models are still similar-ish, but they vary in how efficiently (in time, space, etc.) certain tasks can be performed. The hope of quantum computing is that this variation can be exploited in a practical way.
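For readers who would like the flavor of that argument, here is a minimal sketch of the classic diagonalization behind the halting problem, written as Python-flavored pseudocode. The names are mine and purely hypothetical; the “decider” is precisely the thing the argument shows cannot exist.

```python
def halts(program, argument):
    """Hypothetical decider: True iff program(argument) eventually halts.
    Turing's diagonal argument shows no correct implementation can exist."""
    raise NotImplementedError("no such decider exists")

def diagonal(program):
    # Do the opposite of whatever the decider predicts about self-application.
    if halts(program, program):
        while True:    # loop forever if the decider says "halts"
            pass
    return "halted"    # halt if the decider says "loops"

# Feeding diagonal to itself forces a contradiction: if halts(diagonal, diagonal)
# returned True, diagonal(diagonal) would loop forever; if it returned False,
# diagonal(diagonal) would halt. Either answer is wrong, so halts cannot exist.
# Granting halts as an oracle anyway is exactly the move that takes you one
# level up the hierarchy of Turing degrees.
```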
Yet a running-time calculation of that complexity-theoretic sort (for example) may elide still other differences. There’s increased interest these days in the energy consumption of computing devices. Our nervous system (if it is a computing device – I think it is, but that’s a contentious conclusion in some circles) is massively more efficient than the data centers used to build and run the currently hyped systems. Why is that? Does general intelligence require energy efficiency? Arguably yes, for some tasks at least – ones, like goaltending, that require an agent to act in the world beyond a screen or printout. A practical (or even academic) robot raises these concerns regardless of domain, and as mentioned these tasks do indeed have an intellectual component.
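To put very rough numbers on that efficiency gap: the human brain is commonly estimated to draw about 20 watts. The back-of-envelope sketch below compares a year of that against an assumed one-gigawatt-hour training run; the training figure is a placeholder of my own, chosen only for scale, since published estimates vary widely.

```python
# Back-of-envelope energy comparison (illustrative figures only).
BRAIN_POWER_W = 20                  # commonly cited estimate for the human brain
SECONDS_PER_YEAR = 365 * 24 * 3600

brain_kwh_per_year = BRAIN_POWER_W * SECONDS_PER_YEAR / 3.6e6
print(f"One brain, one year: ~{brain_kwh_per_year:.0f} kWh")   # ~175 kWh

# Placeholder assumption: 1 GWh for one large training run. This is not a
# measurement of any particular system; it is here only to show the scale.
TRAINING_RUN_KWH = 1_000_000
print(f"One training run ~= {TRAINING_RUN_KWH / brain_kwh_per_year:,.0f} brain-years")
```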
Or do tasks like goaltending really have an intellectual component? Some people may regard this as intellectualizing. I don’t think it is, or at least not of the harmful sort. The goaltending example is from another high school friend, who used to play at a fairly elite level – as a goaltender. He said attention, motivation, and in general a sort of mindset are what set him apart from the other athletes, who were stronger or faster but never wanted to try playing goalie. He thinks this is also why goaltenders tend to be superstitious – he was thinking of Patrick Roy’s habit of talking to his goalposts before games. Superstitions can be looked at as pattern matching – clearly part of intelligence – gone a bit amiss.
So, does an AGI have to adopt our superstitions too? Will it have its own? Pascal Boyer has claimed that religion has its origin in what psychologists call “theory of mind” – a claim which yielded the successful prediction that the autistic are less religious. Theory of mind is of course itself an intellectual skill – and one that shows how variable the skill set is. Oliver Sacks (An Anthropologist on Mars) quotes the famous autist Temple Grandin as claiming that she has no theory of mind in the usual sense at all. This suggests again our “general for the species” theme – or not. Data on Star Trek: The Next Generation is often regarded as autistic in character, though the one-off character Tam Elbrun from “Tin Man” arguably captures some aspects that Data does not, like sensory overload – of which Data is famously the opposite.
So, why should our species (alone?) be regarded as the one whose intelligence counts as general? Some have suggested that cephalopods (octopuses, squids, cuttlefish …) are as close as we’ll get to “aliens on earth”. Certainly watching them suggests a mind at work somewhat different from our own; why shouldn’t the ability to control chromatophores be included in our notion?
Or even the ability to track a thrown stick and catch it in one’s teeth, as dogs do – though I imagine there are some humans who can do this!
And cephalopods and humans do share a common ancestor, albeit a very distant one; we therefore share some aspects of our physiology and anatomy (and some by convergent evolution – sort of – the eyes!). What would happen with a creature that grew up (as my father wondered) with liquid ammonia rather than water as a solvent?
I asked him if we could predict anything about it, and he told me something I later read about: there’s a connection between rates of reaction in the chemical sense and what might be called the psychological sense. Since liquid ammonia is possible only in (for us) extreme cold, one could hypothesize that such creatures would be incredibly slow to act, at least from our point of view. So at what time scale does one have to be intelligent to count as “generally intelligent”?
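The connection can be made concrete with the Arrhenius equation, k = A·exp(−Ea/RT), which says that reaction rates fall off exponentially as temperature drops. Below is a rough illustrative calculation; the activation energy is an assumed “typical” value, chosen only to show the order of magnitude, not data about any actual ammonia-based chemistry.

```python
import math

# Arrhenius equation: k = A * exp(-Ea / (R * T)). The prefactor A cancels
# when we take the ratio of rates at two temperatures.
R = 8.314           # gas constant, J/(mol*K)
Ea = 50_000.0       # assumed "typical" activation energy, J/mol (illustration only)
T_BODY = 310.0      # mammalian body temperature, K (water-based chemistry)
T_AMMONIA = 240.0   # near the top of ammonia's liquid range at 1 atm, K

ratio = math.exp(Ea / R * (1 / T_AMMONIA - 1 / T_BODY))
print(f"Same reaction runs ~{ratio:.0f}x faster at {T_BODY:.0f} K than at {T_AMMONIA:.0f} K")
# Roughly 300x slower in the cold: orders of magnitude, before any biology
# is even considered.
```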
None of these considerations are meant to rule out the possibility of an artificial intelligence of a very general sort – but, in my view, they tell against the idea that it will be like us, and it remains to be seen how much so. Next time, the “super” in “artificial super intelligence”.

Very informative! Thank you, Keith.
Interesting essay. The point that the energy required for AI is immensely greater than that required for our natural intelligence struck me as especially significant. How is it that our brains can develop ideas as complex as those of advanced mathematics, or compute almost instantly the trajectory of a frisbee, using such comparatively small amounts of energy?