Reprinted with permission from The Book of Minds: How We Understand Ourselves and Other Beings, From Animals to Artificial Intelligence to Aliens by Philip Ball, published by University of Chicago Press. © 2022 by Philip Ball. All rights reserved.
In 1984 the computer scientist Aaron Sloman of the University of Birmingham in England published a research paper calling for more systematic thinking about the mysterious but intuitive concept of the mind. It’s time, he argued, to bring into the conversation what we’ve learned about animal cognition, as well as what research on artificial intelligence and computer systems is telling us. Sloman’s paper was titled “The Structure of the Space of Possible Minds.”
“It is clear that there is not only one kind of mind,” he wrote:
In addition to the obvious individual differences between adults, there are differences between adults, children of different ages, and infants. There are differences between cultures. There are also differences between humans, chimpanzees, dogs, mice, and other animals. And there are differences between all of these and machines. Machines, too, are not all alike, even when made on the same production line, for identical computers can have very different characteristics if fed different software.
Now a professor emeritus, Sloman is the kind of academic who can’t be pigeonholed. His ideas bounce from philosophy to information theory to behavioral science, along a path that can leave fellow travelers dizzy. Ask him a question and you are likely to end up far from where you started. He can seem dismissive, even despairing, of other efforts to contemplate the mysteries of the mind. “Many facts are ignored or not noticed, either because researchers do not understand the concepts needed to describe them, or because the kinds of research needed to investigate them are not taught in schools and universities,” he told me.
Yet Sloman is deeply humble about his attempt four decades ago to broaden the discourse on minds. He feels his 1984 paper barely scratched the surface of the problem and had little effect. “My impression is that my thinking on these matters has been largely ignored,” he says, understandably, “because making real progress is very difficult, time-consuming, and risky in the current climate of constant evaluation by citation counts, grant funding, and novelty.”
But he is wrong about that. Several researchers at the forefront of artificial intelligence now suggest that Sloman’s paper had a galvanizing effect. Its blend of computer science and behavioral science must have seemed eccentric in the 1980s, but today it looks strikingly prescient.
“We must abandon the idea that there is one major boundary between things with and without minds,” he wrote. “Instead, informed by the variety of computational mechanisms already explored, we must acknowledge that there are many discontinuities, or divisions, within the space of possible systems: the space is neither a continuum nor a dichotomy.”
Part of the task of mapping the space of possible minds, Sloman said, was to survey and classify the kinds of things that different kinds of minds can do:
“This is a classification of different sorts of capacities, abilities, or behavioral dispositions—remembering that some of the behavior may be internal, e.g., recognizing a face, solving a problem, appreciating a poem. Different kinds of minds can then be described in terms of what they can and cannot do.”
The task is to explain what enables different minds to acquire their distinctive abilities.
Sloman wrote: “These explorations can be expected to reveal a very richly structured space—not one-dimensional, like a spectrum, and not any kind of continuum. There will be not two but many extremes.” Such mechanisms may range from ones so simple—thermostats, say, or the speed governors of motors—that we don’t conventionally liken them to minds at all, to the kinds of responsive and adaptive behavior exemplified by even simple organisms such as bacteria and amoebas. “Instead of fruitless attempts to divide the world into things with and things without the essence of mind, or consciousness,” he wrote, “we should examine the many detailed similarities and differences between systems.”
This would be a project for anthropologists, cognitive scientists, behavioral scientists, computer scientists, philosophers, and neuroscientists, among others. Sloman felt that AI researchers should focus less on the question of how close artificial cognition comes to the human kind, and more on learning how cognition evolved and how it manifests in other animals: squirrels, weaver birds, corvids, elephants, orangutans, cetaceans, spiders, and so on. “Current AI throws ever more computer memory and speed, and ever larger amounts of training data, at problems, allowing progress to be reported without much understanding or replication of natural intelligence,” he said. In his view, that is not the right way to go about it.
Although Sloman’s concept of a space of possible minds prompted some researchers to think about intelligence and how it might be created, the mapping never really began. The relevant disciplines he named were too far apart in the 1980s to find much common ground, and in any case we were then only beginning to make headway in unraveling the cognitive complexity of our own minds. In the mid-1980s, corporate interest in AI research on so-called expert systems quickly evaporated, leading to a lull that lasted until the early 1990s. The very notion of “machine minds” came to be widely dismissed as hype.
Now the wheel has turned, and there has never been a better time to think about what Sloman’s mind space might look like. Not only is AI finally beginning to prove its worth, but there is a widespread sense that further improvement—and perhaps even the creation of a kind of “artificial general intelligence” with human-like capabilities, as the field’s founders envisioned—will require a close look at how today’s putative machine minds compare with, and differ from, our own.