Many years ago, another writer asked me for help in describing an alien intelligence. Being a theoretical biologist (which means a generalist), I was game. According to some reports I had recently read, intelligence was a product of a brain with multiple modules, each one specializing in a sense such as sight or hearing, or in a function such as speech generation or interpretation, memory, association, planning, and so on. The modules must communicate with each other (for instance, to link hearing with speech interpretation and memory). Such a description is hardly definitive, but the author found it useful and carried on with the novel in progress.
We still don’t have a better idea of how the brain generates apparent intelligence and consciousness, but in 2020 there appeared an interesting report: Ramon Guevara, Diego M. Mateos, and Jose Luis Perez Velasquez, “Consciousness as an Emergent Phenomenon: A Tale of Two Different Levels of Description,” Entropy (Basel), September 2020, 22(9):921 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7597170/).
From the abstract: “Modern neurobiological theories of consciousness propose that conscious experience is the result of interactions between large-scale neuronal networks in the brain, traditionally described within the realm of classical physics. Here, we propose a generalized connectionist framework in which the emergence of ‘conscious networks’ is not exclusive of large brain areas.” The emphasis was on networks whose “essential feature … is the existence of strong correlations in the system (classical or quantum coherence) and the presence of an optimal point at which the system’s complexity and energy dissipation are maximized, whereas free-energy is minimized.”
The paper suggests that such networks might arise at many scales. The authors emphasize the sub-cellular scale, dominated by quantum effects. I would like to look at a larger scale instead, where recent work in AI (Artificial Intelligence) might lead to interesting results. Consider Benj Edwards, “Surprising Things Happen When You Put 25 AI Agents Together in an RPG Town,” Ars Technica, April 11, 2023 (https://arstechnica.com/information-technology/2023/04/surprising-things-happen-when-you-put-25-ai-agents-together-in-an-rpg-town/). Researchers from Stanford University and Google took 25 characters or “generative agents” controlled by ChatGPT, the AI that has been making major waves since its release last fall, and put them into a virtual town to interact. Very quickly, the generative agents began to “form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day.” These behaviors were not pre-programmed; they “emerged” from the agents’ interactions.
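The observe-reflect-plan cycle that produced those behaviors can be caricatured in a few lines of code. This is only an illustrative sketch of the idea, not the researchers’ actual system: the class and function names are invented here, and a stub stands in for the ChatGPT calls the real agents make.

```python
# Illustrative sketch of a "generative agent" loop: observe events,
# periodically reflect (summarize memories into higher-level notes),
# and plan conditioned on the whole memory stream. All names are
# assumptions; the real system prompts ChatGPT at each step.

def stub_llm(prompt):
    # Placeholder for a language-model call; returns a canned "thought".
    return f"reflection on: {prompt[:40]}"

class GenerativeAgent:
    def __init__(self, name):
        self.name = name
        self.memory = []  # running stream of observations and reflections

    def observe(self, event):
        self.memory.append(event)

    def reflect(self):
        # Summarize recent memories into a note that is itself remembered,
        # which is what lets agents "reflect on days past."
        note = stub_llm("; ".join(self.memory[-3:]))
        self.memory.append(note)
        return note

    def plan(self):
        # Next-day plan conditioned on everything remembered so far.
        return stub_llm("plan given " + "; ".join(self.memory))

agents = [GenerativeAgent(n) for n in ("Ada", "Ben")]
agents[0].observe("met Ben at the cafe")
agents[1].observe("met Ada at the cafe")
for a in agents:
    a.reflect()
    print(a.name, "->", a.plan())
```

The key design point, tiny as this sketch is, is that reflections go back into the same memory the planner reads, so each day’s behavior is shaped by compressed summaries of earlier days rather than by any pre-programmed script.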
I do not wish to say that the “emergent behaviors” of these 25 artificial characters amount to intelligence or consciousness, no matter how suggestive those behaviors may seem. But…
What might we achieve if we matched ChatGPT-based “generative agents” with all the sensory, memory, planning, and other modules we have identified in a human brain and set them to interacting with each other and the real world? What “emergent behaviors” would we see?
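One way to picture that question concretely is as a set of specialized modules exchanging messages over a shared bus, each module standing in for a brain region backed by its own language model. Everything below is hypothetical: the module names, the bus protocol, and the stubbed handlers are invented for illustration, not a description of any existing system.

```python
# Hypothetical sketch: brain-like "modules" (hearing, memory, planning)
# wired through a publish/subscribe bus, echoing the brain-module
# description earlier in this piece. In a real experiment each handler
# would be a specialized ChatGPT prompt; here it just annotates and
# forwards the message so the information flow is visible.

from collections import defaultdict

class Bus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)

class Module:
    def __init__(self, name, bus, listens, emits):
        self.name, self.bus, self.emits = name, bus, emits
        self.log = []  # what this module has "experienced"
        for topic in listens:
            bus.subscribe(topic, self.handle)

    def handle(self, message):
        # Record the input, wrap it with this module's "interpretation",
        # and pass it along to whichever modules listen downstream.
        self.log.append(message)
        if self.emits:
            self.bus.publish(self.emits, f"{self.name}({message})")

bus = Bus()
hearing = Module("hearing", bus, ["sound"], "percept")
memory = Module("memory", bus, ["percept"], "recall")
planner = Module("planner", bus, ["recall"], None)

bus.publish("sound", "someone says hello")
print(planner.log)
```

The interesting question the column raises is exactly what this toy cannot show: whether, once the handlers are real language models and the bus carries traffic among dozens of such modules and the outside world, new behaviors emerge that no single module was programmed to produce.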