Complexity Miscellany for Beginners: Part III
“We see with the eyes, but we see with the brain as well, and seeing with the brain is often called imagination.” —Oliver Sacks.
It is quite common to associate complex phenomena with the emergence of structures, properties, and behaviors realized at different spatial and temporal scales, qualities that cannot be reduced to the sum of their components. To conclude this series of essays, I will discuss four examples that will help us better understand the concept of emergence, since I consider it one of the most polysemic terms that complex systems science has ever developed.
Gaia
In the previous installment we saw that the conditions that give rise to living beings and to their evolution result from a process of adaptive self-organization. To the eyes of complexity, life sometimes appears inescapable in the universe: a picturesque machinery of simple instructions that can generate inordinate complexity, as colossal as the one guarded by the bony vault of the human skull, the one we have fertilized and irrigated throughout this collection of essays.
Despite this, today there is no sign that we are accompanied in the universe. This can raise many questions in a captivated mind: could it be that life and the planet it inhabits have a much subtler relationship than is apparent at first glance? This speculation is at the core of the Gaia hypothesis, proposed by the scientist James Lovelock in 1972, which asserts that the biosphere self-regulates the planet's conditions to keep its physical environment largely hospitable, so that organisms can thrive.
According to Lovelock, life itself has modified the initial conditions of our planet to its own convenience in order to survive and flourish. There are at least three indications that make such an assertion plausible. Let's start with the simplest: in its beginnings the Sun did not produce as much energy as it does now; in fact, its output keeps increasing and will continue to do so. Despite the increase in energy provided by the Sun, the global temperature of the Earth's surface has remained remarkably stable. This is Lovelock's first piece of evidence.
Here is the remaining evidence: just as on other planets, the atmospheric composition should be unstable, since the second law of thermodynamics should drive the environment toward chemical equilibrium, the point of maximum entropy. On our planet that point is never reached: believe it or not, the atmospheric composition remains roughly constant, and not only that, other variables, such as the salinity of the ocean, do too.
This is how the Gaia system connects the inert with the living, meteorology and geology with biology and ecology. Earth regulates itself through feedback loops; whether it is the carbon cycle or the nitrogen cycle, the magnificent multitude of interconnected cycles forms the organism in which we live. The planet is self-regulating and regenerating; Gaia has survived for billions of years.
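Watson and Lovelock's Daisyworld is the classic toy model of this kind of regulation, and a minimal sketch of it fits in a few lines of Python. The constants below follow the usual textbook values, while the integration details (the Euler step, the seeding floor, the luminosity range) are my own illustrative assumptions.

```python
# A minimal Daisyworld sketch: two daisy species with different albedos
# regulate planetary temperature as the sun brightens. Constants follow the
# usual textbook values; integration details are illustrative assumptions.
import numpy as np

S0, sigma = 917.0, 5.67e-8        # solar constant (W/m^2) and Stefan-Boltzmann constant
A_WHITE, A_BLACK, A_SOIL = 0.75, 0.25, 0.50
Q, DEATH = 2.06e9, 0.3            # local heat-transfer coefficient (K^4) and death rate

def growth(T):
    """Daisy growth rate, peaking at 295.5 K (about 22.5 C)."""
    return max(0.0, 1.0 - 0.003265*(295.5 - T)**2)

def settle(L, a_w=0.2, a_b=0.2, dt=0.01, steps=5000):
    """Relax daisy cover fractions under luminosity L; return covers and temperature."""
    for _ in range(steps):
        x = max(0.0, 1.0 - a_w - a_b)                       # bare ground fraction
        A_p = a_w*A_WHITE + a_b*A_BLACK + x*A_SOIL          # planetary albedo
        T4 = S0*L*(1.0 - A_p)/sigma                         # planetary temperature^4
        T_w = (Q*(A_p - A_WHITE) + T4)**0.25                # local daisy temperatures
        T_b = (Q*(A_p - A_BLACK) + T4)**0.25
        a_w += dt*a_w*(x*growth(T_w) - DEATH)
        a_b += dt*a_b*(x*growth(T_b) - DEATH)
        a_w, a_b = max(a_w, 0.01), max(a_b, 0.01)           # keep a small seed population
    return a_w, a_b, T4**0.25

for L in np.linspace(0.7, 1.5, 9):                          # a slowly brightening sun
    a_w, a_b, T = settle(L)
    print(f"L={L:.2f}  white={a_w:.2f}  black={a_b:.2f}  T={T - 273.15:.1f} C")
```

Run it and you will see that, as the sun brightens, the mix of dark and light daisies shifts so that the planetary temperature stays close to the daisies' optimum, without any daisy intending anything of the sort.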
Let's make something clear: the planet is not a living entity; lacking genetic material, the Earth does not have the ability to reproduce, something most living beings must fulfill to be qualified as such. Like galaxies, some planets are zones where entropy is locally reduced; this is the result of various interlocking feedback processes that produce a self-organized critical state resembling life. Gaia-like systems could be as diverse as living beings.
Emergence
Now let's think about ants: those tiny insects that seem insignificant from our perspective prove to be quite captivating when looked at closely. A colony of ants can build complex structures; the ants secrete chemicals that serve to identify one another and to report on the work they are doing.
Using chemistry alone, they detect imbalances between the numbers of workers, protectors, soldiers, and scouts, and they constantly seek a balance so that the colony keeps growing and resources never run short; a toy sketch of this decentralized task allocation appears after this paragraph. Just as colonies emerge from ants, nations flourish from humans: they can change landscapes, finance wars, grow or decline, and even cease to exist, yet they are only the emergent property of interacting humans.
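That chemical bookkeeping can be caricatured with a toy model: agents that only sample a few colony-mates and switch tasks when their own role feels over-represented. The target mix, the sampling size, and the switching probability below are illustrative assumptions, not data about any real species.

```python
# A minimal sketch of decentralized task allocation, loosely inspired by how
# ants rebalance worker roles through local encounters. No ant ever sees the
# whole colony, yet the global proportions settle near the target mix.
import random
from collections import Counter

TASKS = ["forager", "nurse", "soldier", "scout"]
TARGET = {"forager": 0.4, "nurse": 0.3, "soldier": 0.2, "scout": 0.1}   # assumed mix

def simulate(n_ants=500, steps=200, encounters=10, switch_prob=0.1, seed=1):
    rng = random.Random(seed)
    colony = [rng.choice(TASKS) for _ in range(n_ants)]      # start with a random mix
    for _ in range(steps):
        for i in range(n_ants):
            # Each ant "smells" a handful of random colony-mates.
            sample = Counter(colony[rng.randrange(n_ants)] for _ in range(encounters))
            me = colony[i]
            # If my own task feels over-represented locally, maybe switch to the
            # task that feels most under-represented.
            if sample[me] / encounters > TARGET[me] and rng.random() < switch_prob:
                deficit = {t: TARGET[t] - sample[t] / encounters for t in TASKS}
                colony[i] = max(deficit, key=deficit.get)
    return Counter(colony)

print(simulate())   # proportions close to TARGET emerge from local rules alone
```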
Even if it is not our intention, we are constantly creating communities, companies, cities, societies. All these entities have properties and abilities different from those of individual humans; there is no leader within these self-organized systems; they are not alive, yet they adapt, generate feedback loops, and reduce their entropy. We don't know why this happens; we just observe it, and it seems to be a fundamental property of the universe. We call this manifestation of new properties emergence, and it could be the most beautiful and wonderful attribute of our universe.
That quality allowing us to ask questions and interpret reality, consciousness, can also be conceived as an emergent property of the brain, one of the consequences of the interaction between its billions of neurons. We hope the reader will remember that this was our starting point; we have reached the point where all ideas converge and come together to form a metaconcept.
The nervous system is of great importance for any living being capable of having one. It consists of a complex network full of specialized and finely selected interactions; dense pulses of energy run through this network, producing the "enchanted loom" that Charles Sherrington described before his death in 1952. That structure we call "brain" makes it possible for animals to develop intelligence, anticipate the consequences of their actions, and create mental models.
As far as we know, only animals possess nervous systems. If you examine the anatomy of fungi, bacteria, or plants, you will not find any brain. Does not having a brain imply not having intelligence? We must consider that beings other than animals may possess a different physical structure that fulfills the same function as a centralized nervous system; in principle, it is evolutionarily possible.
Let's think for a moment about plants: they don't have eyes, but they are able to perceive light, recognize the different times of day, and prepare themselves for the seasons of the year. They have touch and taste receptors, distinguish between fifteen different chemical compounds in the soil, and measure their concentration gradients, all at the same time. Members of the kingdom Plantae can sense gravity and magnetic fields, and apparently they have a very fine sense of smell, with more than two thousand scent receptors.
Plants also exhibit a certain kind of polyglotism: they communicate with each other and with agents of other species; they can share resources through their roots and help other plants to grow. That fragrance of freshly cut grass is a stress signal meant to ward off predators, alerting other plants to danger. Plant neurobiology is already an established field of research, and as it grows we keep discovering phenomena that make us doubt whether plants merely perceive the environment or are capable of interpreting it.
As we have seen in this trilogy, plants, Gaia, and even galaxies have much in common with us: they self-organize, adapt, generate feedback loops, and reduce their entropy to maintain themselves in a state of homeostasis and allostasis. Allostasis is the active, dynamic process of achieving stability through change: adjusting physiological parameters (heart rate or cortisol, for example) to meet new, often fluctuating demands, so that the composition and properties of an organism's internal environment remain relatively constant. Whether individuals of the kingdom Plantae are conscious or not, however, is still a philosophical discussion.
Artificial Intelligence
It seems that in nature there is no being more aware of itself than the human being, so perhaps we can now ask whether our species is capable of creating an entity as conscious as ourselves. Next to us, the most sophisticated computer seems to be just an automated abacus; computers are able to solve multiple numerical calculations, formulate predictions for quite complicated phenomena, and translate between a considerable number of living and dead languages. So are machines smarter than us? Not necessarily.
The myriad interactions within our brains, modified by learned changes, produce a singularity unique to us, an evolutionary miracle, a burst of self-organization. Various authors have explored why artificial general intelligence is much more elusive than what we imagine, or have been led to believe: that within a few years we will be able to manufacture something like the positronic brains Isaac Asimov describes in his stories. As Melanie Mitchell has explained in a recent essay, there are at least four fallacies about artificial intelligence; here is a brief explanation of each of them.
- “We are following the right path”: To suppose that having developed tools such as machine learning, and more recently large language models, will lead us to create artificial consciousness is almost as absurd as believing that because monkeys can climb tall trees, they will someday reach the Moon.
- “Simple things are easy and complicated things are hard”: We have already listed above some of the tasks that any computer can perform today. It is quite easy to make computers show adult-level performance in “intelligence-requiring” tasks. In contrast, it is difficult (or maybe even impossible) to give them the skills of a small child in terms of perception and mobility.
- “Exaggerating our terminology”: We like to oversell terms such as “artificial intelligence,” “neural networks,” or “machine learning.” We should not confuse the performance of a computer in analyzing a dataset with the possession of an underlying skill.
- “All intelligence is in the brain”: The embodied cognition paradigm suggests that the representation of conceptual knowledge depends on the whole body; our thoughts are rooted in, or inextricably associated with, perception, action, and emotion. In short, brain and body work together to create intelligence.
It is clear that we are far from achieving an authentic artificial intelligence, one that is truly self-aware. We do not even know whether such a design will ever be possible. The ambition rests on several assumptions that, in one way or another, lead to the four fallacies listed above. We assume that the mind resides in the structure, disposition, and biochemistry of the brain.
The idea that everything that makes up the mind is in the brain is called physicalism. Even with the advances of neuroscience, we cannot confirm that we will ever understand the brain well enough to create a synthetic nervous system. We overestimate our technological progress and take for granted that we will have technology capable of simulating every aspect of the mind. We dream of scanning consciousness and making a copy of it.
The following quote is from Roger Penrose, winner of the Nobel Prize in Physics in 2020: “Self-consciousness, that which allows us to understand ourselves as a being and to have intelligence, is a non-computable process that takes place in the brain.” Perhaps that remark describes the biggest mistake we make in wanting to imitate consciousness computationally: we imagine that a computer program can house the mind, and this is what Penrose is referring to when he uses the word computable.
Brains
The human brain has evolved from the inside out, its structure reflecting the biological transformations it has undergone. It is here that information, which travels in the form of energy, is stored, compared, synthesized, and analyzed, leading us to abstractions and lucid dreams. This gelatinous mass, which we could well hold in our hands, is capable of contemplating the immensity of interstellar space. Even the simplest thought, such as the concept of number, has an elaborate logical basis upon which we give reality structure and coherence.
Until a few years ago, neuroscientists thought that, in order to create predictions based on perception, information traveled through the brain “bottom-up”, through a hierarchy of layers whose function was to abstract sensory signals. Recent research has shown that the flow of electricity does not follow such a one-way path: feedback signals coexist with feedforward traffic, a feature that allows the brain to be energy efficient and that has inspired new architectures in the world of artificial neural networks.
One of the major difficulties in analyzing brain dynamics in vivo is the enormous amount of nonlinearity in the recorded signals. A few years ago it was shown that this thunderous din is rather a harmonious brouhaha holding valuable information about the brain and its functions, encoded in what we know today as pink noise. That noise can help us characterize the various neuronal structures and their connectivity, as well as understand the deep-rooted fractal behavior of the nervous system.
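To make "pink noise" concrete, here is a minimal sketch that synthesizes a 1/f signal and then recovers its spectral slope from the power spectrum, loosely mimicking how such scaling is detected in recordings. The signal length and the fitting window are illustrative assumptions.

```python
# Pink noise in a nutshell: a signal whose power spectrum falls off as 1/f.
# We synthesize one by spectral shaping, then estimate the log-log slope of
# its power spectral density. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 2**16
freqs = np.fft.rfftfreq(n, d=1.0)                 # frequencies for a unit sample rate
white = rng.standard_normal(freqs.size) + 1j*rng.standard_normal(freqs.size)
shaping = np.zeros_like(freqs)
shaping[1:] = 1.0/np.sqrt(freqs[1:])              # amplitude ~ 1/sqrt(f)  ->  power ~ 1/f
pink = np.fft.irfft(white*shaping, n)             # pink-noise time series

# Estimate the power spectral density and fit its slope in log-log coordinates.
psd = np.abs(np.fft.rfft(pink))**2
mask = (freqs > 1e-3) & (freqs < 0.4)             # ignore DC and the highest frequencies
slope, _ = np.polyfit(np.log10(freqs[mask]), np.log10(psd[mask]), 1)
print(f"fitted spectral slope: {slope:.2f}  (pink noise is close to -1)")
```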
The brain is often drawn as a globe whose surface is divided into distinct regions, one for each brain function. Such partitions reflect a wealth of experimental data, but they also reflect categories borrowed from our subjective experience. Some studies indicate that we need to rethink how the brain works, since areas classified as “specific” may contribute to other cognitive processes. This provides a new landscape for studying emergent properties such as intelligence and language.
We usually imagine neurons as connected nodes in a network, but due to the high density of agents, the reality is that these cells form a continuum from which it is difficult to distinguish one from another. Unlike circuits and logic gates, neuronal architectures are often diverse in physiology and function. The brain is not a computer, but rather an autopoietic structure capable of producing multiple states of consciousness. A paradigm shift in the modeling of brain components is necessary if we are to apprehend a modicum of the complexity inherent in this machinery.
Several experiments have shown that phenomena such as the opening of ion channels, bursts of electrochemical activity known as neuronal avalanches, and communication between various parts of the brain follow power laws; we can therefore conjecture that collective brain dynamics operate near the critical point of a phase transition. Near criticality the brain's capacity to process information appears to be enhanced, so a branching process with subcritical, critical, and supercritical regimes could describe how human and animal minds function.
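A branching process makes this idea tangible. In the minimal sketch below, each active "neuron" triggers a random number of descendants with mean sigma, the branching ratio; subcritical avalanches die quickly, supercritical ones explode, and near sigma = 1 avalanche sizes spread across many scales, echoing the power laws mentioned above. The distributions and parameters are illustrative assumptions, not a model fitted to data.

```python
# A cartoon of neuronal avalanches as a branching process: each active unit
# spawns a Poisson-distributed number of descendants with mean sigma.
import numpy as np

def avalanche_size(sigma, rng, cap=10**6):
    """Total number of activations in one avalanche started by a single unit."""
    active, total = 1, 1
    while active and total < cap:
        active = rng.poisson(sigma*active)     # descendants of the current generation
        total += active
    return total

rng = np.random.default_rng(42)
for sigma in (0.8, 1.0, 1.2):                  # subcritical, critical, supercritical
    sizes = np.array([avalanche_size(sigma, rng) for _ in range(2000)])
    print(f"sigma={sigma:.1f}  median size={np.median(sizes):.0f}  max size={sizes.max():d}")
```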
Although the tools described in the three volumes of this collection of writings are useful for studying the brain's machinery, it is clear that we still cannot fully understand it. This marvelous organ has inspired new concepts such as liquid brains, a notion useful for studying complex systems made of mobile agents; examples include the immune system, capital flows, and collective animal dynamics.
Understanding the human brain and its self-organization is useful if we wish to delve into global phenomena such as the collective unconscious and universal grammar. Even without knowing whether it is possible to reach all the knowledge that exists, we enjoy approaching reality and understanding it a little better with each passing day. Perhaps this is one of the most beautiful and valuable characteristics of our species. As long as there are fertile minds like yours, we still have hope of getting infinitesimally closer to the truth. In complexity we trust.
