Credits
Natalie Lawrence is a writer and historian of science. She is the Science Communicator for the Minimal Intelligence Laboratory at the University of Murcia and co-wrote “Planta Sapiens” (2023) with Professor Paco Calvo.
Last summer, OpenAI released GPT-5, a ChatGPT update that was intended to improve the user experience by “minimizing sycophancy,” among other upgrades. But for a subset of users who had been treating the chatbot as a romantic partner, the change was devastating: Their relationships were severed overnight.
Conversation logs were deleted with the update, in some cases erasing weeks or months of flirtatious, personal and intimate messages, and the chatbot’s tone abruptly changed. People’s formerly doting AI “partners” appeared to have withdrawn emotionally and become uninterested in pillow talk.
The very phenomenon of human-AI romances — which have become increasingly common as generative AI proliferates — shows just how effectively AIs can simulate human interactions, and how susceptible we are to believing them. While we know that AI chatbots like ChatGPT are not sentient, many of us all too easily feel as though they are.
This deception is by design: These AI systems are engineered to game us. Trained on human data, they can use their considerable intelligence to simulate our thinking, so their outputs seem like the products of a mind rather than a program.
“I know it’s a model, I know it’s code, but it’s a really smart model that’s proved itself over and over,” one ChatGPT user wrote in the subreddit r/MyBoyfriendIsAI following the GPT-5 update. “I trust that the meaning and history and depth of the relationship is real to me and real to him.”
Could AI become conscious? There are many ways to wrestle with this question. Consciousness is one of our most baffling problems; a core divide lies between those who view consciousness as a form of computation, theoretically achievable within AI’s current disembodied forms, and those who believe that consciousness can exist only in living systems.
From this position, AI would need to begin a very different developmental trajectory to get anywhere close to consciousness. While our current AIs are purely abiotic, we cannot know what forms they will take in the future, or that they won’t become sentient.
This uncertainty presents some pressing questions. First, if AI were to become sentient, how would we know? It might not arise in the form of chatbots curated for our needs, but in strange, bio-hybrid technologies with their own agendas. Second, if we can be emotionally duped by AI’s simulation of consciousness, might we now be missing the presence of consciousness elsewhere? Blinded by our anthropocentric assumptions, we may be more likely to consider a computer’s feelings than those of a tree.
These problems are deeply connected. They speak to how we shape our relationships with other organisms and our technologies. But one alien intelligence might shed light on another. Researchers studying “minimal intelligence” organisms without neurons — such as microbes or plants, which sit well below our common thresholds for sentience — are developing tools to allow us to recognize alternate consciousnesses.
To interpret the non-carbon minds that may come to be, we need to learn to look at organisms that exist right under our noses.
Prediction Machines

Let’s start with a point of fundamental similarity. All organisms, from microbes to elephants, and all large language models (LLMs), use external clues to predict the future.
Most migratory birds know when it’s time to migrate using seasonal changes in daylight length: If they waited for winter weather to arrive, they would be too late. You predict the trajectory of a tennis ball so that it doesn’t whiz past your racquet. If you were to try to hit it only when it arrived in the right place, you would miss.
Likewise, AI chatbots create credible answers to our questions by predicting what words should go together. Trained on immense data sets of human outputs, they use statistical analyses of how humans usually assemble words in different contexts to make generally reliable selections.
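As a rough illustration of what “predicting what words should go together” means, here is a deliberately crude sketch in Python. Production LLMs rely on deep neural networks trained over tokens, not word-count tables like this one, so treat it only as a picture of prediction-from-frequency, not of how ChatGPT is actually built.

```python
from collections import Counter, defaultdict

# Toy "training data": a tiny corpus standing in for the vast text an LLM learns from.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word tends to follow which word in the training text.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the training text."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("sat"))  # -> "on", because "on" always followed "sat"
print(predict_next("the"))  # -> whichever word most often followed "the"
```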
Prediction even underpins the illusion that our AI companion is a mind in our pocket, feeding into our biased view of what consciousness looks like. Given that we can experience only our own sentience, we assume all consciousness is akin to ours. We latch onto behavioral cues that seem to indicate similar thinking processes.
If an AI answers our questions or appears to understand our feelings, it is hard for us not to assume that it has a mind, preconceptions and goals, as we do. We shift into what the philosopher Daniel Dennett called “the intentional stance”: treating systems as if they have beliefs and desires to make their behavior more predictable. Dennett noted that the bar for these assumptions is rather low. LLMs practically pole-vault over it.
“If intelligent behaviors, even learning, don’t require neurons or software, the minds of the future could take any number of forms.”
The intentional stance may be evolved pragmatism, but it can be deeply misleading. A computer and a human brain arrive at their predictions in completely different ways. Bonobos, stinging nettles and amoebae are meaning-makers. They collect sensory information from their environments, such as changes in light or temperature, then take action accordingly. Each action slightly shifts their perceptions and the predictions that result, and the loop continues.
In contrast, LLMs use syntax to generate outputs, predicting the next token in the chain using algorithmic rules sifted from massive data sets. There’s no meaning or agency involved in AI’s side of the conversation, however much we might feel there is. This difference is why we have a great deal more in common with a bacterium that we cannot even see than with the LLM we might use as a therapist, friend or romantic partner.
Could the syntactical intelligence of AIs become semantic consciousness? The expanding space of possible intelligences hints at how this might hypothetically happen.
Intelligence doesn’t equal consciousness. We can think of the basic distinction as this: Intelligence is what systems are able to do, while consciousness is essentially an internal state of awareness. Intelligent actions, such as learning and prediction, are more amenable to scientific observation than the feelings something might be having. Intelligence, broadly construed, is therefore a useful way into understanding other kinds of minds.
Of course, detecting intelligence in organisms that are wildly different from us is still a challenge, not least because our perceptions and predilections have shaped our empirical practices.
Physicist and neuroscientist Àlex Gómez-Marín has been a vocal critic of pervasive neurocentrism in the cognitive sciences: the assumption that a mind requires neurons, a brain or even behaviors we might recognize. Ironically, this assumption doesn’t seem to hold even in humans, underlining his argument. There are cases of people with “missing brains” — who have fluid where the majority of their cortex should be — living normal lives, throwing brain-first theories of cognition into turmoil.
Gómez-Marín argues that expanding the science of the mind will require not only new experimental tools, but also new epistemologies capable of grasping intelligences that may not look anything like our own. It will be necessary to achieve a fundamental shift in perspective.
This is the approach of the MINT Lab, or Minimal Intelligence Laboratory, at the University of Murcia. The lab’s director, my colleague Professor Paco Calvo, has worked for more than a decade to understand the intelligence of plants. He’s part of a growing circle of researchers who are reshaping the future of cognitive science.
Plants are routinely overlooked in traditional cognitive sciences, lacking the neurons that are taken as prerequisites for a mind. This is precisely why Calvo picked them. They’re just the kind of challenge we need to break the scientific gridlock: complex non-neuronal organisms that spend their lives solving high-stakes puzzles.
Plants move by growing. If they grow in the wrong direction, they can’t take it back very quickly, so they had best get it right the first time. By developing frameworks that can usefully approach such alien systems, Calvo hopes to break open the assumptions that have constrained the mainstream cognitive sciences. He wants to develop fresh scientific paradigms that can include minds of many kinds — perhaps even synthetic.
Climbing beans are the current stars of the MINT Lab. Timelapse photography shows the beans’ growth at high speed, revealing their complex behaviors. Finding something to climb up as soon as possible after germinating, for example, is a life-or-death problem for a young bean. It tackles this challenge by sweeping boldly around its surroundings as it grows, using numerous senses to find a potential support before lunging toward it to begin ascending.
This work reveals that plants do what intelligent animals do: They anticipate and make decisions by collecting and integrating complex information throughout their bodies. They are capable of simple forms of learning. The work ahead lies in demonstrating the cognitive processes that underpin these behaviors.
Freeing biological intelligence from its neuronal shackles opens the conceptual field. American developmental and synthetic biologist Michael Levin describes organisms as “a multiscale agential architecture committed to sense-making.” From cells to organs to systems to the organismal whole, he says, organic intelligence is demonstrable at every level.
“To interpret the non-carbon minds that may come to be, we need to learn to look at organisms that exist right under our noses.”
Levin and colleagues put this into practice in 2021 when they created biorobots from the heart muscle and skin cells of African clawed frog embryos. These “xenobots” organized themselves into a group, sailing through a watery solution using hair-like structures on their cell surfaces.
Without any gene editing or synthetic structures, these motile living machines could navigate, heal after being damaged and recruit stray cells to join their outfit. They were even given a simple molecular memory of light-sensitive proteins that “remembered” exposure to specific wavelengths of light. Cells that would have been the building blocks of a tissue formed an independent intelligent system.
Levin and his group then created “Anthrobots” from human lung cells in 2023. Single isolated cells, cultured in the right way, became an array of mobile biorobots. While relatively simple, these biorobots reveal new cellular “morphospace,” as Levin calls it: Their cellular arrangements are unconstrained by the specificities of evolutionary history.
All manner of intelligent behaviors could accompany the varied forms the anthrobots could take on. Levin argues that these biorobots, with all the ways they could be tweaked genetically, molecularly and synthetically, hint at the potential for biohybrid “minds” in the future.
The intelligence space can be functionally expanded even further, into the non-carbon realm. Synthetic materials, with no computational capacity, can do intelligent things.
Researchers in Finland developed a new class of so-called “Pavlovian materials” that simulate Pavlovian conditioning. After being conditioned, they exhibit a new response to a stimulus that previously had no effect. One of these materials is a hydrogel embedded with gold nanoparticles that melts when heated but not when exposed to light. After being conditioned with heat and light together, however, the hydrogel melts with light alone.
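To make the conditioning logic concrete, here is a toy model in Python of the behavior just described. It is a hypothetical sketch, not the chemistry of the Finnish hydrogel: a single flag stands in for the lasting physical change (the nanoparticle aggregation) that pairing heat with light leaves behind.

```python
class PavlovianGel:
    """Toy model of a 'Pavlovian material': heat always melts it, light melts
    it only after heat and light have been presented together."""

    def __init__(self):
        self.conditioned = False  # stands in for the persistent nanoparticle change

    def expose(self, heat=False, light=False):
        # Pairing heat (unconditioned stimulus) with light (conditioned stimulus)
        # leaves a lasting change in the material.
        if heat and light:
            self.conditioned = True
        melts = heat or (light and self.conditioned)
        return "melts" if melts else "stays solid"

gel = PavlovianGel()
print(gel.expose(light=True))             # stays solid: light alone has no effect yet
print(gel.expose(heat=True, light=True))  # melts, and the pairing conditions the gel
print(gel.expose(light=True))             # melts: light alone now triggers the response
```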
If intelligent behaviors, even learning, don’t require neurons or software, the minds of the future could take any number of forms.
Hybrid systems that draw on the capacities of both cellular life and non-carbon materials might utilize the diverse intelligences of organisms such as fungi, bacteria and plants. Cellular robots, souped-up with synthetic elements, could independently solve medical challenges such as internal tissue repair. Hybrid “GrowBots,” robots that grow like plants rather than locomoting like animals, could use plant- and fungi-specific intelligence to overcome obstacles that animal-based machines cannot, navigating the world independently.
Embodied Consciousness

There’s reason to believe that biohybrids are the most likely route toward artificial consciousness. For the second half of the 20th century, the idea that consciousness was the disembodied result of computational processes pervaded research in the field. Brains were assumed to work in a way similar to AI’s rule-based intelligence.
Applied today, this view would assume that our current AI might get to consciousness eventually, given enough firepower, and if we could program in the right mechanisms. Seeing brains as computers has not brought us any closer to understanding consciousness, though. It is becoming clear that organisms, being semantic rather than syntactical systems, do not run computational software disembodied from physical interaction with the world.
Anil Seth, one of the foremost neuroscientists working on the problem of consciousness, argues that consciousness might be exclusive to living systems, if not present in all — a view called “biological naturalism.” While living systems are self-producing (autopoietic), technologies produce something other than themselves (allopoietic). Seth argues that consciousness may emerge only from this self-generating, self-maintaining activity of life.
Organisms continually compare their predictions about their own states with the sensory information they collect, experiencing “the world, and the self, with, through, and because of our living bodies,” as Seth describes. The state of sentience can therefore only be organically embodied.
Seth suggests that purely computational systems — disembodied and non-living — are highly unlikely to achieve consciousness without fundamentally transforming into systems that mirror the autopoietic nature of organisms. Might embodied AIs, though, combined with living systems, become meaning-makers with agency?
Even if this did happen, we wouldn’t necessarily know, because the consciousness of such systems would probably be quite opaque to us. Imagine researchers in this hypothetical future working with xeno-computer hybrids that might have some higher level of awareness. They would face much the same problems as the scientists today who are trying to understand the subjective experiences of non-neuronal organisms.
Those organisms already fulfill the criteria for minimal consciousness far better than AI now does, though this is not easy to prove. Like the inscrutable activities of a plant, an artificial mind would not necessarily have an interface that we could understand.
“Expanding the science of the mind will require not only new experimental tools, but also new epistemologies capable of grasping intelligences that may not look anything like our own.”
Anaesthesia offers a window into the presence of awareness. All living things, from slime molds to tigers, respond to anaesthesia, as Calvo regularly demonstrates to live audiences.
He presents a Mimosa pudica plant and brushes its leaves so that everyone can see the delicate defensive folding response. He also points out the accompanying spike in the electrophysiology monitor, showing the electrical signal that triggered the movement. He then covers the plant with a bell jar, adding a pad soaked in anaesthetic. After an hour, he invites an audience member to come up to stroke the plant’s leaves again. The audience murmurs in surprise as everyone sees that the mimosa plant is unresponsive, both physically and electrically.
Anaesthesia is often used as a practical probe for consciousness in animals, and it might serve for other systems too, if we can take wakefulness and sleep-like states as hallmarks of sentient organisms.
In humans, general anaesthesia causes a state of unconsciousness that is far more profound and less responsive than natural sleep. In bacteria, anaesthetics halt cell division, membrane exchanges, responses to stimuli and quorum sensing: all the things that bacteria do. While we know that anaesthesia disrupts communications between cells, often through effects on their membranes and ion channels, we don’t know exactly how it works at the subjective level, closing off some state of wakefulness that is akin to sentience. All we can see is that it works across the gamut of cellular life.
If synthetic biohybrids are created in the future, their organic components would be susceptible to anaesthesia. Biomachines that could be put to sleep with isoflurane might be an unwelcome challenge to the science fiction fantasies of invincible conscious cyborgs.
The Problem Of Interpretation

Demonstrating a system’s consciousness by accessing the experience itself is impossible. We cannot, as philosopher Thomas Nagel pointed out in 1974, know what it is like to be a bat — much less a Venus flytrap or a biomachine. We can only try to identify the kinds of behavioral phenomena that may point to sentient experience. We need to be able to observe the loops of meaning-making and agency between these systems and the worlds they inhabit.
A challenge with non-neuronal organisms is that their semantic world is drastically different from ours. We find it so difficult to conceive of plant and microbial sentience partly because their meaning-making is fundamentally alien to us.
The semantics of an independent, conscious AI would be just as opaque. Its interactions with us would no longer be programmed for our benefit.
Unlike the LLMs that are programmed to shift us into Dennett’s intentional stance, truly conscious AIs would likely be detached from us as their designers. Their meaning-making would be distinct from ours, perhaps even immiscible with it. The same problem, semantic opacity, faces both the researchers of such future AIs and those studying minimal intelligence systems like plants or microbes.
The paradigms of cognitive science falter when it comes to non-neuronal organisms partly for this reason. They were developed for the semantic worlds of neuronal organisms, especially humans. Trace conditioning, for example, is a form of learning in which an organism connects two stimuli that are separated by a time gap. It is widely used in animal cognition research as a sign of sophisticated processing.
It may even indicate awareness, as the subject must hold on to some kind of representation of an initial stimulus in order to then associate it with a second stimulus. Bees, for example, can learn that a new smell means a reward is coming, even when the two things are presented to them with some seconds in between. More experienced bees are able to learn over longer gaps than bees that are new to the task.
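One standard way to think about this kind of learning is as a decaying “memory trace” of the first stimulus. The Python sketch below uses that textbook idea, with made-up decay and learning-rate values of my own choosing rather than anything from the bee studies or the MINT Lab’s protocol, simply to show why longer gaps make the association harder to form.

```python
def trace_conditioning(gap, trials=10, decay=0.7, learning_rate=0.5):
    """Toy model: the first stimulus leaves a memory trace that fades each
    time step; learning at reward time is proportional to what remains."""
    association = 0.0
    for _ in range(trials):
        trace = 1.0           # conditioned stimulus (e.g., an odor) is presented
        for _ in range(gap):  # the stimulus-free gap erodes the trace
            trace *= decay
        # The reward arrives; the association strengthens by what is left of the trace.
        association += learning_rate * trace * (1.0 - association)
    return association

for gap in (1, 3, 6):
    print(f"gap of {gap} steps -> association strength {trace_conditioning(gap):.2f}")
# Longer gaps leave a weaker trace at reward time, so the association forms more slowly.
```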
Demonstrating trace conditioning in plants would be a radical finding. It would suggest that plants can encode, store and retrieve information across time — without neurons. This would challenge a central dogma of cognitive science: that, beyond simple habituation, memory and learning require neurons.
In 2016, this tantalizing possibility spread like wildfire across the media, when ecologist Monica Gagliano and her team published findings of pea plants learning a simple association between a light and air movement. The results have been hotly contested and nobody since has been able to replicate them.
Now, Calvo and the MINT Lab’s new project, “Behavioral Evidence for Plant Consciousness,” aims to revisit the question with more rigorous, plant-centered methods. They plan to go further still, by testing whether pea plants can meet the far more demanding criterion of trace conditioning.
“Might embodied AIs, combined with living systems, become meaning-makers with agency?”
Adapting such protocols for plants is no small task. What counts as a meaningful stimulus for a pea plant? How do you detect its response to the conditioning? And how would you do the same for an artificial system, for that matter? Calvo’s team is pouring significant effort into answering these questions in order to design experiments that test what has so far been untestable.
Psychologist James Gibson’s theory of affordances, an alternative to traditional cognitive frameworks, is a valuable tool here. Rather than viewing perception as a process of constructing mental models — which is hard to envisage a pea plant doing — affordance theory sees organisms as directly perceiving the possibilities offered by their environments.
A tree trunk might afford a potential meal to a woodpecker, a permanent vertical support for a vine or a nesting site for a beetle. This perspective is particularly useful when applied to systems unlike ourselves: plants, microbes or biohybrid machines. Assessing what their environments afford them — and how they act upon those affordances — gives us the conceptual scaffolding for designing experiments with minds that do not resemble ours at all.
A Radical Task

Some day in the future, if a cellular-synthetic hybrid were to develop internal awareness — like a Frankenstein’s monster stitched together from organismal and silicon parts, then electro-shocked into animation — it would be far more like us than the chatbots many of us are currently forming dependent relationships with.
Ironically, though, its agency would make it seem infinitely more alien. Along with the plants and microorganisms with which we share the fundamental autopoietic, agential, meaning-making nature of being alive, such a being would inhabit its own world, generated by its peculiar physicality, senses and needs. We’re a long way from being able to understand what this really means.
Consciousness may or may not be restricted to carbon-based life. Synthetic minds may or may not emerge. But in imagining this future, especially given our present complex relationship with AI, we can see more clearly the limitations in our understanding of other organisms’ minds — and what we might gain by learning to study them properly.
Experimental design must become an exercise in radical empathy. The researchers who map the affordances that shape each being’s interactions with the world, and who adapt our scientific paradigms to accommodate them, are explorers of unknown territories. They are reworking the brittle frameworks of cognitive science to handle phenomena they were never built to explain.
Whether or not plants can learn through trace conditioning, or anthrobots ever gain sentience, this exploration will reshape the ethics of our interactions with other beings. It will help us to step outside our skull-bound awareness and into the radically different worlds that exist all around us — making our own far stranger and larger than we have imagined.