Can physicalism and functionalism be falsified?
Published May 25, 2025
Short version
I wrote this because I'm interested in how we reason about consciousness. I'm skeptical that we'll be able to falsify physicalism or functionalism. I'm not saying that either theory is true or false, but rather that you wouldn't know.
We reason about and test theories of consciousness in physicalist contexts without testing the assumptions of physicalism itself, because we use knowledge of our own consciousness to infer the consciousness of other humans.
It's possible that, like our known physical laws, whatever physical process produces consciousness can be simulated. That would make physicalism unfalsifiable, meaning there is no experiment that can determine whether it's true or false, and would take functionalism with it. An implication of this outcome is that we won't gain certainty about whether digital systems are conscious.
Longer version
Outline
Physicalism and functionalism
How do we reason about consciousness?
Reasoning about physicalist theories of consciousness
Reasoning about functionalist theories of consciousness
Physicalism and functionalism
Physicalism and functionalism are two main camps in the philosophy of mind. Physicalists believe consciousness arises from strictly physical phenomena, e.g. properties of forces, fields, and matter. In principle, physicalism doesn't claim that consciousness can only arise from biological systems, but most physicalists likely think that biological entities are the only conscious entities we know of.
Functionalists believe consciousness arises from the causal structure of a system: consciousness is a matter of information or computation. Functionalists don't dispute that consciousness can be implemented in biological brains, but the biology and the brain aren't the necessary pieces: consciousness could also arise from running the right software program on a digital computer.
As AI systems like chatbots, robots, and AI girlfriends display more human-like behaviors, e.g. emotions, goal-directed planning, and preferences, more people suspect that these systems may be conscious and thus morally relevant. The functionalist perspective largely gives this possibility technical merit. Over time, physicalist positions will likely contribute increasingly to speculation about machine consciousness, e.g. in cases of brain-inspired hardware and embodied robots.
How do we reason about consciousness?
Our ability to reason about consciousness arises from the fact that we think we are conscious. We start from that truth claim and then move tentatively outward: most people assume they are conscious, and the self-knowledge of their subjective experience leads them to assume that other people are conscious too. Other people look like me, have the same general neuroanatomy, behave in many of the same ways, and our brains light up similarly when studied. This line of reasoning extends to evaluating the potential consciousness of animals. We're generally confident that gorillas are conscious because they sort of look and act like us, but the verdict on the consciousness of ants or bacteria is less unanimous.
In simple terms, this is how the science of consciousness works. Doctors and scientists who study topics like perception, anesthesia, or disorders of consciousness operate on the assumption that other humans are conscious, and then compare patients against the literature using measures like neuroanatomy, brain data, and behavior.
Reasoning about physicalist theories of consciousness
Because we assume our own consciousness, and by extension other humans', we can falsify theories of consciousness in “physicalist” settings without having to falsify physicalism itself.
Taking our own consciousness as true and using that to reason about the consciousness of other humans and animals is not claiming that physicalism is true. I can do this and still suspect that digital computers could be conscious: there may be a physical principle that describes consciousness in human brains and a corresponding software implementation.
Some counterargue that it’s not fair for us to assume we are conscious or to assume the consciousness of other humans. I don’t really care about this argument.
If we identify some physical process that is necessary for consciousness and that cannot be simulated, then physicalism could be validated and functionalism could be falsified. This is Roger Penrose's whole bit, if you're familiar with his and Stuart Hameroff's work on microtubules. For example, if we find that some quantum coherence X in microtubules is necessary for experience, and X can never be simulated on digital computers, then simulating a brain digitally won't generate consciousness (this would be the easy way out).
But it's possible that, like our known physical laws, whatever physical process produces consciousness can be simulated, making physicalism unfalsifiable, meaning there is no experiment that can determine whether it's true or false, and taking functionalism with it.
Reasoning about functionalist theories of consciousness
In the case where you can simulate the physical process that causes consciousness, I'd claim you cannot falsify functionalism, nor do you have scientific grounds to falsify theories of consciousness in a functionalist context. I'm not saying that in this case functionalism is true or false; I'm saying that you wouldn't know.
For example, if you have an algorithm that 100% accurately predicts whether someone is awake based on brain data, and you apply that algorithm to a simulated brain, you'll only learn whether it can accurately predict the correspondence between some input and the output; it won't challenge any assumptions of functionalism.
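To make the point concrete, here is a minimal sketch of that hypothetical. Everything in it is invented for illustration (the single "signal power" feature, the threshold, the labels); it encodes no real neuroscience. It just shows that a classifier matching a simulation's own labels establishes an input-output correspondence and nothing more:

```python
# Hypothetical sketch: a "perfect" wakefulness classifier applied to a
# simulated brain. The feature, threshold, and labels are all made up.

def predict_awake(signal_power: float) -> bool:
    # Stand-in for an algorithm that 100% accurately predicts wakefulness
    # from brain data, reduced here to one invented feature.
    return signal_power > 0.5

# Recordings from a (hypothetical) simulated brain, labeled by the
# simulation's own awake/asleep state.
simulated_recordings = [
    (0.9, True),   # simulated-awake
    (0.8, True),
    (0.2, False),  # simulated-asleep
    (0.1, False),
]

# The classifier matches every label...
matches = all(predict_awake(x) == awake for x, awake in simulated_recordings)
print(matches)  # True

# ...but all this establishes is that inputs map to the expected outputs.
# Whether the simulated brain is conscious is untouched by the result.
```

The classifier "succeeding" here only validates the correspondence between brain-data inputs and wakefulness labels, which is exactly why the experiment can't challenge functionalism's assumptions.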
Maybe we'll find a physical fingerprint of consciousness that cannot be simulated, but if not, I'm skeptical that we can gain certainty in settings where we apply functionalist theories of consciousness, such as digital consciousness, because I think it's fairly probable that functionalism can't be falsified.