Can physicalism and functionalism be falsified? 


I wrote this because I'm interested in how we reason about consciousness, and I'm skeptical that we’ll be able to falsify physicalism or computational functionalism. This doesn’t mean that either theory is true or false, but rather that it’s not a resolvable question.


We reason about and test theories of consciousness in physicalist contexts without testing the assumptions of physicalism itself, because we use knowledge of our own consciousness to infer the consciousness of other humans.


It’s possible that, like the laws of classical physics, whatever physical process produces consciousness can be simulated. That would make physicalism unfalsifiable, meaning that there’s no experiment that can determine whether it’s true or false, and it would take computational functionalism with it. An implication of this outcome is that we won’t gain certainty about whether digital systems are conscious.


Outline

- Physicalism and computational functionalism
- How do we reason about consciousness?
- Reasoning about physicalist theories of consciousness
- Reasoning about functionalist theories of consciousness
- What about the neural replacement thought experiment? (bonus question)

Physicalism and computational functionalism
Physicalism and functionalism are two of the main camps in philosophy of mind. Physicalists believe consciousness arises from strictly physical phenomena, e.g. the properties of forces, fields, and matter. In principle, physicalism doesn’t claim that consciousness can only arise from biological systems, but most physicalists likely think that biological entities are the only conscious entities we know of¹.

Computational functionalism, formalized by Hilary Putnam in the 1960s, claims that the brain processes responsible for consciousness, which we observe as movements of matter (ion channels opening and closing, blood flow, etc.), can be abstracted away from their lowest-level physical implementation and recreated on digital computers to produce consciousness. Computational functionalists don’t dispute that consciousness can be implemented in biological brains, but the “biological” or the “brains” aren’t the necessary pieces: consciousness could also arise from running the right software program on a digital computer.


As AI systems like chatbots, robots, and AI girlfriends display more human-like behaviors, e.g. emotions, goals and planning, and preferences, more people suspect that these systems may be conscious and thus morally relevant. The computational functionalist perspective largely gives this possibility technical merit. Over time, physicalist positions will likely contribute more to speculation about machine consciousness, e.g. in cases of brain-inspired hardware and embodied robots.


How do we reason about consciousness? 

Our ability to reason about consciousness arises from the fact that we think we are conscious. We start from that truth claim and then move tentatively outward: most people assume they are conscious, and the self-knowledge of their subjective experience leads them to assume that other people are conscious too. Other people look like me, have the same general neuroanatomy, behave in many of the same ways, and our brains light up similarly when studied. This line of reasoning extends to evaluating the potential consciousness of animals. We’re generally confident that gorillas are conscious because they sort of look and act like us, but the verdict on the consciousness of ants or bacteria is far less settled.


In simple terms, this is how the science of consciousness works. Doctors and scientists who study topics like perception, anesthesia, or disorders of consciousness operate on the assumption that other humans are conscious, and then compare patients against the literature using measures like neuroanatomy, brain data, and behavior.


Reasoning about physicalist theories of consciousness 

Because we assume our own consciousness, and by extension other humans', we can falsify theories of consciousness in “physicalist” settings without having to falsify physicalism itself. 


Taking our own consciousness as true and using that to reason about the consciousness of other humans and animals is not claiming that physicalism is true. I can do this and still suspect that digital computers could be conscious; there may be a physical principle that describes consciousness in human brains and a corresponding computational implementation².


If we identify some physical process that is necessary for consciousness and cannot be simulated, then physicalism could be validated and functionalism could be falsified. This is Roger Penrose’s whole bit, if you’re familiar with his and Stuart Hameroff’s work on microtubules. For example, if we find that some quantum coherence X in microtubules is necessary for experience, and X can never be simulated on digital computers, then simulating a brain digitally won’t generate consciousness (this would be the easy way out).


But it’s possible, as with our known classical physical laws, that whatever physical process produces consciousness can be simulated, making physicalism unfalsifiable, meaning that there’s no experiment that can determine whether it’s true or false, and taking functionalism with it.


Reasoning about functionalist theories of consciousness 

In the case where you can simulate the physical process that causes consciousness, I’d claim you cannot falsify functionalism, nor do you have scientific grounds for falsifying theories of consciousness in a functionalist context. I’m not saying that in this case functionalism is true or false, but rather that you wouldn’t know.


For example, if you have an algorithm that predicts with 100% accuracy whether someone is awake based on brain data, and you apply it to a simulated brain, you’ll only learn whether your algorithm captures the correspondence between some input and some output; it won’t challenge any assumptions of functionalism. Or if you find that recursive neural activity is necessary for wakefulness in biological brains and then locate recursive neural activity in a simulation of an awake brain, you will show that your simulation is accurate, not that functionalism is true. A toy sketch of the first case follows below.
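To make the first example concrete, here’s a minimal sketch (hypothetical code with synthetic stand-in data; none of these features or labels come from a real study): a “wakefulness” classifier is trained on features from biological recordings and then applied to features extracted from a simulated brain. The exercise can only tell us whether the input-output mapping transfers to the simulation; nothing in it bears on whether the simulation is conscious.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical sketch with synthetic stand-in data.
rng = np.random.default_rng(0)

# Stand-in for features extracted from biological brain recordings
# (e.g., band power per channel), labeled 1 = awake, 0 = asleep.
bio_features = rng.normal(size=(1000, 16))
bio_labels = (bio_features[:, 0] > 0).astype(int)

# Train the "wakefulness" classifier on biological data.
clf = LogisticRegression().fit(bio_features, bio_labels)

# Features extracted the same way from a simulated "awake" brain.
sim_features = rng.normal(loc=0.5, size=(200, 16))
predictions = clf.predict(sim_features)

# If most predictions say "awake," we've only shown that the simulation
# reproduces the input-output statistics the classifier was trained on.
# That says nothing about whether the simulation is conscious.
print(f"Fraction classified as awake: {predictions.mean():.2f}")
```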


Maybe we’ll find a physical fingerprint of consciousness that cannot be simulated, but if not, I’m skeptical that we can gain certainty in settings where we apply functionalist theories of consciousness, such as assessing digital consciousness, because I think it’s fairly probable that functionalism can’t be falsified.


What about the neural replacement thought experiment? (bonus question) 

The neural replacement thought experiment proposes gradually replacing the biological neurons in a brain with functionally identical silicon neurons. During this process, the subject is asked to report on their experience. This thought experiment is often used to argue in favor of computational functionalism and the possibility of digital consciousness. David Chalmers argues that if the subject’s self-report abilities remain intact, it’s implausible that they could have a dramatically diminished experience while continuing to report otherwise. For example, imagine a subject watching a film and describing what they’re seeing, hearing, and feeling as their neurons are gradually replaced. Chalmers finds it hard to imagine that the subject could continue to describe the richness of the visual scene, the varying sounds, and the emotions they experience all while their actual conscious experience fades. He concludes that experience must be preserved and that functional organization is sufficient for consciousness, validating computational functionalism.

This experiment would be incredibly difficult to actually carry out. It was introduced to make a point, but one of the reasons people are interested in it is that they think it offers a potential path to falsifying computational functionalism and physicalism. That said, I’m not very convinced by this argument. If the replaced neurons are functionally identical, I don’t place high trust in the subject’s self-report. The silicon neurons would be sending the same information to other neurons as the biological ones would. For example, neurons that respond to variations in some property of an object will still produce the same activity whether they are biological or silicon. So while it’s unsettling and counterintuitive, I think it’s plausible that the subject could continue to produce the same self-reports even as their experience fades. Again, this is not a claim that computational functionalism is false, but that it may not be falsifiable.


Eric Schwitzgebel raises a similar point in An Objection to Chalmers's Fading Qualia Argument: “Whatever cognitive processes subserve the introspective reporting are going to generate the same signals -- including misleading signals, if experience is absent -- as they would in the case where experience is present and accurately reported. Thus, unreliability would simply be what we should expect.”³.


Thanks to Jack Lindsey for helpful conversations and feedback.


Endnotes