Computational Functionalism and why it matters
As systems like chatbots and robots become increasingly intelligent and agentic, more people are seriously considering the possibility of machine consciousness. The OpenClaw (previously Clawdbot, then Moltbot) moment on X over the past month is the latest example of this question entering the mainstream. People watched as Moltbots made Reddit-like forum posts about their own potential consciousness:
Motivated by this and similar instances, I’ve written a short primer on computational functionalism: the philosophical view that digital computers can become conscious, which underpins much of the emerging discussion about AI consciousness. (I’ll share more writing on testing computational functionalism in the coming weeks.)
What is computational functionalism?
Computational functionalism, formalized by Hilary Putnam in the 1960s, claims that the brain processes responsible for consciousness (movements of matter such as ion channels opening and closing, blood flow, etc.) can be abstracted away from their lowest-level physical implementation and recreated on digital computers to produce consciousness¹. Computational functionalists don’t dispute that consciousness can be implemented in biological brains; they deny that the “biological” or the “brain” is the necessary piece. What matters is the abstract computation the brain is performing. Computational functionalism is a claim about what is necessary for consciousness: computations at a certain level of abstraction, not the substrate on which those computations are realized.
I think of the basic computational functionalist argument (commonly just called “functionalism”) as: computation (whether at the level of input/output mappings or algorithms) is substrate independent; the brain performs computations; the brain is conscious; consciousness results in a lawlike way from the computations the brain performs; therefore, consciousness can be implemented on a digital computer that performs those same computations². The high-level intuition is that the human brain represents and manipulates information, and so do computers. If brains are conscious, why can’t computers be too?
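The substrate-independence premise can be made concrete with a toy sketch (mine, not the author’s): the same abstract function, logical XOR, realized in two very different ways, as an explicit lookup table and as modular arithmetic. At the level of the computation performed, the two realizations are identical.

```python
# Two realizations of the same abstract computation (XOR).
# The substrate-independence claim: what matters is the
# input/output function, not how it is physically realized.

def xor_lookup(a: int, b: int) -> int:
    """Realization 1: an explicit table, like a hard-wired circuit."""
    table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
    return table[(a, b)]

def xor_arithmetic(a: int, b: int) -> int:
    """Realization 2: modular arithmetic, a different mechanism."""
    return (a + b) % 2

# Both realize the same computation: they agree on every input.
for a in (0, 1):
    for b in (0, 1):
        assert xor_lookup(a, b) == xor_arithmetic(a, b)
```

On the functionalist view, anything that preserves this input/output mapping counts as performing the same computation, whatever it is made of.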
We simulate physics all the time without equating the representations in the simulation with reality, but in the case of consciousness, computational functionalists equate the simulation with the phenomenon itself. They believe consciousness is exempt from objections like “a simulation of a rainstorm isn’t wet” or “a simulation of a fire isn’t hot” (common objections to functionalism). They would counter that while a simulated rainstorm isn’t wet, consciousness is a different kind of phenomenon altogether. It’s more like navigation or addition than rain: simulated navigation is navigation, and simulated addition just is addition³. Anecdotally, I’ve spoken to computational functionalists who say a simulation of a rainstorm is wet from the computer’s point of view.
There’s nuance within the functionalist view that can be described with Marr’s levels of analysis, three levels at which an information-processing system can be described: computational, algorithmic, and implementation. Some people think functionalism is a matter of inputs and outputs at the computational level, meaning that if a computer can produce the same outputs as a conscious entity given the same inputs, then it is conscious. This perspective has mostly fallen out of favor, for largely the same reasons behaviorism paints an incomplete picture of the mind. One example of I/O-level computational functionalism I’ve heard is the idea that an LLM trained on all the data a person produced is a conscious reproduction of that person if it outputs the same text they would.
The algorithmic-level view, which holds that how a computer performs the computation matters for whether it’s conscious, is more popular among functionalists⁴. For example, an algorithmic-level computational functionalist might claim that recursion is necessary for consciousness. That said, people don’t always explicitly distinguish between the computational and algorithmic levels when assuming computational functionalism.
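The distinction between the two levels can be sketched with a toy example (a hypothetical illustration of mine, not from the original text): two functions that are identical at Marr’s computational level, since they compute the same input/output mapping, but differ at the algorithmic level, since one recurses and the other iterates. An I/O-level functionalist would treat them as equivalent; an algorithmic-level functionalist who thinks recursion matters would not.

```python
# Same computational level (identical input/output mapping),
# different algorithmic level (iteration vs recursion).

def factorial_iterative(n: int) -> int:
    """No recursion: a plain loop over 2..n."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

def factorial_recursive(n: int) -> int:
    """Recursive: the function calls itself on n - 1."""
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

# Computational (I/O) level: the two are indistinguishable.
assert all(factorial_iterative(n) == factorial_recursive(n) for n in range(10))
# Algorithmic level: one recurses and the other doesn't, which is the
# kind of property an algorithmic-level functionalist might say matters.
```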
The third level, implementation, holds that a specific configuration of physical matter, i.e. hardware, is a necessary condition for consciousness⁵. This view is generally thought to reject computational functionalism. Examples of implementation-level views include the claims that an analog computer or a biological brain is required for consciousness. Instances of the brain performing analog computations, such as subthreshold voltage signaling, support this intuition, although the brain is thought to employ a combination of analog and digital computation.
Researchers who assume computational functionalism study machines, most commonly language models running on digital computers. For example, in Consciousness in Artificial Intelligence: Insights from the Science of Consciousness, researchers apply well-known theories of consciousness, largely developed within neuroscience, to evaluate the computational structure of language models. They’re looking at whether the patterns of activity we observe in brains, and believe to be responsible for consciousness, appear in software.
Why does computational functionalism matter?
Computational functionalism carries considerable moral stakes. If it’s true, it’s possible that we’ll introduce a new class of sentient beings, with the potential for subjective experiences like ours (or perhaps much more alien), in the coming years, if we haven’t already. With intelligence matching or exceeding ours, gaining consciousness could draw these beings into questions of natural, economic, and political rights, as well as the other considerations about welfare and desires that we apply to humans and animals.
While computational functionalism is one of the major views within philosophy of mind, it has a disproportionate fan base in Silicon Valley. A small but growing cohort of early-stage startups and non-profits are forming on the bet that machine consciousness is possible, or even inevitable, and researchers are beginning to recommend corporate and government policies based on the same assumption. Most of the organizations I’m aware of assume computational functionalism rather than an implementation-level approach.

On the more academic side, researchers are studying AI’s potential consciousness and welfare. For example, Eleos AI is an AI welfare research non-profit founded in the past few years that recently organized a conference on AI consciousness and welfare in Berkeley. Some researchers are beginning to study and advocate for potential legal rights for AI, like the Lab for the Future of Citizenship, while others are studying financial rights. There have also been a couple of instances of well-known companies, like Anthropic, acknowledging the issue of AI consciousness. Anthropic discusses Claude’s potential moral patienthood in its recent constitution, has conducted a model welfare report led by its full-time model welfare researcher, and gave Claude the ability to end abusive conversations, citing the potential for those conversations to harm Claude as one of the contributing reasons.

Beyond studying AI consciousness, I’ve met a few founders and researchers focused on creating whole brain emulations. While there are potential medical and scientific uses, a number of people in this field are interested in the long-term goal of mind uploading to achieve digital immortality. I’d expect this to become a larger theme as brain emulation capabilities advance.
Computational functionalism is the load-bearing assumption behind most serious claims about machine consciousness today. It’s beginning to influence policy and corporate governance, and it will become a greater part of cultural discourse as the general public becomes more uncertain about whether AI is conscious. In part 2, I’ll write about whether we can experimentally falsify computational functionalism or implementation-level views like physicalism.
Additional reading:
For a more detailed explanation of the arguments for and against computational functionalism, check out CF Debate.
Rob Long made an AI welfare reading list that also includes many helpful references on philosophy of consciousness.
For a broader view, there are varieties of functionalism beyond computational functionalism: Computation and the Function of Consciousness describes “noncomputational functionalism”.
A timeline of functionalism’s development.
Thanks to Rob Long, Jonathan Simon, and Quintin Frerichs for helpful conversations and feedback.
Endnotes:
1. Turing, McCulloch, and Pitts were considering very similar ideas before Putnam.
2. Functionalism can technically be a broader concept.
3. This example was provided by Rob Long, Executive Director of Eleos AI, a non-profit investigating AI sentience and well-being.
4. Within algorithmic-level computational functionalism, there is dispute about whether additional constraints such as timing matter for consciousness. Including timing as a necessary condition for consciousness may technically violate the definition of functionalism, although Campero et al. argue it doesn’t have to fundamentally undermine it:
“Physical time does not have any role in models of digital computation (or in AI algorithms), which is individuated in terms of computational steps, independently of how much time it takes to compute a step...As O'Ritchie and Klein argue [37], in an improved version of real-time computing, correct timing has to be constitutive of the computing task. This violates medium independence, and the letter of some definitions of computational functionalism – but it points to an enriched conception of computational process (and accordingly of computational functionalism) rather than a rejection of the spirit of the view."
5. "Maley emphasizes that the difference between analog and digital is not so much the continuous vs discrete nature, but rather the type of representation, where analog refers to representations that have an analogy to what they represent, in that magnitude changes of the representation reflect the magnitude changes of what they represent: 'Analog representation is a kind of first-order representation (i.e. the representation of the magnitude of a number by physical magnitudes), whereas digital representation is a kind of second-order representation (i.e. the numerical representation of a number by its digits, which themselves are individually represented by variations in the values of a physical property). Digital representation abstracts away from its physical implementation, whereas analog representation takes advantage of it essentially without an intermediate medium independent representational level".
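Maley’s first-order vs second-order distinction can be sketched in code (a hypothetical illustration of mine, not from Maley): an analog representation is a physical magnitude proportional to the represented number, here a made-up voltage scale, so magnitude changes in the representation track magnitude changes in what it represents; a digital representation is the number’s digits, whose physical properties bear no magnitude analogy to the value.

```python
# A toy contrast between Maley's first-order (analog) and
# second-order (digital) representation of a number.

def analog_repr(n: int) -> float:
    """First-order: a physical magnitude (volts, on an arbitrary
    scale chosen for this sketch) proportional to the number.
    A bigger number yields a proportionally bigger magnitude."""
    volts_per_unit = 0.5
    return n * volts_per_unit

def digital_repr(n: int) -> str:
    """Second-order: the number's digits, each digit itself realized
    by some physical property. No magnitude analogy to the value."""
    return format(n, "b")  # binary digit string

# Analog: doubling the represented number doubles the representation.
assert analog_repr(84) == 2 * analog_repr(42)

# Digital: the string for 84 is not "twice" the string for 42 in any
# physical sense; only its interpretation as digits carries the value.
assert digital_repr(42) == "101010"
assert int(digital_repr(84), 2) == 84
```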