FOUR CHALLENGES TO FUNCTIONALISM
Suppose that artificial intelligence has advanced to the point where a program can be written which will allow an android with a 'brain' consisting of a computer running the program to behave actually and counterfactually much as a normal human does. It does not matter for the example how this programming is done; to avoid confusion about the nature of the program (which we will discuss in a later example), let us suppose that the program mimics the operation of a human brain at a neuron by neuron level. Neurons are essentially 'input-output' devices made from organic matter, the overall input-output characteristics of the brain being determined by how the primitive neuronal devices are assembled. Hence, this supposition amounts to having the program reflect precisely the input-output nature of each neuron and how they are connected one to another.
The next step in the process of constructing the example is to note that it won't matter, or anyway can hardly matter from a functionalist perspective, if the computer running this program is in fact outside the android's body, connected by a two-way radio link to it. The final step gives us the China brain. Suppose that instead of the program being run on an external computer made of silicon chips, the entire population of China is enlisted to run the simulation. As the program mimics the way the brain operates at the neuronal level, this can be done by assigning each Chinese citizen the job of just one neuron. They have, let's suppose, the kind of phones that tell you what number has called you. When certain numbers, or combinations of numbers, ring in, they have to dial specified other numbers. Each citizen is given a precise set of instructions about what to do that ensures that what each does exactly models what their assigned neuron does, and the inputs to and outputs from their phones are connected up so as to run the program. Also, the initial inputs to the China brain come from the environment in much the same way as the inputs to us do, and the final outputs go to the limbs and head of the android via the radio link in such a way that its actual and counterfactual behaviour is much as ours is. Thus, the android will behave in the various situations that confront it very much as we do, despite the fact that the processing of the environmental inputs into final behavioural outputs goes via a highly organized set of Chinese citizens rather than a brain.
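To fix ideas, here is a minimal sketch, in Python, of the kind of relay protocol the example imagines, with each citizen treated as a simple threshold device. The class names, the threshold rule and the call-routing details are our illustrative assumptions, not part of the original thought experiment.

# A minimal sketch of the China brain protocol. The threshold rule and the
# numbering scheme are our own illustrative assumptions.

class Citizen:
    """One citizen stands in for one neuron: a pure input-output device."""

    def __init__(self, threshold, numbers_to_dial):
        self.threshold = threshold              # how many incoming calls make this 'neuron' fire
        self.numbers_to_dial = numbers_to_dial  # whom to ring when it fires
        self.incoming = 0

    def step(self):
        """Follow the instruction sheet, then reset; return the numbers dialled this round."""
        dialled = self.numbers_to_dial if self.incoming >= self.threshold else []
        self.incoming = 0
        return dialled


def run_round(citizens, sensory_input, motor_numbers):
    """One cycle: sensory calls come in, everyone follows their instructions,
    and calls addressed to motor lines go out to the android over the radio link."""
    for number in sensory_input:
        citizens[number].incoming += 1
    dialled = []
    for citizen in citizens:
        dialled.extend(citizen.step())
    motor_output = [n for n in dialled if n in motor_numbers]
    for n in dialled:
        if n not in motor_numbers:
            citizens[n].incoming += 1       # delivered for the next round
    return motor_output


# Two 'citizens': number 0 fires on any call and rings number 1; number 1,
# when it fires, rings the android's arm (motor line 99).
citizens = [Citizen(1, [1]), Citizen(1, [99])]
print(run_round(citizens, sensory_input=[0], motor_numbers={99}))   # []
print(run_round(citizens, sensory_input=[], motor_numbers={99}))    # [99]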
This is certainly not a realistic fantasy. The population of China is not large enough; the whole process could never take place fast enough; the citizens would get bored and careless; and anyway the program used to construct the example does not exist and never will (working at the neuronal level is ridiculously fine-grained). All the same it does seem clearly intelligible, and if it is intelligible, it is fair to ask for an answer to the question whether the system consisting of the robot plus the population of China in the imagined case has mental states like ours. Many have a strong intuition that it does not. If they are right, functionalism of just about any variety must be false. For the system is functionally very like us. Not only is it like us in all the functional roles seen as crucial by the common-sense functionalist, it is like us in just about every functional respect. Functionally, it is us; the difference lies in the dramatic difference in how the functional roles are realized, and that difference counts for nothing as far as mental nature is concerned according to functionalists.
Denying the intuition

We think, however, that the functionalist can reasonably deny the intuition. The source of the intuition that the system consisting of robot plus China brain lacks mental states like ours seems to be the fact that it would be so very much bigger than we are. We cannot imagine 'seeing' it as a cohesive parcel of matter. We cannot see, that is to say, the forest for the trees. A highly intelligent microbe-sized being moving through our flesh and blood brains might have the same problem. It would see a whole mass of widely spaced entities interacting with each other in a way that made no sense to it, that formed no intelligible overall pattern from its perspective. The philosophers among these tiny beings might maintain with some vigour that there could be no intelligence here. All that is happening is an inscrutable sending back and forth of simple signals. They would be wrong. We think that the functionalist can fairly say that those who deny mentality in the China brain example are making the same mistake.
Consciousness

Before we leave the China brain example, we should note two important points about its role in the literature. First, it is sometimes directed simply to the question of whether functionalism can account for consciousness. In this manifestation it is granted that the China brain has beliefs and desires (after all, the robot will move in various ways in response to the environment and thereby make changes to it of just the kind we associate with purposive, informed behaviour), but it is insisted that it is absurd to hold that it feels anything. We discuss the difficult question of feeling and consciousness in the next chapter. Our concern in this chapter will be restricted to challenges for functionalism about mental states like belief and desire, and mental traits like being intelligent.
Connection to the environment

Secondly, sometimes the example is given in a version that omits the robot. But then the population of China is emulating, in some purely abstract way, the program in someone's brain with no obvious right way to connect the overall inputs and outputs with the environment. The case becomes essentially the same as the one we discussed when we considered the charge of excessive liberalism against certain machine functionalisms.
the robot and the radio. Lin, whose robot body is entering Kowloon, might believe she was entering Kowloon. Tex, in a laboratory in Dallas, might not even know that Kowloon exists, let alone believe he or anyone else is entering it.
We should say something quickly about a response a Searle-like figure might make to replies like this. He might think that it depends crucially on there being a system bigger than Tex that does much of the work. This, he thinks, is what makes it seem plausible that the system is distinct from Tex, and can thus understand something that Tex doesn't. He might ask us instead to imagine that Tex memorizes the book, and stores all the changing data in his head. We are supposed to think that in this case there is no plausible distinctness between Tex and Lin, so if one fails to understand Chinese, so does the other. Tex doesn't understand Chinese, so nor does Lin.
We do not think that this variation makes an important difference. What we would have here is two entities who share a brain. The idea that distinct individuals might share a brain is not enormously different from that which we have grown used to in discussions of multiple personality disorder. There is some difference here: Lin relies on Tex to do the calculations that constitute her mental states (so if he gets bored or ill, it's very bad news for her indeed).
A computer analogy

It may seem puzzling that Tex does all these calculations without knowing what it is he is doing. But in fact something like this is commonplace in computer science. When one computer does calculations that emulate the behaviour of a different machine, the emulated computer is said to be a virtual machine. You may have seen an Apple Macintosh computer which has a window which has the look and feel of a machine running Microsoft Windows. In fact, this is done by the Macintosh operating system directing that calculations be done at the binary level that emulate the behaviour of the Intel chip on which Windows runs. If you ask the Macintosh operating system what menus appear in its windows, it will be able to tell you. But if you ask it what menus appear in the Microsoft Windows lookalike window that it is supporting by emulating the Intel chip, it won't be able to tell you. It doesn't have information about that process at that level of abstraction. If you interrogate Windows, however, it can tell you about its windows. This is roughly analogous to asking Tex about Kowloon directly, and drawing a blank, but getting an answer when you interrogate Lin.
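The point about levels can be put in toy code. The sketch below (Python; the classes, the stored 'menus' and the use of JSON-encoded bytes are invented stand-ins, not an account of how any real emulator works) shows a host that can report its own menus but has no menu-level description of the guest it is emulating, while interrogating the guest at its own level yields an answer.

import json

class Host:
    """The emulating machine: it holds the guest's state only as uninterpreted bytes."""

    def __init__(self):
        self.own_menus = ["Apple", "File", "Edit"]
        # The 'virtual machine' exists here only as low-level data the host shuffles about.
        self.guest_memory = json.dumps({"menus": ["Start", "Programs", "Settings"]}).encode()

    def menus(self):
        # The host answers readily at its own level of abstraction.
        return self.own_menus

    def guest_menus(self):
        # At the host's level the guest is just bytes; there is no menu-level answer here.
        return None


class GuestView:
    """Interrogating the emulated machine at its own level of description."""

    def __init__(self, host):
        self.state = json.loads(host.guest_memory.decode())

    def menus(self):
        return self.state["menus"]


host = Host()
print(host.menus())               # ['Apple', 'File', 'Edit']          - like asking Tex directly
print(host.guest_menus())         # None                               - Tex draws a blank
print(GuestView(host).menus())    # ['Start', 'Programs', 'Settings']  - like interrogating Lin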
In sum, the Chinese room example starts out as one where both intuition and any plausible functionalism agree that there is no understanding of Chinese. We can add to the example to get one where plausible versions
Blockhead

Input-output functionalism

We noted in chapter 5 that the most popular version of empirical functionalism is exposed to the charge of chauvinism. It insists on an excessive degree of internal similarity to us before something counts as having a mind; beings might fail to have minds by virtue of having internal processors which are better than ours! Would it be right to take the extra step of holding that all that matters for having a mind is being such as to ensure the right connexion between external inputs and outputs? Something is an amplifier if it is such as to secure the right relationship between inputs and correspondingly bigger outputs, no matter how the job is done internally. What is done, not how it is done, is what counts. Should we say the same about the mind? Such a position can insist on specifying the inputs and outputs in arm's length terms, as is done in common-sense functionalism, and that what goes on inside matters to the extent that the job of appropriately mediating between the environmental inputs and behavioural outputs must be done by what is inside. But that would be the extent of its constraints. Such a view might be called input-output or stimulus-response functionalism. It takes on board what is right about behaviourism - that behaviour in situations is crucial - but remedies at least part of what is wrong with it.
Mental states are internal, causally efficacious states, pace behaviourism, but internal states that can be characterized fully as far as their psychological nature is concerned in terms of the behaviour that they do and would typically produce, or do or would produce if linked up in some natural way to the body. Input-output functionalism can be distinguished from supervenient behaviourism by the fact that the input-output functionalist insists that (most) of the states causally responsible for the behavioural profile must be internal. Suppose that Jane's normal-seeming behavioural profile is caused by puppeteers acting at a distance. The supervenient behaviourist might think she had mental states like ours; the input-output functionalist would not.
Input-output functionalism is false. A now famous example due to Ned Block shows that the way the job is done does matter. There are substantial internal constraints on being a thinker. The remainder of this chapter is concerned with describing his example - the Blockhead
Figure 7.1 Look-up tree for the start of a chess game. The boxes represent the possible opening moves by White; the circles the responses to each nominated by experts.
changes, and below each box would be the circle representing the best response for that move, given that rule change, according to the experts. This would make an already huge tree even bigger but does not introduce any new point of principle. In practice, of course, there is an insuperable problem with this plan for playing good chess. At each stage of a game of chess there are a large number of legal moves, and for each of these legal moves there are many legal responses. Writing out the look-up tree would in consequence involve what is known as a combinatorial explosion. Giving more than a line or two of the tree would require more distinct states than there are particles in the universe.
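A rough calculation brings the explosion out. The figures below are our own back-of-the-envelope assumptions (a branching factor of about 30 legal moves per position, and 10 to the power 80 as the usual rough estimate for particles in the observable universe), not numbers from the text.

# Back-of-the-envelope size of a full chess look-up tree.
# BRANCHING is an assumed, conventional figure for legal moves at a typical
# position; PARTICLES is a common rough estimate for the observable universe.

BRANCHING = 30
PARTICLES = 10 ** 80

def nodes_after(full_moves):
    """Leaf nodes in the tree after `full_moves` moves by each player (two plies per move)."""
    return BRANCHING ** (2 * full_moves)

for n in (5, 10, 20, 40):
    print(n, nodes_after(n), nodes_after(n) > PARTICLES)

# By about 28 full moves (roughly 55 plies) 30**plies already exceeds 10**80,
# so a tree covering even one ordinary-length game could not be written down.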
making clear sense of the possibility - Twin Earth - where water is not watery, and what is watery is not water. In the case of Blockhead we test the hypothesis that being behaviourally exactly alike someone intelligent is sufficient for being intelligent, and come up with the answer that it is not, by describing a possibility we understand and comprehend (while realizing that it is in practice quite impossible) - a Blockhead twin of an intelligent Jones - where what is behaviourally exactly alike someone intelligent has no intelligence (and indeed no thoughts) at all.
Finally, you might object that though it is missing the point to complain that the Blockhead example is impossible either in practice or perhaps even nomologically, it is right to be suspicious of intuitions about cases that far removed from what is possible in any but the most abstract sense. Perhaps, in particular, we should resist the intuition that Jones's Blockhead twin lacks intelligence. The trouble with this objection is that Blockhead is so like all the cases where we feel that someone lacks understanding: someone who cannot play chess except by asking an expert what to do at every stage is someone who does not understand the game, and someone who cannot give you the square root of a number other than by looking up a table of square roots is someone who does not fully understand what a square root is. The intuition that Blockhead lacks intelligence is simply a natural extension of what we learn from these simple and familiar cases. Moreover we can give a reason why Blockhead lacks understanding and intelligence - a reason that, we will argue, makes sense of our strong intuition that Blockhead is deficient, and so explains and justifies the intuition.
beliefs and sensory data. We all ought to believe that the Earth is round (or oblate, to be more precise) because that is the right belief to have caused in us by our pasts. Likewise, being intelligent centrally involves having trains of thought that evolve in the right way. Later thoughts have to be caused in the right way by earlier ones. If a brain scientist inserts a probe into your brain that causes the crucial thought that enables you to announce the proof of Goldbach's Conjecture, this is not a sign of your intelligence or rationality. It is either a fluke or a sign of the intelligence of the brain scientist, depending on the causal origins of her action in inserting the probe. Moreover, it is part of being a belief of a certain kind that it tends to have certain results. Part of what makes something the belief that if P then Q, is that combined with the belief that P, it tends to cause the belief that Q. (We enlarge on the importance of tending to evolve rationally to being a belief when we discuss the intentional stance in chapter 9.)
Simple input-output devices exhibit massive causal dependencies between early and late stages. The state of a sundial or an amplifier or a carburettor that is responsible for its capacity to generate the appropriate outputs on Monday is typically a major causal factor in its capacity to do the job on Tuesday. The situation with much more complex structures like human beings is correspondingly more complex. How we respond to stimuli on Tuesday depends on all sorts of factors in addition to how we are on Monday, including what has impacted on us between the two days and what we have thought about in the interim. This is part of what confers on us the flexibility of response that makes us intelligent. Nevertheless, causal dependencies between earlier and later thoughts are crucial. It is just that how we respond in the future depends on a much more diverse range of factors than simply how we are in the past - what we have thought about and what has happened to us in the interim also enter the equation.
Blockhead's causal peculiarity

The trouble with devices that work by look-up tree is that they lack the appropriate causal dependencies. The state that governs the responses to inputs early on plays the wrong kind of role in causing the state that governs the responses later on. This is because, for the most part, the Blockhead is static. It is mostly written down in advance, and the only thing that varies is which node is active.

We will now explain the idea of an active node. We will call the various sets of pre-recorded possible inputs together with appropriate outputs nodes. At any given time, a Blockhead can be said to have a certain node that is active. The active node is the one that will be searched until the input that has been given to the Blockhead is found, and the pre-recorded response paired with that input is then produced.
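To make the structure concrete, here is a minimal sketch in Python. Rendering a node as a table mapping each possible input to a pre-recorded response and a next active node is our own illustration, not Block's specification, and the sample entries are invented.

# A minimal sketch of a Blockhead-style look-up device. Each node pre-records,
# for every possible input, what to say and which node to make active next.
# Nothing about a node's contents is caused by the conversation so far; only
# which node is *active* changes.

NODES = {
    "start": {
        "hello": ("hello to you", "greeted"),
        "e4":    ("e5", "after_e4"),
    },
    "greeted": {
        "how are you?": ("fine, thanks", "greeted"),
        "e4":           ("c5", "after_e4"),
    },
    "after_e4": {
        "Nf3": ("Nc6", "after_e4"),   # and so on, for every continuation, written in advance
    },
}

def blockhead(inputs, active="start"):
    """Search the active node for each input and emit the pre-recorded response."""
    for item in inputs:
        response, active = NODES[active].get(item, ("(no entry)", active))
        yield response

print(list(blockhead(["hello", "how are you?", "e4", "Nf3"])))
# ['hello to you', 'fine, thanks', 'c5', 'Nc6']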
in the first recipe plays a role in what you do subsequently, the content of the second recipe is, we may suppose, quite independent of the content of the first. But thinking is not like that; the content of what we think at a time typically depends in part on the content of what we thought at various earlier times in rich and complex ways, and that is crucial for it to count as thought and as rational thought.

In sum, Blockhead's input-output profile at any given time does not depend in the right way on its input-output profiles at earlier times for Blockhead to count as a thinker, or even as something displaying rationality and intelligence. The input-output nature of the node that controls Blockhead's behavioural response at time t is not caused by the input-output nature of what controls Blockhead's behavioural response at any earlier time t − n. The overall input-output nature at time t depends on the past states only insofar as it is determined by which pre-existing node is active. Figure 7.3 helps make the point. In the diagram, suppose that the shaded area represents what actually happens. The point is that as you progress through the shaded region you are not progressing through nodes whose nature depends on the nature of earlier states in the shaded region, except in the minimal sense in which being active counts as part of their nature.
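The contrast can also be put in code. In the sketch below (Python; both toy 'agents' and their rules are our own inventions, not from the text), the look-up agent's response table is fixed in advance and only its active node changes, whereas the learning agent's later response dispositions are themselves caused by its earlier states and inputs.

# LookupAgent: the table that governs responses is written in advance; earlier
# episodes cause nothing about later nodes except which one is active.
# LearningAgent: its later dispositions are shaped by what it met earlier.

class LookupAgent:
    def __init__(self, nodes, start):
        self.nodes = nodes          # fixed in advance, never rewritten
        self.active = start

    def respond(self, inp):
        response, self.active = self.nodes[self.active].get(inp, ("(no entry)", self.active))
        return response


class LearningAgent:
    def __init__(self):
        self.seen = {}              # this state is *caused by* earlier inputs...

    def respond(self, inp):
        count = self.seen.get(inp, 0) + 1
        self.seen[inp] = count
        # ...and it shapes later responses: repetition is noticed, not merely replayed.
        return "you already said that" if count > 1 else f"noted: {inp}"


lookup = LookupAgent({"n0": {"hi": ("hello", "n0")}}, "n0")
learner = LearningAgent()
for word in ("hi", "hi"):
    print(lookup.respond(word), "|", learner.respond(word))
# hello | noted: hi
# hello | you already said that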
The argument is logically valid. So if all its premises are true, then so is its conclusion. Any physicalist must, therefore, deny one or more of its premises. Exactly which premise they deny, however, varies according to the kind of physicalism involved.
Analytic functionalism needs to deny the first premise. For analytic functionalism says that it is a matter of the meaning of mental state terms that you have the relevant mental states whenever you have the right functional roles being played. And if the relevant functional roles are played actually by physical stuff, then those roles will be played in any minimal physical duplicate of the actual world. Suppose that an analytic functionalist has worked through her theory of mind, and knows what roles have to be played for qualia to exist. She ought not be able to conceive of zombies. It is a priori that zombies are impossible. For knowledge of the roles, together with knowledge that a physical set-up which plays them exists, logically entails that qualia exist. To conceive of zombies is to conceive of things that have what is sufficient for qualia (having the right roles played), and yet lack qualia. And this is to conceive of a straightforward contradiction. In some good sense of conceive, one cannot conceive of the a priori impossible.
Analytic functionalism and ideal conceivability

This is a real problem for analytic functionalism, for it seems that we really can conceive of zombies - and yet the analytic functionalist says they are ruled out by our grasp of the meaning of mental state terms. It is intuitively fine to think it might be true that zombies are impossible, but perhaps only if this is a substantial fact that does not simply fall out of the meaning of mental state terms. Many think that if zombies are impossible it does not seem to be a merely semantic fact, but rather a metaphysical one.
There are, however, things that analytic functionalism might do to sweeten the pill. Sometimes we think we can conceive of something that is impossible. Perhaps you were asked once, in maths class, to find out at what points a parabola crossed the x axis. You took very seriously that it was at x = 1 and x = 2. You not only conceived of that possibility, but you thought it actually true. But after some calculation you found, no, it was at x = 2 and x = 3. But of course, once we define a parabola by a quadratic equation, it is a logical necessity that it intersects the x axis (or not) where it does. Your original conception was incoherent: it was a logical impossibility. Yet you had it none the less. Exactly what to say about this case is controversial. Everyone agrees, however, that there is some important distinction between what you can ideally conceive - that is, when all the logical and semantic truths are before your mind and you are rational - and whatever is happening when you 'conceive' of the roots of a quadratic equation being different from what they, of necessity, are. The disagreements are about what is going on in the unideal case, and we can set that aside. For perhaps the analytic functionalist should say that they doubt that we can ideally conceive of zombies. If all the facts about the functional roles were before your mind, and you could see how the physical states must play those roles, you could not conceive of zombies. Our apparent ability to conceive of zombies is on a par with imagining that mathematical truths are false.
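For concreteness, here is a worked version of the maths-class example; the particular quadratic is our own illustration, chosen so that the crossings come out at 2 and 3.

% A worked instance of the parabola example; the specific quadratic is our own.
\[
  y = x^{2} - 5x + 6 = (x - 2)(x - 3)
  \quad\Longrightarrow\quad
  x = \frac{5 \pm \sqrt{5^{2} - 4 \cdot 6}}{2} = 2 \text{ or } 3.
\]
% Given the defining equation, it is settled as a matter of logic that the
% x-axis crossings are at 2 and 3; a 'conception' of them lying at 1 and 2
% is a conception of something impossible.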
This is a powerful reply, and one of the authors is very attracted by it. However, it remains a little mysterious how all the extra clarity and ideal rationality are supposed to do their work. Certainly if it follows from the meaning of mental state terms that zombies are impossible, then we ideally can't conceive of them. But one might take a strong intuition about their conceivability to be evidence that we have got the theory of the meaning of mental state terms wrong. So for the reply to work, the extra clarity that we have ideally will need to make it clearer that the analytic functionalist theory of mental state terms is right - which is a punt the analytic functionalist must take. In addition it is hard to imagine exactly what form this extra clarity would take. We sort of know where the extra clarity would come from in the mathematical case, but what the analogue is in the qualia case is harder to see.
Empirical functionalism and zombies

At first glance, empirical functionalism appears to be in a better position to address the zombie challenge. For here the discovery that qualia are physical is in some sense a posteriori. So, the thought runs, we might be able to conceive of zombies, for it is not a priori that they are impossible. They are impossible none the less, but this is an a posteriori impossibility of the kind we discuss in chapter 4. Thus the empirical functionalist might try to deny the second premise of the zombie argument. For they think we can conceive of the impossible - even ideally conceive of the impossible. Thus conceivability is no guide to possibility, and the zombie argument fails.

We think, however, that this tempting reply fails. It fails because it ignores some subtle distinctions within empirical functionalism. On one kind of empirical functionalism the view is coherent but the reply does not work. On another kind, the reply seems to work - but the coherence of the empirical functionalism itself is problematic.
Reference-fixing

Many of the versions of functionalism that we identify in the table at the end of chapter 5 use something - perhaps the folk roles - to pick out some mental natures, and then rigidify on the internal features of those entities. Now it is impossible that something be a physical duplicate of that internal nature without possessing that internal nature, on the assumption that the internal nature is itself physical.
• In the actual world the qualia are the dualistic states; and all and only the qualia in counterfactual worlds are the dualistic states.

Else

• In the actual world the qualia are the states that play the functional roles; in other worlds qualia (if any) are the states that play these roles in that world.
If this is the right analysis, and if dualism is false, then it will certainly be right to say 'zombies are impossible'. But we wouldn't know that we were entitled to say that a priori. For we know that we are only entitled to say that zombies are impossible if dualism is false, and even physicalists should give some credence to dualism in fact being true, however small. The idea is that the zombie intuition comes from confusing two things. The first of these is the thought that there is some chance that dualism is true - and on that supposition we would be right to claim that zombies were possible. The second of these is the straightforward possibility of zombies.
Annotated Reading
The Chinese nation example is presented in Ned Block, 'Troubles with Functionalism'. John Searle's Chinese room case has been very widely discussed (at times, with some heat). Perhaps the best place to start is John Searle, 'Minds, Brains, and Programs'. A more informal presentation, combined with replies to the many objections that have been raised, is his 'Is the Brain's Mind a Computer Program?' Among the many replies he considers are those he christens the systems reply and the robot reply. The systems reply is the first one we expounded. The reply we eventually settled on is a combination of the systems and robot replies. A good recent discussion is in chapter 6 of Jack Copeland, Artificial Intelligence. The classic source for Blockhead is Ned Block, 'Psychologism and Behaviourism'. Keith Campbell's Body and Mind provides a straightforward description of the zombie argument (he uses imitation men instead of zombies). The term 'zombie' in this context may have come via Robert Kirk, 'Zombies versus Materialists'. Recent interest in the zombie objection has been stimulated by David Chalmers, The Conscious Mind. A fuller version of the reply we give in the final section can be found in David Braddon-Mitchell, 'Qualia and Analytic Conditionals', and for a similar approach see John Hawthorne, 'Advice for Physicalists'.