Unifying the Behavioral Sciences
Herbert Gintis
May 6, 2006
Abstract
The various behavioral disciplines model human behavior in distinct and
incompatible ways. Yet, recent theoretical and empirical developments have
created the conditions for rendering coherent the areas of overlap of the various
behavioral disciplines. The analytical tools deployed in this task incorporate
core principles from several behavioral disciplines. The proposed framework
recognizes evolutionary theory, covering both genetic and cultural evolution,
as the integrating principle of behavioral science. Moreover, if decision the-
ory and game theory are broadened to encompass other-regarding preferences,
they become capable of modeling all aspects of decision making, including
those normally considered “psychological,” “sociological” or “anthropolog-
ical.” The mind as a decision-making organ then becomes the organizing
principle of psychology.
1 Introduction
however, according the behavioral sciences the status of true sciences is less than
credible.
One of the great triumphs of Twentieth century science was the seamless inte-
gration of physics, chemistry, and astronomy, on the basis of a common model of
fundamental particles and the structure of space-time. Of course, gravity and the
other fundamental forces, which operate on extremely different energy scales, have
yet to be reconciled, and physicists are often criticized for their seemingly endless
generation of speculative models that might accomplish this reconciliation. But,
a similar dissatisfaction with analytical incongruence on the part of their practi-
tioners would serve the behavioral sciences well. This paper argues that we now
have the analytical and empirical bases to construct the framework for an integrated
behavioral science.
The behavioral sciences all include models of individual human behavior. These
models should be compatible. Indeed, there should be a common underlying model,
enriched in different ways to meet the particular needs of each discipline. We can-
not easily attain this goal at present, however, as the various behavioral disciplines
currently have incompatible models. Yet, recent theoretical and empirical devel-
opments have created the conditions for rendering coherent the areas of overlap
of the various behavioral disciplines. The analytical tools deployed in this task
incorporate core principles from several behavioral disciplines.3
The standard justification for the fragmentation of the behavioral disciplines
is that each has a model of human behavior well suited to its particular object of
study. While this is true, where these objects of study overlap, their models must
be compatible. In particular, psychology, economics, anthropology, biology, and
sociology should have concordant explanations of law-abiding behavior, charitable
giving, political corruption, voting behavior, and other complex behaviors that
do not fit nicely within disciplinary boundaries. They do not.
This paper sketches a framework for the unification of the behavioral sciences.
Two major conceptual categories, evolution and game theory, cover ultimate and
proximate causality. Under each category are conceptual subcategories that relate
to overlapping interests of two or more behavioral disciplines. I will argue the
following points:
1. Evolutionary perspective: Evolutionary biology underlies all behavioral disciplines because Homo sapiens is an evolved species whose characteristics are the product of its particular evolutionary history.
2 The last serious attempt at developing an analytical framework for the unification of the behavioral
sciences was Parsons and Shils (1951). A more recent call for unity is Wilson (1998), which does not
supply the unifying principles.
3 A core contribution of political science, the concept of power, is absent from economic theory, yet
interacts strongly with basic economic principles (Bowles and Gintis 2000). Lack of space prevents
me from expanding on this important theme.
2. Game theory: The analysis of living systems includes one concept that does
not occur in the non-living world, and is not analytically represented in the natural
sciences. This is the notion of a strategic interaction, in which the behavior of
individuals is derived by assuming that each is choosing a fitness-relevant response
to the actions of other individuals. The study of systems in which individuals
choose fitness-relevant responses and in which such responses evolve dynamically,
is called evolutionary game theory. Game theory provides a transdisciplinary con-
ceptual basis for analyzing choice in the presence of strategic interaction. However,
the classical game theoretic assumption that individuals are self-regarding must be
abandoned except in specific situations (e.g. anonymous market interactions), and
many characteristics that classical game theorists have considered logical implications of the principles of rational behavior, including the use of backward induction, are in fact not implied by rationality. Reliance on classical game theory
has led economists and psychologists to mischaracterize many common human be-
haviors as irrational. Evolutionary game theory, whose equilibrium concept is that
of a stable stationary point of a dynamical system, must therefore replace classical
game theory, which erroneously favors subgame perfection and sequentiality as
equilibrium concepts.
2a. The brain as a decision making organ: In any organism with a central
nervous system, the brain evolved because centralized information processing
enabled enhanced decision making capacity, the fitness benefits thereof more
than offsetting its metabolic and other costs. Therefore, decision making must
be the central organizing principle of psychology. This is not to say that learning
(the focus of behavioral psychology) and information processing (the focus of
cognitive psychology) are not of supreme importance, but rather that principles
of learning and information processing only make sense in the context of the
decision making role of the brain.6
2b. The rational actor model: General evolutionary principles suggest that
individual decision making can be modeled as optimizing a preference function
subject to informational and material constraints. Natural selection ensures
that the content of preferences will reflect biological fitness, at least in the
environments in which preferences evolved. The principle of expected utility
extends this optimization to stochastic outcomes. The resulting model is called
the rational actor model in economics, but I will generally refer to this as the
beliefs, preferences, and constraints (BPC) model to avoid the often misleading
connotations attached to the term “rational.”7
Economics, biology and political science integrate game theory into the core of
their models of human behavior. By contrast, in the other behavioral disciplines game theory evokes reactions ranging from laughter to hostility. Certainly, if one rejects
the BPC model (as these other disciplines characteristically do), game theory makes
no sense whatever. The standard critiques of game theory in these other disciplines are indeed generally based on the same sorts of arguments on which the critique of the BPC model is based, to which we turn in section 9.
In addition to these conceptual tools, the behavioral sciences of course share
common access to the natural sciences, statistical and mathematical techniques,
computer modeling, and a common scientific method.
The above principles are certainly not exhaustive; the list is quite spare, and
will doubtless be expanded in the future. Note that I am not asserting that the above
principles are the most important in each behavioral discipline. Rather, I am saying
that they contribute to constructing a bridge across disciplines—a common model
of human behavior from which each discipline can branch off.
Accepting the above framework may entail substantive reworking of basic the-
ory in a particular discipline, but I expect that much research will be relatively
unaffected by this reworking. For example, a psychologist working on visual pro-
cessing, or an economist working on futures markets, or an anthropologist tracking
food-sharing practices across social groups, or a sociologist gauging the effect of
dual parenting on children’s educational attainment, might gain little from knowing
that a unified model underlay all the behavioral disciplines. But, I suggest that in
such critical areas as the relationship between corruption and economic growth,
community organization and substance abuse, taxation and public support for the
welfare state, and the dynamics of criminality, researchers in one discipline are
likely to benefit greatly from interacting with sister disciplines in developing valid
and useful models.
In what follows, I will expand on each of the above concepts, after which I
will address common objections to the beliefs, preferences, and constraints (BPC)
model and game theory.
2 Evolutionary perspective
8 The fact that psychology does not integrate the behavioral sciences is quite compatible, of course, with the fact that what psychologists do is of great scientific value.
For every constellation of sensory inputs, each decision taken by an organism generates a probability distribution over fitness outcomes, the expected value of which
is the fitness associated with that decision. Because fitness is a scalar variable (ba-
sically the expected number of offspring to reach reproductive maturity), for each
constellation of sensory inputs, each possible action the organism might take has a
specific fitness value; organisms whose decision mechanisms are optimized for this
environment will choose the available action that maximizes this fitness value.9 It
follows that, given the state of its sensory inputs, if an organism with an optimized
brain chooses action A over action B when both are available, and chooses action B
over action C when both are available, then it will also choose action A over action
C when both are available. This is called choice consistency.
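Stated schematically (the notation is introduced here for convenience and does not appear in the text), choice consistency is simply transitivity of the preference relation induced by a scalar fitness index \(\phi\):
\[
\phi(A) > \phi(B) \ \text{and}\ \phi(B) > \phi(C) \;\Longrightarrow\; \phi(A) > \phi(C),
\]
so an organism whose choices track \(\phi\) satisfies \(A \succ B\) and \(B \succ C\) only if \(A \succ C\).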
The so-called rational actor model was developed in the twentieth century by
John von Neumann, Leonard Savage and many others. The model appears prima
facie to apply only when individuals can determine all the logical and mathematical
implications of the knowledge they possess. However, the model in fact depends
only on choice consistency and the assumption that individuals can trade off among
outcomes in the sense that for any finite set of outcomes A1 , . . . , An , if A1 is the
least preferred and An the most preferred outcome, then for any Ai , 1 ≤ i ≤ n there
is a probability pi , 0 ≤ pi ≤ 1 such that the individual is indifferent between Ai and
a lottery that pays A1 with probability pi and pays An with probability 1−pi (Kreps
1990). A lottery is a probability distribution over a finite set of monetary outcomes.
Clearly, these assumptions are often extremely plausible. When applicable, the
rational actor model’s choice consistency assumption strongly enhances explanatory
power, even in areas that have traditionally abjured the model (Coleman 1990,
Kollock 1997, Hechter and Kanazawa 1997).
In short, when preferences are consistent, they can be represented by a numerical
function, which we call the objective function, that individuals maximize subject to
their beliefs (including Bayesian probabilities) and the constraints they face.
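In schematic form (a sketch, with notation introduced here rather than taken from the text), the BPC model treats choice as the solution to
\[
\max_{x \in C}\; u(x),
\]
where the constraint set \(C\) and the probabilities used to evaluate uncertain outcomes embody the agent's beliefs and constraints. For the lottery construction above, normalizing \(u(A_1)=0\) and \(u(A_n)=1\) and setting \(u(A_i)=1-p_i\), where \(p_i\) is the indifference probability, yields a utility function whose expected value represents the individual's preferences over lotteries.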
Four caveats are in order. First, this analysis does not suggest that people
consciously maximize anything. Second, the model does not assume that individual
choices, even if they are self-referring (e.g., personal consumption), are always
welfare-enhancing. Third, preferences must be stable across time to be theoretically
useful, but preferences are ineluctably a function of such parameters as hunger,
fear, and recent social experience, and beliefs can change dramatically in response
to immediate sensory experience. Finally, the BPC model does not presume that
beliefs are correct or that they are updated correctly in the face of new evidence,
although Bayesian assumptions concerning updating can be made part of preference
consistency in elegant and compelling ways (Jaynes 2003).
9 This argument was presented verbally by Darwin (1872) and is implicit in the standard notion
of “survival of the fittest,” but formal proof is recent (Grafen 1999, 2000, 2002). The case with
frequency-dependent (non-additive genetic) fitness has yet to be formally demonstrated, but the in-
formal arguments are no less strong.
which is the expected value theorem with utility function ψ(·). See also Cooper
(1987).
There are few reported failures of the expected utility theorem in non-humans,
and there are some compelling examples of its satisfaction (Real and Caraco 1986).
The difference between humans and other animals is that the latter are tested in
real life, or in elaborate simulations of real life, whereas humans are tested in
the laboratory under conditions differing radically from real life. Although it is
important to know how humans choose in such situations (see section 9.7), there
is certainly no guarantee they will make the same choices in the real-life situation
that they make in the situation analytically generated to represent it. For example,
a heuristic that says “adopt choice behavior that appears to have benefitted others”
may lead to expected fitness or utility maximization even when individuals are
error-prone when evaluating stochastic alternatives in the laboratory.
In addition to the explanatory success of theories based on the rational actor
model, supporting evidence from contemporary neuroscience suggests that expected
utility maximization is not simply an “as if” story. In fact, the brain’s neural
circuitry makes choices by internally representing the payoffs of various alternatives
as neural firing rates, choosing a maximal such rate (Glimcher 2003, Dorris and
Bayer 2005). Neuroscientists increasingly find that an aggregate decision making
process in the brain synthesizes all available information into a single, unitary
value (Parker and Newsome 1998, Schall and Thompson 1999, Glimcher 2003).
Indeed, when animals are tested in a repeated trial setting with variable reward,
dopamine neurons appear to encode the difference between the reward that an
animal expected to receive and the reward that an animal actually received on a
particular trial (Schultz, Dayan and Montague 1997, Sutton and Barto 2000), an
evaluation mechanism that enhances the environmental sensitivity of the animal’s
decision making system. This error-prediction mechanism has the drawback of
seeking only local optima (Sugrue, Corrado and Newsome 2005). Montague and
Berns (2002) address this problem, showing that the orbitofrontal cortex and striatum
contain mechanisms for more global predictions that include risk assessment and
discounting of future rewards. Their data suggest a decision making model that is
analogous to the famous Black-Scholes options pricing equation (Black and Scholes
1973).
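The error-prediction mechanism described in this paragraph is commonly formalized as a temporal-difference style update. The sketch below is illustrative only and is not the model estimated by Schultz et al.; the learning rate and the reward stream are arbitrary choices.

# Minimal sketch of reward-prediction-error learning (temporal-difference style).
# Illustrative only: alpha (learning rate) and the reward stream are arbitrary.

def update_value(expected, received, alpha=0.1):
    """Move the expected reward toward the received reward by the prediction error."""
    prediction_error = received - expected   # the quantity dopamine neurons appear to encode
    return expected + alpha * prediction_error

expected = 0.0
for trial_reward in [1.0, 1.0, 0.0, 1.0, 1.0]:   # variable reward across repeated trials
    expected = update_value(expected, trial_reward)
    print(round(expected, 3))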
Although the neuroscientific evidence supports the BPC model, it does not
support the traditional economic model of Homo economicus. For instance, recent
evidence supplies a neurological basis for hyperbolic discounting, and hence under-
mines the traditional belief in time-consistent preferences. In particular, McClure,
Laibson, Loewenstein and Cohen (2004) showed that two separate systems are in-
volved in long- vs. short-term decisions. The lateral prefrontal cortex and posterior
parietal cortex are engaged in all intertemporal choices, while the paralimbic cortex
and related parts of the limbic system kick in only when immediate rewards are
available. Indeed, the relative engagement of the two systems is directly associated
with the subject’s relative favoring of long- over short-term reward.
The BPC model is the most powerful analytical tool of the behavioral sciences.
For most of its existence this model has been justified in terms of “revealed prefer-
ences,” rather than by the identification of neural processes that generate constrained
optimal outcomes. The neuroscience evidence suggests a firmer foundation for the
rational actor model.
5 Gene-Culture Coevolution
The genome encodes information that is used to construct a new organism, to instruct the new organism how to transform sensory inputs into decision outputs (i.e., to endow the new organism with a specific preference structure), and to transmit this coded information virtually intact to the new organism. Because learning
about one’s environment may be costly and is error-prone, efficient information
transmission will ensure that the genome encode all aspects of the organism’s envi-
ronment that are constant, or that change only very slowly through time and space.
By contrast, environmental conditions that vary across generations and/or in the
course of the organism’s life history can be dealt with by providing the organism
with the capacity to learn, and hence phenotypically adapt to specific environmental
conditions.
There is an intermediate case that is not efficiently handled by either genetic
encoding or learning. When environmental conditions are positively but imper-
fectly correlated across generations, each generation acquires valuable information
through learning that it cannot transmit genetically to the succeeding generation,
because such information is not encoded in the germ line. In the context of such
environments, there is a fitness benefit to the transmission of information by means
other than the germ line concerning the current state of the environment. Such
epigenetic information is quite common (Jablonka and Lamb 1995), but achieves
its highest and most flexible form in cultural transmission in humans and, to a lesser extent, in primates and other animals (Bonner 1984, Richerson and Boyd 1998).
Cultural transmission takes the form of vertical (parents to children), horizontal (peer to peer), and oblique (elder to younger) transmission, as in Cavalli-Sforza and Feldman (1981); prestige transmission (higher status influencing lower status), as in Henrich and Gil-White (2001); popularity-related transmission, as in Newman, Barabasi and Watts (2006); and even random population-dynamic transmission, as in Shennan (1997) and Skibo and Bentley (2003).
The parallel between cultural and biological evolution goes back to Huxley
(1955), Popper (1979), and James (1880).10 The idea of treating culture as a form
of epigenetic transmission was pioneered by Richard Dawkins, who coined the term
“meme” in The Selfish Gene (1976) to represent an integral unit of information that
could be transmitted phenotypically. There quickly followed several major contri-
butions to a biological approach to culture, all based on the notion that culture, like
genes, could evolve through replication (intergenerational transmission), mutation,
and selection (Lumsden and Wilson 1981, Cavalli-Sforza and Feldman 1982, Boyd
10 For a more extensive analysis of the parallels between cultural and genetic evolution, see Mesoudi,
Whiten and Laland (2006). I have borrowed heavily from that paper in this section.
mans and are doubtless evolutionary adaptations (Schulkin 2000). The evolution
of the human prefrontal cortex is closely tied to the emergence of human morality
(Allman, Hakeem and Watson 2002). Patients with focal damage to one or more of
these areas exhibit a variety of antisocial behaviors, including sociopathy (Miller,
Darby, Benson, Cummings and Miller 1997) and the absence of embarrassment,
pride and regret (Beer, Heerey, Keltner, Skabini and Knight 2003, Camille 2004).
Because of the centrality of culture to the behavioral sciences, it is worth noting the
divergent use of the concept in distinct disciplines, and the sense in which it is used
here.
Anthropology, the discipline that is most sensitive to the vast array of cultural
groupings in human societies, treats culture as an expressive totality defining the
life space of individuals, including symbols, language, beliefs, rituals, and values.
By contrast, in biology culture is generally treated as information, in the form
of instrumental techniques and practices, such as those used in producing necessities, fabricating tools, waging war, defending territory, maintaining health, and
rearing children. We may include in this category “conventions” (e.g., standard
greetings, forms of dress, rules governing the division of labor, the regulation of
marriage, and rituals) that differ across groups and serve to coordinate group be-
havior, facilitate communication and maintain shared understandings. Similarly,
we may include transcendental beliefs (e.g., that sickness is caused by angering
the gods, that good deeds are rewarded in the afterlife) as a form of information.
A transcendental belief is the assertion of a state of affairs that has a truth value,
but one that believers either cannot or choose not to test personally (Atran 2004).
Cultural transmission in humans, in this view, is therefore a process of information
transmission, rendered possible by our uniquely prodigious cognitive capacities
(Tomasello, Carpenter, Call, Behne and Moll 2005).
The predisposition of a new member to accept the dominant cultural forms of
a group is called conformist transmission (Boyd and Richerson 1985). Conformist
transmission may be fitness enhancing because, if an individual must determine the
most effective of several alternative techniques or practices, and if experimentation
is costly, it may be payoff-maximizing to copy others rather than incur the costs of
experimenting (Boyd and Richerson 1985, Conlisk 1988). Conformist transmission
extends to the transmission of transcendental beliefs as well. Such beliefs affirm
techniques where the cost of experimentation is extremely high or infinite, and the
cost of making errors is high as well. This is, in effect, Blaise Pascal’s argument for
the belief in God. This view of religion is supported by Boyer (2001), who models
transcendental beliefs as cognitive beliefs that coexist and interact with our other
more mundane beliefs. In this view, one conforms to transcendental beliefs because their truth value has been ascertained by others (relatives, ancestors, prophets), and such beliefs are deemed to be as worthy of affirmation as the everyday techniques and practices,
such as norms of personal hygiene, that one accepts on faith, without personal
verification.
Sociology and anthropology recognize the importance of conformist transmis-
sion, but the notion is virtually absent from economic theory. For example, in
economic theory consumers maximize utility and firms maximize profits by con-
sidering only market prices and their own preference and production functions.
In fact, in the face of incomplete information and the high cost of information-
gathering, both consumers and firms in the first instance may simply imitate what
appear to be the successful practices of others, adjust their behavior incrementally in
the face of varying market conditions, and sporadically inspect alternative strategies
in limited areas (Gintis 2004).
Possibly part of the reason the BPC model is so widely rejected in some disci-
plines is the belief that optimization is analytically incompatible with reliance on
imitation and hence with conformist transmission. In fact, the economists’ distaste
for optimization via imitation is not universal (Conlisk 1988, Bikhchandani, Hirshleifer and Welsh 1992), and it is, in any case, simply a doctrinal prejudice. Recognizing that
imitation is an aspect of optimization has the added attractiveness of allowing us to
model cultural change in a dynamic manner: as new cultural forms displace older
forms when they appear to advance the goals of their bearers (Henrich 1997, Henrich
and Boyd 1998, Henrich 2001, Gintis 2003a).
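The displacement of older cultural forms by apparently more successful ones can be illustrated with a minimal payoff-biased imitation simulation. Everything in the sketch below (the two practices, their payoffs, and the copying rule) is an assumption introduced for illustration, not a model taken from the cited papers.

import random

# Sketch of payoff-biased cultural transmission (illustrative assumptions only).
# Two practices, A and B, with B yielding a higher payoff; agents imitate a
# randomly observed other agent when that agent's practice appears more successful.

PAYOFF = {"A": 1.0, "B": 1.5}

def step(population):
    new_pop = []
    for practice in population:
        model = random.choice(population)      # observe a random other agent
        if PAYOFF[model] > PAYOFF[practice]:   # copy only apparently better practices
            new_pop.append(model)
        else:
            new_pop.append(practice)
    return new_pop

pop = ["A"] * 90 + ["B"] * 10
for _ in range(20):
    pop = step(pop)
print(pop.count("B") / len(pop))   # the higher-payoff practice spreads by imitation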
internalization of norms, initiates are supplied with moral values that induce them
to conform to the duties and obligations of the role-positions they expect to occupy.
The contrast with anthropology and biology could hardly be more complete.
Unlike anthropology, which celebrates the irreducible heterogeneity of cultures,
sociology sees cultures as sharing much in common throughout the world (Brown
1991). In virtually every society, says sociology, youth are pressed to internalize
the value of being trustworthy, loyal, helpful, friendly, courteous, kind, obedient,
cheerful, thrifty, brave, clean, and reverent (famously captured by the Boy Scouts
of America). In biology, values are collapsed into techniques and the machinery of
internalization is unrepresented.
Internalized norms are followed not because of their epistemic truth value, but
because of their moral value. In the language of the BPC model, internalized
norms are accepted not as instruments towards achieving other ends, but rather
as arguments in the preference function that the individual maximizes, or as self-
imposed constraints. For example, individuals who have internalized the value of
“speaking truthfully” will constrain themselves to do so even in some cases where
the net payoff to speaking truthfully would otherwise be negative. Internalized
norms are therefore constitutive in the sense that an individual strives to live up to
them for their own sake. Fairness, honesty, trustworthiness, and loyalty are ends, not
means, and such fundamental human emotions as shame, guilt, pride, and empathy
are deployed by the well-socialized individual to reinforce these prosocial values
when tempted by the immediate pleasures of such “deadly sins” as anger, avarice,
gluttony, and lust.
The human responsiveness to socialization pressures represents the most pow-
erful form of epigenetic transmission found in nature. In effect, human preferences
are programmable, in the same sense that a digital computer can be programmed
to perform a wide variety of tasks. This epigenetic flexibility, which is an emer-
gent property of the complex human brain, in considerable part accounts for the
stunning success of the species Homo sapiens. When people internalize a norm,
the frequency of its occurrence in the population will be higher than if people fol-
low the norm only instrumentally—i.e., only when they perceive it to be in their
material self-interest to do so. The increased incidence of altruistic prosocial be-
haviors permits humans to cooperate effectively in groups (Gintis, Bowles, Boyd
and Fehr 2005).
Given the abiding disarray in the behavioral sciences, it should not be sur-
prising to find that socialization has no conceptual standing outside of sociology,
anthropology, and social psychology, and that most behavioral scientists subsume
it under the general category of “information transmission,” which would make
sense only if moral values expressed matters of fact, which they do not. More-
over, the socialization concept is incompatible with the assumption in economic
theory that preferences are mostly, if not exclusively, self-regarding, given that
social values commonly involve caring about fairness and the well-being of oth-
ers. Sociology, in turn, systematically ignores the limits to socialization (Tooby and
Cosmides 1992, Pinker 2002) and supplies no theory of the emergence and abandon-
ment of particular values, both of which in fact depend in part on the contribution of
the values to fitness and well-being, as economic and biological theory would sug-
gest (Gintis 2003a,b). Moreover, there are often swift society-wide value changes
that cannot be accounted for by socialization theory (Wrong 1961, Gintis 1975).
When properly qualified, however, and appropriately related to the general theory
of cultural evolution and strategic learning, socialization theory is considerably
strengthened.
In the BPC model, choices give rise to probability distributions over outcomes, the
expected values of which are the payoffs to the choice from which they arose. Game
theory extends this analysis to cases where there are multiple decision makers. In
the language of game theory, players (or agents) are endowed with a set of strategies and have certain information concerning the rules of the game, the nature of the other players, and their available strategies. Finally, for each combination of strategy
choices by the players, the game specifies a distribution of individual payoffs to
the players. Game theory predicts the behavior of the players by assuming each
maximizes its preference function subject to its information, beliefs, and constraints
(Kreps 1990).
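As a concrete illustration of these definitions, the sketch below enumerates the pure-strategy Nash equilibria of a two-player normal-form game. The payoff matrix (a prisoner's-dilemma-like game) is an arbitrary example introduced here, not one analyzed in the text.

from itertools import product

# Illustrative sketch: find the pure-strategy Nash equilibria of a two-player game.
payoffs = {  # (row strategy, column strategy) -> (row payoff, column payoff)
    ("C", "C"): (3, 3), ("C", "D"): (0, 4),
    ("D", "C"): (4, 0), ("D", "D"): (1, 1),
}
strategies = ["C", "D"]

def is_nash(r, c):
    row_best = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in strategies)
    col_best = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in strategies)
    return row_best and col_best

print([profile for profile in product(strategies, strategies) if is_nash(*profile)])
# -> [('D', 'D')]: mutual defection is the unique pure-strategy Nash equilibrium here.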
Game theory is a logical extension of evolutionary theory. To see this, suppose
there is only one replicator, deriving its nutrients and energy from non-living sources
(the sun, the Earth’s core, amino acids produced by electrical discharge, and the
like). The replicator population will then grow at a geometric rate, until it presses on
its environmental inputs. At that point, mutants that exploit the environment more
efficiently will out-compete their less efficient conspecifics, and with input scarcity,
mutants will emerge that “steal” from conspecifics that have amassed valuable
resources. With the rapid growth of such mutant predators, their prey will mutate,
thereby devising means of avoiding predation, and the predators will counter with
their own novel predatory capacities. In this manner, strategic interaction is born
from elemental evolutionary forces. It is only a conceptually short step from this
point to cooperation and competition among cells in a multi-cellular body, among
conspecifics who cooperate in social production, between males and females in a
sexual species, between parents and offspring, and among groups competing for
territorial control (Maynard Smith and Szathmary 1997).
Historically, game theory emerged not from biological considerations, but rather
from the strategic concerns of combatants in World War II (Von Neumann and
Morgenstern 1944, Poundstone 1992). This led to the widespread caricature of
game theory as applicable only to static confrontations of rational self-regarding
individuals possessed of formidable reasoning and information processing capacity.
Developments within game theory in recent years, however, render this caricature
inaccurate.
First, game theory has become the basic framework for modeling animal be-
havior (Maynard Smith 1982, Alcock 1993, Krebs and Davies 1997), and as a
result has shed its static and hyperrationalistic character, in the form of evolu-
tionary game theory (Gintis 2000a). Evolutionary game theory does not require
the formidable information processing capacities of classical game theory, so disci-
plines that recognize that cognition is scarce and costly can make use of evolutionary
game-theoretic models (Young 1998, Gintis 2000a, Gigerenzer and Selten 2001).
Therefore, we may model individuals as considering only a restricted subset of
strategies (Winter 1971, Simon 1972), and as using rule-of-thumb heuristics rather
than maximization techniques (Gigerenzer and Selten 2001). Game theory is there-
fore a generalized schema that permits the precise framing of meaningful empirical
assertions, but imposes no particular structure on the predicted behavior.
Second, evolutionary game theory has become key to understanding the most
fundamental principles of evolutionary biology. Throughout much of the Twentieth
century, classical population biology did not employ a game-theoretic framework
(Fisher 1930, Haldane 1932, Wright 1931). However, Moran (1964) showed that
Fisher’s Fundamental Theorem—that as long as there is positive genetic variance in
a population, fitness increases over time—is false when more than one genetic locus
is involved. Eshel and Feldman (1984) identified the problem with the population
genetic model in its abstraction from mutation. But how do we attach a fitness
value to a mutant? Eshel and Feldman (1984) suggested that payoffs be modeled
game-theoretically on the phenotypic level, and that a mutant gene be associated
with a strategy in the resulting game. With this assumption, they showed that under
some restrictive conditions, Fisher’s Fundamental Theorem could be restored. Their
results have been generalized by Liberman (1988), Hammerstein and Selten (1994),
Hammerstein (1996), Eshel, Feldman and Bergman (1998) and others.
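For reference, the theorem in a standard textbook form (stated here from population genetics rather than from the text) is
\[
\Delta \bar w \;=\; \frac{\operatorname{Var}_A(w)}{\bar w},
\]
where \(\bar w\) is mean fitness and \(\operatorname{Var}_A(w)\) the additive genetic variance in fitness, so mean fitness cannot decline while additive variance is positive; Moran's result is that this guarantee can fail once selection acts on more than one locus.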
Third, the most natural setting for biological and social dynamics is game the-
oretic. Replicators (genetic and/or cultural) endow copies of themselves with a
repertoire of strategic responses to environmental conditions, including information
concerning the conditions under which each strategy is to be deployed in reaction
to the character and density of competing replicators. Genetic replicators have
been well understood since the rediscovery of Mendel’s laws in the early twentieth
century. Cultural transmission also apparently occurs at the neuronal level in the
brain, perhaps in part through the action of mirror neurons, which fire when either
the individual performs a task or undergoes an experience, or when the individual
observes another individual performing the same task or undergoing the same expe-
rience (Williams, Whiten, Suddendorf and Perrett 2001, Rizzolatti, Fadiga, Fogassi
and Gallese 2002, Meltzhoff and Decety 2003). Mutations include replacement of
strategies by modified strategies, and the “survival of the fittest” dynamic (formally
called a replicator dynamic) ensures that replicators with more successful strategies
replace those with less successful (Taylor and Jonker 1978).
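The replicator dynamic mentioned here has a standard formal expression (given for reference in its conventional notation): if \(x_i\) is the population frequency of strategy \(i\), \(f_i(x)\) its expected payoff, and \(\bar f(x) = \sum_j x_j f_j(x)\) the population-average payoff, then
\[
\dot x_i \;=\; x_i \bigl( f_i(x) - \bar f(x) \bigr),
\]
so strategies earning above-average payoffs grow in frequency and those earning below-average payoffs decline.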
Fourth, behavioral game theorists, who used game theory to collect experimen-
tal data concerning strategic interaction, now widely recognize that in many social
interactions, individuals are not self-regarding. Rather, they often care about the
payoffs to and intentions of other players, and will sacrifice to uphold personal
standards of honesty and decency (Fehr and Gächter 2002, Wood 2003, Gintis et
al. 2005, Gneezy 2005). Moreover, humans care about power, self-esteem, and
behaving morally (Gintis 2003b, Bowles and Gintis 2005, Wood 2003). Because
the rational actor model treats action as instrumental towards achieving rewards, it
is often inferred that action itself cannot have reward value. This is an unwarranted
inference. For example, the rational actor model can be used to explain collective
action (Olson 1965), because individuals may place positive value on the process
of acquisition (e.g., “fighting for one’s rights”), and they can value punishing those
who refuse to join in the collective action (Moore, Jr. 1978, Wood 2003). Indeed,
contemporary experimental work indicates that one can apply standard choice theory, including the derivation of demand curves, the plotting of concave indifference curves, and the estimation of price elasticities, to such preferences as charitable giving and punitive retribution (Andreoni and Miller 2002).
As a result of its maturation over the past quarter century, game theory is well
positioned to serve as a bridge across the behavioral sciences, providing both a
lexicon for communicating across fields with distinct and incompatible conceptual
systems, and a theoretical tool for formulating a model of human choice that can
serve all the behavioral disciplines.
Many behavioral scientists reject the BPC model and game theory on the basis of
one or more of the following arguments. In each case, I shall indicate why the
objection is not compelling.
Perhaps the most pervasive critique of the BPC model is that put forward by Herbert
Simon (1982), holding that because information processing is costly and humans
have finite information processing capacity, individuals satisfice rather than maxi-
mize, and hence are only boundedly rational. There is much substance to this view, including the importance of incorporating information-processing costs and limited information into models of choice behavior, and of recognizing that the decision about how much information to collect depends at some level on unanalyzed subjective priors (Winter 1971, Heiner 1983). Indeed, from basic information theory and the Sec-
ond Law of Thermodynamics, it follows that all rationality is bounded. However,
the popular message taken from Simon’s work is that we should reject the BPC
model. For example, the mathematical psychologist D. H. Krantz (1991) asserts,
“The normative assumption that individuals should maximize some quantity may
be wrong…People do and should act as problem solvers, not maximizers.” This
is incorrect. As we have seen, as long as individuals have consistent preferences,
they can be modeled as maximizing an objective function. Of course, if there is
a precise objective (e.g., solve the problem with an exogenously given degree of
accuracy), then the information contained in knowledge of preference consistency
may be ignored. But, once the degree of accuracy is treated as endogenous, mul-
tiple objectives compete (e.g., cost and accuracy), and the BPC model cannot be
ignored. This point is lost on even such capable researchers as Gigerenzer and
Selten (2001), who reject the “optimization subject to constraints” method on the
grounds that individuals do not in fact solve optimization problems. However, just
as billiards players do not solve differential equations in choosing their shots, so
decision-makers do not solve Lagrangian equations, even though in both cases we
may use such optimization models to describe their behavior.
ward, delay until reward materializes), then preferences are indeed time inconsis-
tent. The long-term discount rate can be estimated empirically at about 3% per year
(Huang and Litzenberger 1988, Rogers 1994), but short-term discount rates are often
an order of magnitude or more greater than this (Laibson 1997). Animal studies find
rates are several orders of magnitude higher (Stephens, McLinn and Stevens 2002).
Consonant with these findings, sociological theory stresses that impulse control—
learning to favor long-term over short-term gains—is a major component in the
socialization of youth (Mischel 1974, Power and Chapieski 1986, Grusec and
Kuczynski 1997).
However, suppose we expand the choice space to consist of triples of the form (reward, current time, time when reward accrues), so that, for example, (π_1, t_1, s_1) > (π_2, t_2, s_2) means that the individual prefers to be at time t_1 facing a reward π_1 delivered at time s_1 rather than at time t_2 facing a reward π_2 delivered at time s_2. Then the observed behavior of individuals whose discount rates decline with the delay becomes choice consistent, and there are two simple models that
each other): hyperbolic and quasi-hyperbolic discounting (Fishburn and Rubinstein
1982, Ainslie and Haslam 1992, Ahlbrecht and Weber 1995, Laibson 1997). The
resulting BPC models allow for sophisticated and compelling economic analyses
of policy alternatives (Laibson, Choi and Madrian 2004).
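For reference, the standard functional forms involved are as follows, with \(D(t)\) the weight placed on a reward delayed by \(t\) periods and the parameter names being the conventional ones from the literature rather than the text's:
\[
D_{\text{exp}}(t) = \delta^{t}, \qquad
D_{\text{hyp}}(t) = \frac{1}{1 + k t}, \qquad
D_{\beta\delta}(0) = 1,\ \ D_{\beta\delta}(t) = \beta\,\delta^{t}\ \ (t \ge 1).
\]
Only the exponential form yields time-consistent preferences; the hyperbolic and quasi-hyperbolic (beta-delta) forms imply discount rates that decline with the delay, as described above.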
Other observed instances of prima facie choice inconsistency can be handled
in a similar fashion. For example, in experimental settings, individuals exhibit
status quo bias, loss aversion, and regret—all of which imply inconsistent choices
(Kahneman and Tversky 1979, Sugden 1993). In each case, however, choices be-
come consistent by a simple redefinition of the appropriate choice space. Kahneman
and Tversky’s “prospect theory,” which models status quo bias and loss aversion,
is precisely of this form. Gintis (2006) has shown that this phenomenon has an
evolutionary basis in territoriality in animals and in pre-institutional property rights
in humans.
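One common parameterization of the Kahneman and Tversky value function (the functional form below is the standard one from the later literature, given for illustration rather than taken from the text) evaluates outcomes as gains or losses relative to a reference point \(r\):
\[
v(x) =
\begin{cases}
(x - r)^{\alpha}, & x \ge r,\\
-\lambda\,(r - x)^{\beta}, & x < r,
\end{cases}
\]
with \(\lambda > 1\) capturing loss aversion and \(0 < \alpha, \beta < 1\) capturing diminishing sensitivity; redefining the choice space over such reference-dependent outcomes is exactly the move described in the paragraph above.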
There remains perhaps the most widely recognized example of inconsistency,
that of preference reversal in the choice of lotteries. Lichtenstein and Slovic (1971)
were the first to find that in many cases, individuals who prefer lottery A to lottery
B are nevertheless willing to take less money for A than for B. Reporting this to
economists several years later, Grether and Plott (1979) asserted “A body of data and
theory has been developed…[that] are simply inconsistent with preference theory”
(p. 623). These preference reversals were explained several years later by Tversky, Slovic and Kahneman (1990) as a bias toward the higher probability of winning when choosing between the lotteries, and toward the higher maximum amount of winnings when valuing them in monetary terms. If this were true for lotteries in general it might compromise the BPC
model.11 However, the phenomenon has been documented only when the lottery
pairs A and B are so close in expected value that one needs a calculator (or a quick
mind) to determine which would be preferred by an expected value maximizer.
For example, in Grether and Plott (1979) the average difference between expected
values of comparison pairs was 2.51% (calculated from their Table 2, p. 629). The
corresponding figure for Tversky et al. (1990) was 13.01%. When the choices
involve small amounts of money and are so close to equal expected value, it is not
surprising that inappropriate cues are relied upon to determine choice. Moreover,
Berg, Dickhaut and Rietz (2005) have shown that when analysis is limited to studies
that have truth-revealing incentives, preference reversals are well described by a
model of maximization with error.
Another source of inconsistency is that observed preferences may not lead to
the well-being, or even the immediate pleasure, of the decision maker. For exam-
ple, fatty foods and tobacco injure health yet are highly prized; addicts often say they get no pleasure from consuming their drug of choice but are driven by an inner compulsion to consume; and individuals with obsessive-compulsive disorders repeatedly perform actions that they know are irrational and harmful. More gener-
ally, behaviors resulting from excessively high short-term discount rates, discussed
above, are likely to lead to a divergence of choice and welfare.
However, the BPC model is based on the premise that choices are consistent,
not that choices are highly correlated with welfare. Drug addiction, unsafe sex,
unhealthy diet, and other individually welfare-reducing behaviors can be analyzed
with the BPC model, although in such cases preferences and welfare may diverge.
I have argued that we can expect the BPC model to hold because, on an evolutionary time
scale, brain characteristics will be selected according to their capacity to contribute
to the fitness of their bearers. But, fitness cannot be equated with well-being in any
creature. Humans, in particular, live in an environment so dramatically different
from that in which our preferences evolved that it seems to be miraculous that we
are as capable as we are of achieving high levels of individual well-being. For
example, in virtually all known cases, fertility increases with per capita material
wealth in a society up to a certain point, and then decreases. This is known as
the demographic transition, and accounts for our capacity to take out increased
technological power in the form of consumption and leisure rather than increased
numbers of offspring (Borgerhoff Mulder 1998). No other known creature behaves
in this fashion. Therefore, our preference predispositions have not “caught up” with
11 I say “might” because in real life individuals generally do not choose among lotteries by observing
or contemplating probabilities and their associated payoffs, but by imitating the behavior of others
who appear to be successful in their daily pursuits. In frequently repeated lotteries, the Law of Large
Numbers ensures that the higher expected value lottery will increase in popularity by imitation without
any calculation by participants.
our current environment and, especially given the demographic transition and our
excessive present-orientation, they may never catch up (Elster 1979, Akerlof 1991,
O’Donoghue and Rabin 2001).
smoking is to raise its immediate personal costs, such as being socially stigmatized,
being banned from smoking in public buildings, and being considered impolite,
given the well-known externalities associated with second-hand smoke (Brigden
and De Beyer 2003).
Broadening the rational actor model beyond its traditional form in neoclassical
economics runs the risk of developing unverifiable and post hoc theories, as our
ability to theorize outpaces our ability to test theories. Indeed, the folklore among
economists dating back at least to Becker and Stigler (1977) is that “you can always
explain any bizarre behavior by assuming sufficiently exotic preferences.”
This critique was telling before researchers had the capability of actually mea-
suring preferences and testing the cogency of models with nonstandard preferences
(i.e., preferences over things other than marketable commodities, forms of labor,
and leisure). However, behavioral game theory now provides the methodological
instruments for devising experimental techniques that allow us to estimate prefer-
ences with some degree of accuracy (Gintis 2000a, Camerer 2003). Moreover, we
often find that the appropriate experimental design variations can generate novel data
allowing us to distinguish among models that are equally powerful in explaining the
existing data (Tversky and Kahneman 1981, Kiyonari, Tanida and Yamagishi 2000).
Finally, because behavioral game-theoretic predictions can be systematically tested,
the results can be replicated by different laboratories (Plott 1979, V. Smith 1982, Sally 1995), and models with very few nonstandard preference parameters, examples of which are provided in section 10 below, can be used to explain a variety of observed choice behavior.
The BPC model assumes that individuals have stable preferences and beliefs that
are functions of the individual’s personality and current needs. Yet, in many cases
laboratory experiments show that individuals can be induced to make choices over
payoffs based on subtle or obvious cues that ostensibly do not affect the value of the
payoffs to the decision maker. For example, if a subject’s partner in an experimental game is described as an “opponent,” or the game itself is described as a “bargaining game,” subjects may make very different choices than when the partner is described as a “teammate,” or the game is described as a community participation game.
Similarly, a subject in an experimental game may reject an offer if made by his
bargaining partner, but accept the same offer if made by the random draw of a
The BPC model permits us to infer the beliefs and preferences of individuals from
their choices under varying constraints. Such inferences are valid, however, only
if individuals can intelligently vary their behavior in response to novel conditions.
While it is common for behavioral scientists who reject the BPC model to explain an
observed behavior as the result of an error or confusion on the part of the individual,
the BPC model is less tolerant of such explanations if individuals are reasonably
well-informed and the choice setting reasonably transparent and easily analyzable.
Evidence from experimental psychology over the past 40 years has led some
psychologists to doubt the capacity of individuals to reason sufficiently accurately
to warrant the BPC presumption of subject intelligence. For example, in one well-known experiment performed by Tversky and Kahneman (1983), a young woman, Linda, is described as having been politically active in college and highly intelligent; the subject is then asked which of the following two statements is more likely: “Linda is a bank teller” or “Linda is a bank teller and is active in the feminist movement.”
Many subjects rate the second statement more likely, despite the fact that elementary
probability theory asserts that if p implies q, then p cannot be more likely than q.
Because the second statement implies the first, it cannot be more likely than the
first.
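The relevant probability fact is simply the conjunction rule, stated here for completeness: for any events \(A\) and \(B\),
\[
P(A \wedge B) \;\le\; P(A),
\]
so “Linda is a bank teller and is active in the feminist movement” cannot be more probable than “Linda is a bank teller.”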
I personally know many people (though not scientists) who give this “incorrect”
answer, and I never have observed these individuals making simple logical errors
in daily life. Indeed, in the literature on the “Linda problem” several alternatives
to faulty reasoning have been offered. One highly compelling alternative is based
on the notion that in normal conversation, a listener assumes that any information
provided by the speaker is relevant to the speaker’s message (Grice 1975). Applied to
this case, the norms of conversation lead the subject to believe that the experimenter
wants Linda’s politically active past to be taken adequately into account (Hilton
1995, Wetherick 1995). Moreover, the meaning of such terms as “more likely”
or “higher probability” are vigorously disputed even in the theoretical literature,
and hence are likely to have a different meaning for the average subject versus for
the expert. For example, if I were given two piles of identity folders and asked to
search through them to find the one belonging to Linda, and one of the piles was
“all bank tellers” while the other was “all bank tellers who are active in the feminist
movement,” I would surely look through the second (doubtless much smaller) pile
first, even though I am well aware that there is a “higher probability” that Linda’s
folder is in the former pile rather than the latter one.
More generally, subjects may appear irrational because basic terms have dif-
ferent meanings in propositional logic versus in everyday logical inference. For
example, “if p then q” is true in formal logic except when p is true and q is false.
In everyday usage, by contrast, “if p then q” may be interpreted materially, that is, as holding in virtue of the content of p and q: there is something about p that causes q to be the case. On this reading, “p implies q” means “p is true and this situation causes q to be true.” Thus “if France is in Africa, then Paris is in Europe” is true in propositional logic, but false on the material (causal) reading. Part of the problem is also that
individuals without extensive academic training simply lack the expertise to follow
complex chains of logic, so psychology experiments often exhibit a high level of
performance error (Cohen 1981; see section 11). For example, suppose Pat and
Kim live in a certain town where all men have beards and all women wear dresses.
Then the following can be shown to be true in propositional logic: “Either if Pat is
a man then Kim wears a dress or if Kim is a woman, then Pat has a beard.” It is
quite hard to see why this is formally true, and it is not true if the implications are read materially. Finally, the logical meaning of “if p then q” can be context dependent.
For example, “if you eat dinner (p), you may go out to play (q)” is normally understood to mean “you may go out to play (q) only if you eat dinner (p),” which is not what the formal conditional asserts.
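A short derivation makes the Pat and Kim example explicit (the propositional abbreviations are introduced here). Let \(p\) = “Pat is a man,” \(s\) = “Pat has a beard,” \(r\) = “Kim is a woman,” and \(q\) = “Kim wears a dress”; the town’s facts supply the premises \(p \rightarrow s\) and \(r \rightarrow q\). If both disjuncts of \((p \rightarrow q) \vee (r \rightarrow s)\) were false, then \(p \wedge \neg q\) and \(r \wedge \neg s\) would both hold; but \(r\) together with \(r \rightarrow q\) yields \(q\), contradicting \(\neg q\). Hence the disjunction is true in propositional logic, even though neither conditional is compelling when read materially, which is why the sentence strikes untrained subjects as odd.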
We may apply this insight to an important strand of experimental psychology that
purports to have shown that subjects systematically deviate from simple principles of
logical reasoning. In a widely replicated study, Wason (1966) showed subjects cards
each of which had a “1” or “2” on one side and “A” or “B” on the other, and stated
the following rule: a card with a vowel on one side must have an odd number on the
other. The experimenter then showed each subject four cards, one showing “1”, one
showing “2”, one showing “A”, and one showing “B”, and asked the subject which
cards must be turned over to check whether the rule was followed. Typically, only
about 15% of college students point out the correct cards (“A” and “2”), the only two whose hidden sides could falsify the rule. Subsequent
research showed that when the problem is posed in more concrete terms, such as
“any person drinking beer must be more than 18,” the correct response rate increases
considerably (Cheng and Holyoak 1985, Cosmides 1989, Stanovich 1999, Shafir
and LeBoeuf 2002). This accords with the observation that most individuals do not
appear to have difficulty making and understanding logical arguments in everyday
life.
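The logic behind the correct answer can be checked mechanically. The sketch below is illustrative; the encoding of the card faces is an assumption introduced here, not part of Wason’s materials.

# Sketch: which Wason-task cards can falsify the rule "a vowel on one side
# implies an odd number on the other"?  A card needs checking exactly when
# some possible hidden face could pair a vowel with an even number.

VOWELS, ODD = {"A"}, {"1"}                  # the visible faces used in the experiment

def needs_checking(visible):
    if visible in {"A", "B"}:               # hidden face is a number
        return visible in VOWELS            # a vowel might hide an even number
    return visible not in ODD               # an even number might hide a vowel

print([c for c in ["1", "2", "A", "B"] if needs_checking(c)])   # -> ['2', 'A']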
Just as the rational actor model began to take hold in the mid-Twentieth century,
vigorous empirical objections began to surface. The first was Allais (1953), who
described cases in which subjects exhibited clear inconsistency in choosing among
simple lotteries. It has been shown that Allais’ examples can be explained by
regret theory (Bell 1982, Loomes and Sugden 1982), which can be represented by
consistent choices over pairs of lotteries (Sugden 1993).
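The structure of Allais’s examples is worth recalling; the figures below are the standard textbook version, given for illustration. Subjects choose once between
\[
A_1 = (\$1\text{M},\ 1.00)
\quad\text{and}\quad
B_1 = (\$5\text{M}, .10;\ \$1\text{M}, .89;\ \$0, .01),
\]
and once between
\[
A_2 = (\$1\text{M}, .11;\ \$0, .89)
\quad\text{and}\quad
B_2 = (\$5\text{M}, .10;\ \$0, .90).
\]
Most subjects choose \(A_1\) and \(B_2\). Because \(A_2\) and \(B_2\) are obtained from \(A_1\) and \(B_1\) by replacing a common .89 chance of \(\$1\text{M}\) with a .89 chance of \(\$0\), a consistent expected utility maximizer must pair \(A_1\) with \(A_2\) or \(B_1\) with \(B_2\); the modal pattern violates the independence axiom, and this is the inconsistency that regret theory is designed to accommodate.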
Close behind Allais came the famous Ellsberg Paradox (Ellsberg 1961), which
can be shown to violate the most basic axioms of choice under uncertainty. Consider
two urns. Urn A has 51 red balls and 49 white balls. Urn B also has 100 red and
white balls, but the fraction of red balls is unknown. Subjects are asked to choose in
two situations. In each, the experimenter draws one ball from each urn but the two
balls remain hidden from the subject’s sight. In the first situation, the subject can
choose the ball that was drawn from urn A or urn B, and if the ball is red, the subject
wins $10. In the second situation, the subject again can choose the ball drawn from
urn A or urn B, and if the ball is white, the subject wins $10. Many subjects choose
the ball drawn from urn A in both situations. This obviously violates the expected
utility principle, no matter what probability the subject places on the ball drawn from urn B being red.
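The violation can be derived in one line (the subjective probability \(p\) is notation introduced here). Let \(p\) be the subject’s probability that the ball drawn from urn B is red. Strictly preferring urn A when red wins implies \(0.51 > p\); strictly preferring urn A when white wins implies \(0.49 > 1 - p\), i.e., \(p > 0.51\). No single \(p\) satisfies both, so no consistent subjective probability assignment rationalizes the pair of choices.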
salient and personal examples, they reverse lottery choices when the same lottery
is described by emphasizing probabilities rather than monetary payoffs, or when
described in terms of losses from a high baseline as opposed to gains from a low
baseline, and they treat proactive decisions differently from passive decisions even
when the outcomes are exactly the same, and when outcomes are described in terms
of probabilities as opposed to frequencies (Kahneman, Slovic and Tversky 1982,
Kahneman and Tversky 2000).
These findings are important for understanding human decision making and
for formulating effective social policy mechanisms where complex statistical de-
cisions must be made. However, these findings are not a threat to the BPC model
(Gigerenzer and Selten 2001). They are simply performance errors in the form of
incorrect beliefs as to how payoffs can be maximized.13
Statistical decision theory did not exist until recently. Before the contributions
of Bernoulli, Savage, von Neumann and other experts, no creature on Earth knew
how to value a lottery. It takes years of study to feel at home with the laws of
probability. Moreover, it is costly, in terms of time and effort, to apply these laws
even if we know them. Of course, if the stakes are high enough, it is worthwhile to
go to the effort, or engage an expert who will do it for you. But generally, we apply
a set of heuristics that more or less get the job done (Gigerenzer and Selten 2001).
Among the most prominent heuristics is simply imitation: decide what class of
phenomenon is involved, find out what people “normally do” in that situation, and
do it. If there is some mechanism leading to the survival and growth of relatively
successful behaviors and if the problem in question recurs with sufficient regularity,
the choice-theoretic solution will describe the winner of a dynamic social process
of trial, error, and replication through imitation.
Game theory predicts that rational agents will play Nash equilibria. Because my
proposed framework includes both game theory and rational agents, I must address
the fact that in important cases, the game theoretic prediction is ostensibly falsified
by the empirical evidence. The majority of examples of this kind arise from the
13 In a careful review of the field, Shafir and LeBoeuf (2002) reject the performance error inter-
pretation of these results, calling this a “trivialization” of the findings. They come to this conclusion
by asserting that performance errors must be randomly distributed, whereas the errors found in the
literature are systematic and reproducible. These authors, however, are mistaken in believing that
performance errors must be random. Ignoring base rates in evaluating probabilities or finding risk
in the Ellsberg two urn problems are surely performance errors, but the errors are quite systematic.
Similarly, folk intuitions concerning probability theory lead to highly reproducible results, although
incorrect.
29 May 6, 2006
Unifying the Behavioral Sciences
assumption that individuals are self-regarding, which can be dropped without vio-
lating the principles of game theory. Game theory also offers solutions to problems
of cooperation and coordination that are never found in real life, but in this case, the
reason is that the game theorists assume perfect information, the absence of errors,
the use of solution concepts that lack plausible dynamical stability properties, or
other artifices without which the proposed solution would not work (Gintis 2005).
However, in many cases, rational individuals simply do not play Nash equilibria at
all under plausible conditions.
[Figure: a game tree in which players M and J alternately choose C or D; choosing D ends the game at the payoff pair shown below that node, with terminal payoffs rising from (2,2) through (3,2), (3,3), (4,3), …, (99,99), (100,99), while choosing C throughout yields (100,100).]
When each player chooses a best response to his or her own subjective probability
distribution over the other's behavior, many different actions can result. Indeed,
when people play this game, they generally cooperate at least until the final few
rounds. This, moreover, is an eminently sensible way to play, and far more lucrative
than the Nash equilibrium. Of course, one could argue that both players must have
the same subjective probability distribution (this is called the common priors
assumption), in which case (assuming common priors are common knowledge) there is
only one equilibrium, the Nash equilibrium. But it is hardly plausible to assume
that two players have the same subjective probability distribution over the types of
their opponents without giving a mechanism that would produce this result.14 In a
famous paper, Nobel-prize-winning economist John Harsanyi (1967) argued that common
priors follow from the assumption that individuals are rational, but this argument
depends on a notion of rationality that goes far beyond choice consistency and has
not received empirical support (Kurz 1997).
In real-world applications of game theory, I conclude, we must have plausible
grounds for believing that the equilibrium concept used is appropriate. Simply
assuming that rationality implies Nash equilibrium, as is the case in classical game
theory, is generally inappropriate. Evolutionary game theory restores the centrality
of the Nash equilibrium concept, because stable equilibria of the replicator dynamic
(and related “monotone” dynamics) are necessarily Nash equilibria. Moreover, the
examples given in the next section are restricted to games that are sufficiently simple
that the sorts of anomalies discussed above are not present, and the Nash equilibrium
criterion is appropriate.
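The logic can be illustrated in a few lines (a sketch with an arbitrary 2x2 payoff matrix of my own choosing, not a game taken from the text): the discrete-time replicator dynamic drives the population to a stable rest point, and that rest point is a Nash equilibrium of the stage game.

import numpy as np

# A symmetric 2x2 game; A[i, j] is the payoff to playing strategy i against j.
# (Hypothetical payoffs; strategy 1 strictly dominates, as in a prisoner's dilemma.)
A = np.array([[3.0, 0.0],
              [5.0, 1.0]])

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator dynamic x_i' = x_i (f_i - mean fitness)."""
    f = A @ x          # fitness of each pure strategy against the population mix
    phi = x @ f        # population mean fitness
    return x + dt * x * (f - phi)

x = np.array([0.9, 0.1])       # start with mostly strategy 0
for _ in range(10000):
    x = replicator_step(x, A)

print(x)   # converges to (0, 1): the stable rest point is the Nash equilibrium (strategy 1)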
A large body of experimental evidence documents other-regarding behavior: in laboratory
games, subjects routinely cooperate with, reward, and punish others at a cost to
themselves, even in anonymous one-shot interactions (Andreoni 1995, Fehr, Gächter and
Kirchsteiger 1997, Fehr, Kirchsteiger and Riedl 1998, Gächter and Fehr 1999, Fehr and
Gächter 2000, Fehr and Gächter 2002, Henrich, Boyd, Bowles, Camerer, Fehr and Gintis
2005). Moreover, it is probable that this other-regarding behavior is a prerequisite for
cooperation in large groups of non-kin, because the theoretical models of cooperation in
large groups of self-regarding non-kin in biology and economics do not apply to some
important and frequently observed forms of human cooperation (Boyd and Richerson 1992,
Gintis 2005).
Another form of prosocial behavior conflicting with the maximization of per-
sonal material gain is that of maintaining such character virtues as honesty and
promise-keeping, even when there is no chance of being penalized for unvirtuous
behavior. An example of such behavior is reported by Gneezy (2005), who studied
450 undergraduate participants paired off to play three games of the following form.
Player 1 would be shown two pairs of payoffs, A:(x, y) and B:(z, w), where x, y, z,
and w are amounts of money with x < z and y > w; the first entry in each pair goes to
Player 1 and the second to Player 2. Player 1 could then say to Player 2, who could not
see the amounts of money, either “Option A will earn you more money than option B,” or
“Option B will earn you more money than option A.” The first game was A:(5,6) vs.
B:(6,5), so Player 1 could gain 1 by lying and being believed, while imposing a cost of
1 on Player 2. The second game was A:(5,15) vs. B:(6,5), so Player 1 could again gain
only 1 by lying and being believed, but this time imposing a cost of 10 on Player 2. The
third game was A:(5,15) vs. B:(15,5), so Player 1 could gain 10 by lying and being
believed, while imposing a cost of 10 on Player 2.
Before starting play, Gneezy asked the Player 1s whether they expected their advice
to be followed, inducing honest responses by promising to reward subjects whose
guesses were correct. He found that 82% of Player 1s expected their advice to be
followed (the actual figure was 78%). Given these expectations, self-regarding Player
1s would always lie and recommend option B to Player 2. In fact, in game 2, where lying
was very costly to Player 2 and the gain to Player 1 from lying was small, only 17% of
subjects lied. In game 1, where the cost of lying to Player 2 was only 1 but the gain to
Player 1 was the same as in game 2, 36% lied. In other words, subjects were loath to
lie, but considerably more so when lying was costly to their partner. In game 3, where
the gain from lying was large for Player 1 and equal to the loss to Player 2, fully 52%
lied. This shows that many subjects are willing to sacrifice material gain to avoid
lying in a one-shot, anonymous interaction, their willingness to lie increasing with
the cost of truth-telling to themselves and decreasing with the cost to their partner
of being deceived. Similar results were found by Boles, Croson and Murnighan (2000)
and Charness and Dufwenberg (2004). Gunnthorsdottir, McCabe and Smith (2002)
and Burks, Carpenter and Verhoogen (2003) have shown that a social-psychological
measure of “Machiavellianism” predicts which subjects are likely to be trustworthy
and trusting.
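The pattern in Gneezy's three games can be summarized compactly; the snippet below merely restates the payoff differences and lying rates reported above and checks the two comparative statics described in the text.

# Gneezy's (2005) three deception games, as reported above:
# (Player 1's gain from a believed lie, Player 2's loss, fraction of Player 1s who lied)
games = {
    1: {"own_gain": 1,  "partner_loss": 1,  "share_lying": 0.36},
    2: {"own_gain": 1,  "partner_loss": 10, "share_lying": 0.17},
    3: {"own_gain": 10, "partner_loss": 10, "share_lying": 0.52},
}

# Lying rises with the liar's own gain (game 3 vs. game 2, partner's loss held fixed)
# and falls with the partner's loss (game 2 vs. game 1, liar's gain held fixed).
assert games[3]["share_lying"] > games[2]["share_lying"]
assert games[2]["share_lying"] < games[1]["share_lying"]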
In the simplest formulation of the rational actor model, beliefs do not explicitly ap-
pear. In the real world, however, the probabilities of various outcomes in a lottery
are rarely objectively known, and hence must generally be subjectively constructed
as part of an individual’s belief system. Anscombe and Aumann (1963) extended
the Savage model to preferences over bundles consisting of “states of the world”
and payoffs, and they showed that if certain consistency axioms hold, the
individual can be modeled as maximizing expected utility with respect to a set of
subjective probabilities (beliefs) over states. Were these axioms universally plausible, beliefs could be
derived in the same way as are preferences. However, at least one of these axioms,
the so-called state-independence axiom, which states that preferences over payoffs
are independent of the states in which they occur, is generally not plausible.
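In standard notation (a sketch of the familiar subjective expected utility representation, not a quotation from Anscombe and Aumann), the axioms deliver a utility function u and a subjective probability p over states such that acts f are ranked by

\[
V(f) \;=\; \sum_{s} p(s)\, u\bigl(f(s)\bigr).
\]

State-independence is what licenses using the same u in every state s; if preferences over payoffs varied with the state, the representation would require state-dependent utilities u(f(s), s), and the beliefs p could no longer be uniquely separated from tastes.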
It follows that beliefs are the underdeveloped member of the BPC trilogy. Except
for Bayes’ rule (Gintis 2000a, Ch. 17), there is no compelling analytical theory of
how a rational agent acquires and updates beliefs, although there are many partial
theories (Kuhn 1962, Polya 1990, Boyer 2001, Jaynes 2003).
Beliefs enter the decision process in several potential ways. First, individuals
may not have perfect knowledge concerning how their choices affect their welfare.
This is most likely to be the case in an unfamiliar setting, of which the experimental
laboratory is often a perfect example. In such cases, when forced to choose, individ-
uals “construct” their preferences on the spot by forming beliefs based on whatever
partial information is present at the time of choice (Slovic 1995). Understanding
this process of belief formation is a demanding research task.
Second, often the actual actions a ∈ A available to an individual will differ
from the actual payoffs π ∈ Π that appear in the individual’s preference function.
The mapping β : A → Π that the individual deploys to maximize payoff is a belief
system concerning objective reality, and it can differ from the correct mapping
β∗ : A → Π. For example, a gambler may want to maximize expected winnings,
but may believe in the erroneous Law of Small Numbers (Rabin 2002). Errors of
this type include the performance errors discussed in section 9.6.
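A toy example of the distinction between β and β∗ (the numbers and betting rules are hypothetical, chosen only to illustrate the point): a bettor who believes in the law of small numbers expects a fair coin to “correct” a run of heads, so the subjective mapping from actions to expected payoffs differs from the objective one, and the chosen action maximizes the former rather than the latter.

# Two available actions after observing three heads in a row from a fair coin.
ACTIONS = ["bet_heads", "bet_tails"]

def objective_expected_payoff(action):
    """beta*: the correct mapping -- a fair coin has no memory."""
    p_heads = 0.5
    win = p_heads if action == "bet_heads" else 1.0 - p_heads
    return win * 1.0 + (1.0 - win) * (-1.0)        # +1 if right, -1 if wrong

def subjective_expected_payoff(action):
    """beta: a 'law of small numbers' believer expects the streak to reverse."""
    p_heads = 0.3                                   # hypothetical mistaken belief
    win = p_heads if action == "bet_heads" else 1.0 - p_heads
    return win * 1.0 + (1.0 - win) * (-1.0)

chosen = max(ACTIONS, key=subjective_expected_payoff)
print(chosen)                                       # 'bet_tails'
print(objective_expected_payoff(chosen))            # 0.0: the belief error, not the
                                                    # maximization, is what goes wrong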
Third, there is considerable evidence that beliefs directly affect well-being, so
individuals may alter their beliefs as part of their optimization program. Self-
serving beliefs, unrealistic expectations, and projection of one’s own preferences
on others are important examples. The trade-off here is that erroneous beliefs may
add to well-being, but acting on these beliefs may lower other payoffs (Bodner and
Prelec 2002, Benabou and Tirole 2002).
12 Conclusion
I would like to thank George Ainslie, Rob Boyd, Dov Cohen, Ernst Fehr, Bar-
bara Finlay, Thomas Getty, Dennis Krebs, Joe Henrich, Daniel Kahneman, Laurent
Keller, Joachim Krueger, Larry Samuelson, and especially Marc Hauser and anony-
mous referees of this journal for helpful comments, and the John D. and Catherine
T. MacArthur Foundation for financial support.
References
Bandura, Albert, Social Learning Theory (Englewood Cliffs, NJ: Prentice Hall,
1977).
Becker, Gary S. and George J. Stigler, “De Gustibus Non Est Disputandum,” Amer-
ican Economic Review 67,2 (March 1977):76–90.
and Kevin M. Murphy, “A Theory of Rational Addiction,” Journal of Political
Economy 96,4 (August 1988):675–700.
, Michael Grossman, and Kevin M. Murphy, “An Empirical Analysis of Cigarette
Addiction,” American Economic Review 84,3 (June 1994):396–418.
Beer, J. S., E. A. Heerey, D. Keltner, D. Skabini, and R. T. Knight, “The Regulatory
Function of Self-conscious Emotion: Insights from Patients with Orbitofrontal
Damage,” Journal of Personality and Social Psychology 65 (2003):594–604.
Bell, D. E., “Regret in Decision Making under Uncertainty,” Operations Research
30 (1982):961–981.
Benabou, Roland and Jean Tirole, “Self Confidence and Personal Motivation,” Quar-
terly Journal of Economics 117,3 (2002):871–915.
Benedict, Ruth, Patterns of Culture (Boston: Houghton Mifflin, 1934).
Berg, Joyce E., John W. Dickhaut, and Thomas A. Rietz, “Preference Reversals: The
Impact of Truth-Revealing Incentives,” 2005. College of Business, University of
Iowa.
Bernheim, B. Douglas, “Rationalizable Strategic Behavior,” Econometrica 52,4
(July 1984):1007–1028.
Bikhchandani, Sushil, David Hirshleifer, and Ivo Welch, “A Theory of Fads, Fash-
ion, Custom, and Cultural Change as Informational Cascades,” Journal of Polit-
ical Economy 100 (October 1992):992–1026.
Binmore, Ken, “Modelling Rational Players: I,” Economics and Philosophy 3
(1987):179–214.
Black, Fischer and Myron Scholes, “The Pricing of Options and Corporate Liabili-
ties,” Journal of Political Economy 81 (1973):637–654.
Blount, Sally, “When Social Outcomes Aren’t Fair: The Effect of Causal Attribu-
tions on Preferences,” Organizational Behavior & Human Decision Processes
63,2 (August 1995):131–144.
Bodner, Ronit and Drazen Prelec, “Self-signaling and Diagnostic Utility in Every-
day Decision Making,” in Isabelle Brocas and Juan D. Carillo (eds.) Collected
Essays in Psychology and Economics (Oxford: Oxford University Press, 2002)
pp. 105–123.
Boehm, Christopher, Hierarchy in the Forest: The Evolution of Egalitarian Behavior
(Cambridge, MA: Harvard University Press, 2000).
Boles, Terry L., Rachel T. A. Croson, and J. Keith Murnighan, “Deception and
Retribution in Repeated Ultimatum Bargaining,” Organizational Behavior and
Human Decision Processes 83,2 (2000):235–259.
Bonner, John Tyler, The Evolution of Culture in Animals (Princeton, NJ: Princeton
University Press, 1984).
Borgerhoff Mulder, Monique, “The Demographic Transition: Are we any Closer
to an Evolutionary Explanation?,” Trends in Ecology and Evolution 13,7 (July
1998):266–270.
Bowles, Samuel and Herbert Gintis, “Walrasian Economics in Retrospect,” Quar-
terly Journal of Economics (November 2000):1411–1439.
and , “Prosocial Emotions,” in Lawrence E. Blume and Steven N. Durlauf
(eds.) The Economy As an Evolving Complex System III (Santa Fe, NM: Santa
Fe Institute, 2005).
Boyd, Robert and Peter J. Richerson, Culture and the Evolutionary Process
(Chicago: University of Chicago Press, 1985).
and , “The Evolution of Reciprocity in Sizable Groups,” Journal of Theoretical
Biology 132 (1988):337–356.
and , “Punishment Allows the Evolution of Cooperation (or Anything Else)
in Sizeable Groups,” Ethology and Sociobiology 113 (1992):171–195.
Boyer, Pascal, Religion Explained: The Human Instincts That Fashion Gods, Spirits
and Ancestors (London: William Heinemann, 2001).
Brigden, Linda Waverley and Joy De Beyer, Tobacco Control Policy: Stories from
Around the World (Washington, DC: World Bank, 2003).
Brown, Donald E., Human Universals (New York: McGraw-Hill, 1991).
Brown, J. H. and M. V. Lomolino, Biogeography (Sunderland, MA: Sinauer, 1998).
Burks, Stephen V., Jeffrey P. Carpenter, and Eric Verhoogen, “Playing Both Roles in
the Trust Game,” Journal of Economic Behavior and Organization 51 (2003):195–
216.
Camerer, Colin, Behavioral Game Theory: Experiments in Strategic Interaction
(Princeton, NJ: Princeton University Press, 2003).
Camille, N., “The Involvement of the Orbitofrontal Cortex in the Experience of
Regret,” Science 304 (2004):1167–1170.
Cavalli-Sforza, Luigi L. and Marcus W. Feldman, “Theory and Observation in Cul-
tural Transmission,” Science 218 (1982):19–27.
Cavalli-Sforza, Luigi L. and Marcus W. Feldman, Cultural Transmission and Evo-
lution (Princeton, NJ: Princeton University Press, 1981).
Charness, Gary and Martin Dufwenberg, “Promises and Partnership,” October 2004.
University of California, Santa Barbara.
Fehr, Ernst and Simon Gächter, “Cooperation and Punishment,” American Eco-
nomic Review 90,4 (September 2000):980–994.
and , “Altruistic Punishment in Humans,” Nature 415 (10 January 2002):137–
140.
, Georg Kirchsteiger, and Arno Riedl, “Gift Exchange and Reciprocity in Com-
petitive Experimental Markets,” European Economic Review 42,1 (1998):1–34.
, Simon Gächter, and Georg Kirchsteiger, “Reciprocity as a Contract Enforcement
Device: Experimental Evidence,” Econometrica 65,4 (July 1997):833–860.
Feldman, Marcus W. and Lev A. Zhivotovsky, “Gene-Culture Coevolution: To-
ward a General Theory of Vertical Transmission,” Proceedings of the National
Academy of Sciences 89 (December 1992):11935–11938.
Fishburn, Peter C. and Ariel Rubinstein, “Time Preference,” Econometrica 23,3
(October 1982):667–694.
Fisher, Ronald A., The Genetical Theory of Natural Selection (Oxford: Clarendon
Press, 1930).
Fudenberg, Drew and Eric Maskin, “The Folk Theorem in Repeated Games
with Discounting or with Incomplete Information,” Econometrica 54,3 (May
1986):533–554.
, David K. Levine, and Eric Maskin, “The Folk Theorem with Imperfect Public
Information,” Econometrica 62 (1994):997–1039.
Gächter, Simon and Ernst Fehr, “Collective Action as a Social Exchange,” Journal
of Economic Behavior and Organization 39,4 (July 1999):341–369.
Gadagkar, Raghavendra, “On Testing the Role of Genetic Asymmetries Created by
Haplodiploidy in the Evolution of Eusociality in the Hymenoptera,” Journal of
Genetics 70,1 (April 1991):1–31.
Ghiselin, Michael T., The Economy of Nature and the Evolution of Sex (Berkeley:
University of California Press, 1974).
Gigerenzer, Gerd and Reinhard Selten, Bounded Rationality (Cambridge, MA: MIT
Press, 2001).
Gilovich, T., R. Vallone, and A. Tversky, “The Hot Hand in Basketball: On the Mis-
perception of Random Sequences,” Cognitive Psychology 17 (1985):295–314.
Gintis, Herbert, “A Radical Analysis of Welfare Economics and Individual Devel-
opment,” Quarterly Journal of Economics 86,4 (November 1972):572–599.
, “Welfare Economics and Individual Development: A Reply to Talcott Parsons,”
Quarterly Journal of Economics 89,2 (February 1975):291–302.
, Game Theory Evolving (Princeton, NJ: Princeton University Press, 2000).
Gunnthorsdottir, Anna, Kevin McCabe, and Vernon Smith, “Using the Machiavel-
lianism Instrument to Predict Trustworthiness in a Bargaining Game,” Journal of
Economic Psychology 23 (2002):49–66.
Haldane, J. B. S., The Causes of Evolution (London: Longmans, Green & Co.,
1932).
Hamilton, William D., “The Evolution of Altruistic Behavior,” American Naturalist
96 (1963):354–356.
Hammerstein, Peter, “Darwinian Adaptation, Population Genetics and the Streetcar
Theory of Evolution,” Journal of Mathematical Biology 34 (1996):511–532.
, “Why Is Reciprocity So Rare in Social Animals?,” in Peter Hammerstein (ed.)
Genetic and Cultural Evolution of Cooperation (Cambridge, MA: The MIT Press,
2003) pp. 83–93.
and Reinhard Selten, “Game Theory and Evolutionary Biology,” in Robert J.
Aumann and Sergiu Hart (eds.) Handbook of Game Theory with Economic Ap-
plications (Amsterdam: Elsevier, 1994) pp. 929–993.
Harsanyi, John C., “Games with Incomplete Information Played by Bayesian Play-
ers, Parts I, II, and III,” Management Science 14 (1967):159–182, 320–334, 486–
502.
Hechter, Michael and Satoshi Kanazawa, “Sociological Rational Choice,” Annual
Review of Sociology 23 (1997):199–214.
Heiner, Ronald A., “The Origin of Predictable Behavior,” American Economic
Review 73,4 (1983):560–595.
Henrich, Joe, Robert Boyd, Samuel Bowles, Colin Camerer, Ernst Fehr, and Herbert
Gintis, “‘Economic Man’ in Cross-Cultural Perspective: Behavioral Experiments
in 15 Small-Scale Societies,” Behavioral and Brain Sciences (2005).
Henrich, Joseph, “Market Incorporation, Agricultural Change and Sustainability
among the Machiguenga Indians of the Peruvian Amazon,” Human Ecology
25,2 (June 1997):319–351.
, “Cultural Transmission and the Diffusion of Innovations,” American Anthro-
pologist 103 (2001):992–1013.
and Francisco Gil-White, “The Evolution of Prestige: Freely Conferred Status
as a Mechanism for Enhancing the Benefits of Cultural Transmission,” Evolution
and Human Behavior 22 (2001):1–32.
and Robert Boyd, “The Evolution of Conformist Transmission and the Emer-
gence of Between-Group Differences,” Evolution and Human Behavior 19
(1998):215–242.
Herrnstein, Richard, David Laibson, and Howard Rachlin, The Matching Law:
Papers on Psychology and Economics (Cambridge, MA: Harvard University
Press, 1997).
Herrnstein, Richard J., “Relative and Absolute Strengths of Responses as a Function
of Frequency of Reinforcement,” Journal of the Experimental Analysis of Behavior
4 (1961):267–272.
Hilton, Denis J., “The Social Context of Reasoning: Conversational Inference and
Rational Judgment,” Psychological Bulletin 118,2 (1995):248–271.
Hirsch, Paul, Stuart Michaels, and Ray Friedman, “Clean Models vs. Dirty Hands:
Why Economics is Different from Sociology,” in Sharon Zukin and Paul DiMag-
gio (eds.) Structures of Capital: The Social Organization of the Economy (New
York: Cambridge University Press, 1990) pp. 39–56.
Holden, C. J., “Bantu Language Trees Reflect the Spread of Farming Across Sub-
Saharan Africa: A Maximum-parsimony Analysis,” Proceedings of the Royal
Society of London Series B 269 (2002):793–799.
and Ruth Mace, “Spread of Cattle Led to the Loss of Matrilineal Descent in
Africa: A Coevolutionary Analysis,” Proceedings of the Royal Society of London
Series B 270 (2003):2425–2433.
Huang, Chi-Fu and Robert H. Litzenberger, Foundations for Financial Economics
(Amsterdam: Elsevier, 1988).
Huxley, Julian S., “Evolution, Cultural and Biological,” Yearbook of Anthropology
(1955):2–25.
Jablonka, Eva and Marion J. Lamb, Epigenetic Inheritance and Evolution: The
Lamarckian Case (Oxford: Oxford University Press, 1995).
James, William, “Great Men, Great Thoughts, and the Environment,” Atlantic
Monthly 46 (1880):441–459.
Jaynes, E. T., Probability Theory: The Logic of Science (Cambridge: Cambridge
University Press, 2003).
Kahneman, Daniel and Amos Tversky, “Prospect Theory: An Analysis of Decision
Under Risk,” Econometrica 47 (1979):263–291.
and , Choices, Values, and Frames (Cambridge: Cambridge University Press,
2000).
, Paul Slovic, and Amos Tversky, Judgment under Uncertainty: Heuristics and
Biases (Cambridge, UK: Cambridge University Press, 1982).
Kiyonari, Toko, Shigehito Tanida, and Toshio Yamagishi, “Social Exchange and
Reciprocity: Confusion or a Heuristic?,” Evolution and Human Behavior 21
(2000):411–427.
Kollock, Peter, “Transforming Social Dilemmas: Group Identity and Cooperation,”
in Peter Danielson (ed.) Modeling Rational and Moral Agents (Oxford: Oxford
University Press, 1997).
Real, Leslie and Thomas Caraco, “Risk and Foraging in Stochastic Environments,”
Annual Review of Ecology and Systematics 17 (1986):371–390.
Richerson, Peter J. and Robert Boyd, “The Evolution of Ultrasociality,” in I. Eibl-
Eibesfeldt and F. K. Salter (eds.) Indoctrinability, Ideology and Warfare (New York:
Berghahn Books, 1998) pp. 71–96.
and , Not By Genes Alone (Chicago: University of Chicago Press, 2004).
Rivera, M. C. and J. A. Lake, “The Ring of Life Provides Evidence for a Genome
Fusion Origin of Eukaryotes,” Nature 431 (2004):152–155.
Rizzolatti, G., L. Fadiga, L. Fogassi, and V. Gallese, “From Mirror Neurons to Im-
itation: Facts and Speculations,” in Andrew N. Meltzhoff and Wolfgang Prinz
(eds.) The Imitative Mind: Development, Evolution and Brain Bases (Cam-
bridge: Cambridge University Press, 2002) pp. 247–266.
Rogers, Alan, “Evolution of Time Preference by Natural Selection,” American Eco-
nomic Review 84,3 (June 1994):460–481.
Rosenthal, Robert W., “Games of Perfect Information, Predatory Pricing and the
Chain-Store Paradox,” Journal of Economic Theory 25 (1981):92–100.
Rozin, Paul, L. Lowery, S. Imada, and Jonathan Haidt, “The CAD Triad Hypothesis:
A Mapping Between Three Moral Emotions (Contempt, Anger, Disgust) and
Three Moral Codes (Community, Autonomy, Divinity),” Journal of Personality
& Social Psychology 76 (1999):574–586.
Saffer, Henry and Frank Chaloupka, “The Demand for Illicit Drugs,” Economic
Inquiry 37,3 (1999):401–11.
Sally, David, “Conversation and Cooperation in Social Dilemmas,” Rationality and
Society 7,1 (January 1995):58–92.
Schall, J. D. and K. G. Thompson, “Neural Selection and Control of Visually Guided
Eye Movements,” Annual Review of Neuroscience 22 (1999):241–259.
Schrödinger, Erwin, What is Life?: The Physical Aspect of the Living Cell (Cam-
bridge: Cambridge University Press, 1944).
Schulkin, J., Roots of Social Sensitivity and Neural Function (Cambridge, MA:
MIT Press, 2000).
Schultz, W., P. Dayan, and P. R. Montague, “A Neural Substrate of Prediction and
Reward,” Science 275 (1997):1593–1599.
Seeley, Thomas D., “Honey Bee Colonies are Group-Level Adaptive Units,” The
American Naturalist 150 (1997):S22–S41.
Segerstrale, Ullica, Defenders of the Truth: The Sociobiology Debate (Oxford:
Oxford University Press, 2001).
Selten, Reinhard, “In Search of a Better Understanding of Economic Behavior,”
in Arnold Heertje (ed.) The Makers of Modern Economics, vol. 1 (Harvester
Wheatsheaf, 1993).