Review Essay
Reality Chunking
David Roden
DeLanda, Manuel (2011), Philosophy and Simulation: The Emergence of Synthetic Reason, London:
Continuum, 226 pp.
Until recently, most post-Kantian continental philosophers were default anti-realists and anti-
naturalists. Continental anti-realisms typically reduced the objectivity of a thing to a relation internal
to some transcendental organizing principle such as subjectivity or discourse. Anti-realism or
“correlationism” (employing Quentin Meillassoux’s handy coinage) also proves a convenient foil for
the naturalisms propounded by analytic philosophers like Daniel Dennett, Jerry Fodor and Paul
Churchland (Churchland 1986; Dennett 1995; Fodor 1990). Naturalists hold that philosophical
accounts of things should be constrained by the findings of empirical science. Anti-naturalists reply
that the intelligibility of scientific claims depends on a transcendental organizer whose role in
“making” objectivity renders it science-proof (Roden 2006: 75).
Anti-realism has become less entrenched in recent Continental thought due partly to the polemics of
“speculative realists” such as Meillassoux and Graham Harman (Meillassoux 2006). However,
Manuel DeLanda remains a singleton on this realism-friendly scene; for, unlike Harman or
Meillassoux, he espouses a naturalism derived from a materialist reconstruction of Deleuze’s account
of the relationship between the virtual and the actual. In Deleuzean philosophy the virtual/actual
replaces the more familiar modal distinction between actuality and possibility. Deleuze claims that if
the actual is merely the instantiation of the possible, then it resembles the thing as represented. For
Deleuze, this renders the category of existence nugatory since “all it does is double like with
like” (Deleuze 1994: 212). It also fails to address the relationship between concrete individuals and
their conditions of possibility since the latter will always be too capacious to generate “real
experience in its quality, intensity, and specificity” (Lord 2008).
The virtual, on the other hand, does not harbor the actual as a conceptual possibility but expresses it as
an effect of dynamical differences or “intensities” (214). However, the virtual needs to be organized to
explain the regularity and structure of the actual. Deleuze calls these organizing principles “Ideas” or
“multiplicities”. Ideas are not concepts or representations but abstract dynamisms constraining the
formation of individuals without programming how they form in particular environments (185).
DeLanda thinks the virtual/actual distinction can be made more tractable by mapping it onto concepts
found in the sciences of complexity and non-linear dynamics. The concept of “intensity” becomes
naturalized as a gradient or rate of change. “Ideas” are construed in mathematical terms as the
singular points or “singularities” (attractors, limit cycles, etc.), which represent the tendencies of
physical systems to follow certain families of paths through a space of possible states rather than
others (DeLanda 2004: 80-81). The distribution of intensities associated with these tendencies can be
represented as a vector field or “flow” associating each point in the system’s state space with an
instantaneous gradient.
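To see concretely how a singularity organizes behaviour, consider a minimal sketch (mine, not DeLanda's) of the simplest possible case: a one-dimensional vector field with a single point attractor. Whatever gradient the system starts from, its trajectory is drawn toward the same state.

```python
# Minimal sketch (not from the book): the vector field dx/dt = -k * x has a
# single point attractor (singularity) at x = 0. Trajectories launched from
# very different initial conditions all converge on it.

def simulate(x0, k=1.0, dt=0.01, steps=2000):
    """Integrate dx/dt = -k * x by the Euler method, starting from x0."""
    x = x0
    for _ in range(steps):
        x += -k * x * dt  # the instantaneous gradient ('intensity') at x
    return x

if __name__ == "__main__":
    for x0 in (-50.0, -1.0, 0.5, 100.0):
        print(f"start {x0:>7.1f} -> final state {simulate(x0): .6f}")
    # Every run ends (approximately) at 0: the attractor shapes all of the
    # trajectories alike, whatever their starting point.
```

The same topological description (a single stable fixed point) would apply to any system whose flow has this form, which is the kind of mechanism-independence DeLanda appeals to.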
DeLanda’s interpretation thus provides the basis for a kind of Platonic materialism in which Ideas are
pre-individual conditions of possibility for individuals (DeLanda 2004: 80).
However, this is not a “microphysicalism” for which whirlwinds and cats are just the bundled
behaviours of fundamental physical entities. Like Deleuze, DeLanda wishes to allow for a creative
world in which causal interactions within systems composed of individuals can generate historically
novel or “emergent” property kinds. For example, the phenomena associated with life and minds are
generated by mechanisms whose components are not intrinsically alive or minded. Since DeLanda’s
philosophical naturalism eschews transcendent organizing principles in favour of an ontology of
actual particulars and virtual tendencies, these emergent layers of the real must be physically
explicable in these terms. DeLanda’s materialism must, in short, reconcile a “flat ontology” of
causally relatable individuals with an explanatory emergentist account of novel capacities and
properties (DeLanda 2006: 28; DeLanda 2004: 58).
This is not an easy undertaking. Any materialist account of emergence must explain why putative
emergent properties like consciousness or state organization “upwardly-depend” or “supervene” 1 on
facts about their generative mechanisms (basal conditions). Without upward dependence, an emergent
property cannot be said to emerge from the behaviour of entities that do not possess that property. As
Jaegwon Kim puts it: “If the connection between pain and its neural substrate were irregular,
haphazard, or coincidental, what reason could there be for saying that pain ‘emerges’ from that neural
condition rather than another?” (Kim 2006: 550).
However, classical emergentists like C. D. Broad typically characterized emergent properties as
recalcitrant to explanation or prediction, while retaining the supervenience condition (Ibid: 552). If
consciousness is classically emergent, its upward-dependence on the dynamics and structure of brains
and bodies is mysterious. There is no reason to prefer materialist emergentism to a dualist or pluralist
ontology for which non-physical properties are not generated by their basal conditions but depend on
them as a matter of brute fact.
Philosophy and Simulation: The Emergence of Synthetic Reason attempts to square this circle by
showing how emergent properties can be explained without impugning the ontological novelty of
emergent kinds. To this end, DeLanda argues that the classical emergentist eschewal of explanation
was an avoidable cul-de-sac resulting from inadequate understanding of the partial role of mechanism
and deduction in scientific explanation.2 DeLanda claims that emergent properties are those whose
explanation requires two components, neither of which suffices alone (DeLanda 2011: 13-15). The first
of these is the specification of a mechanism that causes the putative emergent behaviour (for example,
a system of chemical reactants far from equilibrium, or a population of individuals in a pre-state
society). The second corresponds to the Deleuzean Idea: the specification of singularities reflecting
that same system’s tendency to slip into distinctive portions of its state space.
DeLanda argues that we should go further than simply positing these singularities for descriptive or
predictive purposes and regard them as “real and efficient” shapers of the world, just like causes or
intensities (DeLanda 2011: 19). This ontological posit explains why physically disparate systems like
chemical clocks and convection cells converge in their emergent behaviour (Ibid: 17). Since
mechanism independence is ontological, not epistemic, there is no conflict between the autonomy of
emergent properties and the explicability of their dependence on basal conditions.
Computer simulations are structurally and physically different from the systems that they simulate.
But an adequate simulation of a system with emergent behaviour should exhibit qualitatively similar
behaviour. Thus, for DeLanda, successful computer simulation is evidence for the “autonomous
existence of topological singularities” that his emergentism requires (DeLanda 2011: 19). For
example, the simulation of a thunderstorm described in Chapter One (“The Storm in the Computer”)
does not emulate the behaviour of air and water molecules on the surface of an ocean but models the
convection-generating micro-interactions as an interface between ideal fluids (DeLanda 2011: 16).
Likewise, functional differentiation in animal neural networks can be simulated without coding
intracellular flows of ions for each “software neuron”. This is because the learning processes which
partition these networks into representational units depend on mechanism-independent principles such
as the “Hebb rule” relating synaptic strength to the frequency of joint stimulation (“neurons that fire
together, wire together”).3
Philosophy and Simulation follows a recursive structure. Each chapter describes emergent entities or
processes that are taken for granted as “fuel” for the higher-scale “emergences” in the succeeding
chapter. Each neatly exemplifies the two-component model: specifying the components and
organization of some generative mechanism, then describing the singularities or tendencies revealed
by computational models of the system. The temperature gradients discussed in Chapters One and
Two aggregate the prebiotic molecules that form the raw material for the self-replicating
macromolecules discussed in Chapter Three. With the advent of replication we can add emergent
intensities such as fitness gradients. Likewise, the simple neural populations characteristic of insect
bodies provide the basis for mammalian and avian neural nets equipped for object recognition, scene
analysis and episodic memory (Chapters Six and Seven).
As a theory of emergence, DeLanda’s account is interesting independently of its naturalistic gloss on
Deleuze, and it is expounded here with all his trademark clarity and verve. However, it’s worth
considering a few of the problems that it confronts.
Motivating ontological commitment to mechanism-independent structure is crucial for DeLanda’s
emergentism since, as we have seen, this alone furnishes the autonomy of the emergent phenomenon.
But if this autonomy is ontological rather than epistemic or descriptive, the virtual must be an
ingredient of material reality. The behaviour of emergent systems must be micro-governed by
whatever generative mechanisms produce them yet autonomously macro-governed by their Ideal
singularities (DeLanda 2011: 19).
Suppose this is right. Let us consider the obvious charge that commitment to singularities is
incompatible with “flatness” because it just posits another bunch of transcendent entities (Ideas/
Multiplicities) to shape reality from the heights. DeLanda properly anticipates this objection:
Do they [singularities] exist, for example, as transcendent entities in a world beyond that of
matter and energy? Or are they immanent to the material world? If all the matter and energy
of the universe ceased to exist, would singularities also disappear (immanent) or would they
continue to exist (transcendent)? (DeLanda 2011: 19-20; see also 202)
He argues that they are immanent on the grounds of their formal irreducibility to and existential
dependence on instantiating mechanisms. Irreducibility follows because singularities can be studied
mathematically without assigning dimensions to their possibility spaces. Existential dependence on
matter/energy, on the other hand, follows from the fact that a singularity requires an actual gradient
(“any gradient”) to be actualized.
However, this argument is incomplete. The instantiation claim presupposes what is at issue, while
irreducibility is, at best, necessary but not sufficient for immanence. A mathematical Platonist could
claim that singularities are determined by the properties of the objective mathematical structures to
which they belong (e.g. the topological attractor structure corresponding to a family of maps or
differential equations), insisting that these transcend the material systems whose behaviour is
isomorphic to them.
The Platonist would still need to explain why these structures have diverse physical isomorphs or
declare the fact of isomorphism brute. In the latter case, arguably, DeLanda can respond that his
account is superior because it explains why emergent behaviour converges in disparate systems while
the Platonist’s does not. However, this claim stands or falls with the explanatory virtues of this
ontology.
This may not be a problem for anti-naturalist Deleuzeans like James Williams, for whom belief in the
virtual is motivated by a transcendental deduction of the conditions of experience, not by the efficacy
of a class of scientific explanations (Williams 2006: 101). However, it is a challenge for a naturalist
like DeLanda. Just what does ontological commitment to singularities buy us that using them as
descriptive or predictive tools does not? The objection waiting in the wings is that while singularities
may be useful for describing the behaviour of complex systems, they don’t actively cause stuff to
happen. Intensities do cause stuff to happen, but intensities are actual-real rather than virtual-real.
They are particulars (if not individuals).
This issue can be explored by considering the relation of DeLanda’s theory of emergence to the fertile
account of assemblages developed in A New Philosophy of Society. An assemblage such as an
organism or an economic system is an emergent but decomposable whole. Unlike a totality (which
holistically determines the natures of its “parts”) an assemblage’s parts can follow “deterritorialized”
careers. “Pulling out a live animal’s heart will surely kill it but the heart itself can be implanted into
another animal and resume its regular function” (DeLanda 2011: 184). Nonetheless, the emergent
properties of a given assemblage depend “on the actual exercise of the capacities of its parts”.
If this dependency is construed as supervenience (Note 1) then DeLanda’s ontology seems to confront
the “causal exclusion problem” for emergent properties anatomized by Kim. Suppose facts about
system W’s emergent properties supervene on facts about its micro-constituents p1, p2… pn. Any fact
P belonging to the supervenience base of W’s emergent facts will suffice for an emergent fact
supervening on this base. Suppose also that a given emergent fact M of W suffices to cause a later emergent fact M*
by causing its basal condition P* (some state of p1, p2… pn in the supervenience base of M*). If P
suffices for M but not vice versa (upwards dependence), it seems counter-intuitive to claim that P
could not have caused P* on its own. So responsibility for inter-level causation between emergent
properties M, M* can be devolved onto their basal conditions, “making the emergent property M
otiose and dispensable as a cause of P*” (Kim 2006: 558).
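Schematically, and in my own notation rather than Kim's, the exclusion worry can be set out as follows (read “⇒” as “suffices for”):

```latex
% A compact paraphrase of the exclusion argument (my rendering, not a quotation):
\begin{enumerate}
  \item $P \Rightarrow M$ \hfill (supervenience: the basal fact suffices for the emergent fact)
  \item $M$ causes $M^{*}$ by causing its basal condition $P^{*}$ \hfill (downward causation)
  \item By (1), $P$ already suffices for $M$, so $P$ alone could have caused $P^{*}$
  \item Hence $M$ is dispensable as a cause of $P^{*}$ \hfill (exclusion)
\end{enumerate}
```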
The causal exclusion argument does not directly threaten the ontology of the virtual. However, it
threatens the flat ontological assumption that assemblages have causal autonomy over and above their
microstructures. If so, then there are no assemblages and, it seems, the Deleuzean virtual has little to
explain.
There are strategies by which one might de-fang the causal exclusion argument. Suppose
supervenience runs symmetrically from properties at higher to lower scales as well as from lower to
higher (Hüttemann 2004: 71). No change in emergent facts without changes in basal facts (upwards
supervenience) is compatible with no changes in basal facts without changes in emergent facts
(downwards supervenience). If asymmetric supervenience is what motivates causal exclusion then
symmetric supervenience undermines a key premise in the causal exclusion argument.4
However, it is not clear that DeLanda would want to commit to symmetrical supervenience because
he attaches great ontological and epistemic significance to the claim that emergent properties are
stable against significant micro-level differences. Science, he argues, is possible on the condition that
we can chunk stabilities at a given level without having to model all the way down (2011: 14).
A more congenial avoidance strategy might be furnished by an account of how wholes exercise “top
down” influence on how the capacities of their components are actualized – assuming this can be
brought to bear on the virtual. While DeLanda has not, to my knowledge, discussed supervenience, he
is committed to the existence of top-down as well as bottom-up causality – a position he explicates in
terms of the distinction between properties and capacities (see, for example, DeLanda 2010b: 68-70).
The properties of a thing are necessarily actualized, but the actualization of capacities is context-
sensitive (DeLanda 2011: 4).
For example, Chapter Eleven of Philosophy and Simulation considers the problem space for the
emergence of archaic states from simpler chiefdoms in which wealth and status differences were
disseminated in a less hierarchical manner. One explanation for the stratified forms found in complex
chiefdoms or proto-states is that the relaxation of incest prohibitions on marrying close relatives
would have allowed persistent concentrations of wealth and status – an explanation supported by
multi-agent simulation (DeLanda 2011: 172). So while an accretion of agricultural wealth has
capacities for distribution between or within lineages, there are critical parameters determining which
of these is actualized.
So parameterized constraints (like incest prohibition) or structural properties (like the presence or
absence of interconnections between parts of a mechanism) can activate manifestations of component-
capacities, explaining how the behaviour of components depends on the assemblages to which they
belong.
Is context sensitivity enough to motivate claims for higher-level autonomy?
The natural role of the virtual is in the specification of complex capacities, so this offers some hope.
However, both context sensitivity and behavioural novelty of the kind that supports emergentist
claims are exhibited in very simple cellular automata like John Conway’s Game of Life (Life) as
much as in real systems (DeLanda 2011: Chapter Two). Life is a two-dimensional array of cells, each
of which can be “Alive” (On) or “Dead” (Off) at a given time step. The states of the cells are
determined by three simple rules:
1) A dead cell with exactly three live neighbors becomes alive on the next time step.
2) A live cell with two or three live neighbors stays alive.
3) In all other cases a cell dies or remains dead.
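For concreteness, the rules can be written down directly; the following is a minimal sketch of an implementation (mine, not drawn from the book), representing the grid as a set of live-cell coordinates.

```python
# Minimal sketch of Conway's Game of Life (not from the book). The grid is
# represented as a set of (row, col) coordinates of live cells, and the three
# rules above are applied simultaneously to produce the next generation.

from itertools import product

def neighbours(cell):
    """The eight cells adjacent to `cell`."""
    r, c = cell
    return {(r + dr, c + dc)
            for dr, dc in product((-1, 0, 1), repeat=2) if (dr, dc) != (0, 0)}

def step(live):
    """Apply the birth/survival/death rules once to the set of live cells."""
    candidates = live | {n for cell in live for n in neighbours(cell)}
    next_live = set()
    for cell in candidates:
        count = len(neighbours(cell) & live)
        if cell in live and count in (2, 3):    # rule 2: survival
            next_live.add(cell)
        elif cell not in live and count == 3:   # rule 1: birth
            next_live.add(cell)
        # rule 3: in all other cases the cell dies or stays dead
    return next_live

if __name__ == "__main__":
    # A 'glider': a configuration that reproduces itself, shifted diagonally
    # by one cell, every four steps.
    state = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
    for _ in range(4):
        state = step(state)
    print(sorted(state))
```

Running this on the “glider” shows the three micro-rules generating a persistent higher-level pattern of exactly the kind discussed below.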
These rules pass for fundamental physics in the Life World by specifying context-sensitive behaviour
at the micro-level of individual cells. Computer implementations of Life show that complex and
unpredictable transitions between cell-configurations can “emerge” over their successive iteration
(Bedau 1997). In all cases these involve higher-level structures activating the capacities of both
individual cells and the higher-scale structures they compose (the only kind of downward causation
that DeLanda permits). Yet it is at least debatable whether surprisingness and complexity in Life
provide grounds for ontological or “strong” emergence. As Mark Bedau writes: “There is no question
that every event and pattern of activity found in Life, no matter how extended in space and time and
no matter how complicated, is generated from the system’s microdynamic – the simple birth-death
rule” (Ibid: 381). But if downward causation in Life operates much as in physical reality, the context
sensitivity of capacities in the actual world is just the way its microdynamic is expressed in particular
circumstances. In that case, the best theory of emergence on offer would resemble Bedau’s theory of
“weak emergence”, which characterizes emergent properties epistemically as underivable by means
other than simulation (Ibid: 377-378).
Thus the efficacy of computer simulation alone does not support an emergentist doctrine stronger than
“weak emergence”. Either the fact of a fundamental microdynamic (and thus a fundamental science)
does not exclude higher-level forms of causation, or it needs to be shown that our world is
fundamentally unlike Life in lacking a microdynamic. It may be that the latter position can be
motivated by pure philosophical argument of the kind favoured by Williams or by some kind of
naturalistic argument from current physics (see Ladyman and Ross 2007). However, it’s not clear that
this position is compatible with materialism, as this is usually understood. Thus, despite the
richness and philosophical ingenuity of this book, it is clear that it leaves some basic metaphysical
questions unanswered. Given the philosophical power and scope of his output to date, it will be
fascinating to see how DeLanda addresses these in future work.
References
Bedau, Mark (1997), ‘Weak Emergence’, Philosophical Perspectives, 11, Mind, Causation, and
World, pp. 375-399.
Churchland, Paul (1986), Scientific Realism and the Plasticity of Mind, Cambridge: Cambridge University Press.
DeLanda, Manuel (2004), Intensive Science and Virtual Philosophy, London: Continuum.
DeLanda, Manuel (2006), A New Philosophy of Society: Assemblage Theory and Social Complexity, London: Continuum.
DeLanda, Manuel (2010a), ‘Emergence, Causality and Realism’, in Levi Bryant, Nick Srnicek and Graham Harman (eds), The Speculative Turn: Continental Materialism and Realism, Melbourne: re.press.
DeLanda, Manuel (2010b), Deleuze: History and Science, Atropos Press.
DeLanda, Manuel (2011), Philosophy and Simulation: The Emergence of Synthetic Reason, London: Continuum.
Deleuze, Gilles (1994), Difference and Repetition, Paul Patton (trans.), London: Athlone Press.
Dennett, Daniel (1995), Darwin’s Dangerous Idea, London: Penguin.
Fodor, Jerry A. (1990), A Theory of Content and Other Essays, Cambridge, MA: MIT Press.
Gaffney, P (2010), ‘The Metaphysics of Science: An Interview With Manuel DeLanda’, in Gaffney
(ed.) The Force of the Virtual, Minneapolis, University of Minnesota Press.
Hüttemann, Andreas (2004), What’s Wrong with Microphysicalism? London: Routledge.
Kim, Jaegwon (1984), ‘Concepts of Supervenience’, Philosophy and Phenomenological Research 45(2), 153-176.
Kim, Jaegwon (2006), ‘Emergence: Core Ideas and Issues’, Synthese 151(3), 547-559.
Ladyman, James and Ross, Don (2007), Every Thing Must Go: Metaphysics Naturalized, Oxford: Oxford University Press.
Lord, B. (2008), ‘The Virtual and the Ether: Transcendental Empiricism in Kant’s Opus Postumum’,
Journal of the British Society for Phenomenology 39(2), 147-166.
Meillassoux, Quentin (2006), After Finitude: An Essay on the Necessity of Contingency, Ray Brassier (trans.), New York: Continuum.
Roden, David (2006), ‘Naturalising Deconstruction’, Continental Philosophy Review 38, 71-88.
Williams, James (2006), ‘Science and Dialectics in the Philosophies of Deleuze, Bachelard and DeLanda’, Paragraph 29(2), 98-114.
1 The notion of supervenience is used by non-reductive materialists to express the dependence of mental
properties on physical properties without entailing their reducibility to the latter. Informally: M properties
supervene on P properties if a thing’s P properties determine its M properties. If aesthetic properties supervene
on physical properties, then if x is physically identical to y and x is beautiful, y must be beautiful. Supervenience
accounts vary with the force of the entailments involved. Weak supervenience requires that no two things
can differ in M properties without differing in P properties, but does not require that the same determination
relation hold necessarily (across all possible worlds). Strong supervenience requires that, necessarily, having the same P
properties entails having the same M properties. Thus if aesthetic properties strongly supervene on physical
properties, no two things in any possible world can have the same physical properties where one is beautiful and
the other not. See Kim (1984).
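For readers who want the modal contrast stated precisely, the standard formulations (roughly following Kim 1984; the rendering is mine, with 𝒫 and ℳ the families of base and supervening properties) are:

```latex
% Weak supervenience: within any single world, P-indiscernible things
% are M-indiscernible.
\Box\,\forall x\,\forall y\,
  \bigl[\,\forall P \in \mathcal{P}\,(Px \leftrightarrow Py)
    \rightarrow \forall M \in \mathcal{M}\,(Mx \leftrightarrow My)\,\bigr]

% Strong supervenience: the determining connection itself holds of necessity,
% i.e. across possible worlds.
\Box\,\forall x\,\forall M \in \mathcal{M}\,
  \bigl[\,Mx \rightarrow \exists P \in \mathcal{P}\,
    \bigl(Px \wedge \Box\,\forall y\,(Py \rightarrow My)\bigr)\,\bigr]
```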
2 It should be emphasized that DeLanda is far from being alone among contemporary philosophers in arguing for
the compatibility of emergence and mechanistic explanation.
3 While the chemistry underlying cellular behaviour is complex, Hebbian learning can be programmed as a
simple recurrence equation that relates the increase of an interneuron weight on a subsequent time step to the
joint activations of the two neurons at the earlier time step:

∆_jk(t+1) = µ x_j(t) x_k(t)

where µ is a scaling constant, ∆_jk(t+1) is the change from time step t to t+1 in the weight connecting the two
neurons j and k, and x_j(t) and x_k(t) are their activation levels at t.
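As an illustration only (my sketch, not code from the book), the recurrence can be implemented in a few lines:

```python
# Minimal sketch of the Hebbian recurrence in note 3 (not from the book):
# the weight linking neurons j and k grows in proportion to their joint
# activation, so units that "fire together" become more strongly coupled.

def hebbian_update(w_jk, x_j, x_k, mu=0.1):
    """Return the weight at t+1, given the weight and activations at t."""
    delta = mu * x_j * x_k      # corresponds to delta_jk(t+1) = mu * x_j(t) * x_k(t)
    return w_jk + delta

if __name__ == "__main__":
    w = 0.0
    for t in range(5):          # two co-active neurons: the weight grows each step
        w = hebbian_update(w, x_j=1.0, x_k=1.0)
        print(f"t = {t + 1}: w_jk = {w:.1f}")
```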
4 There may, of course, be ways of casting the causal exclusion argument that do not hinge on asymmetric
supervenience.