Continental Philosophy Review (2025) 58:99–116

https://doi.org/10.1007/s11007-024-09671-1

Haugeland’s understanding: on artificial intelligence and existential ontology

Joseph Lemelin1

Accepted: 27 December 2024 / Published online: 15 March 2025


© The Author(s), under exclusive licence to Springer Nature B.V. 2025

Abstract
This article revisits John Haugeland’s early work on natural language understand-
ing to address contemporary debates about large language models and their capacity
for genuine understanding. Through a reinterpretation of Haugeland’s essay “Under-
standing Natural Language” via key notions in the thought of Martin Heidegger, the
article argues that world-disclosing care and the capacity for taking responsibility—
what Haugeland calls “giving a damn”—are the conditions of possibility for under-
standing. By contrasting additive and transformative approaches to understanding,
the paper highlights the ontological stakes underpinning contemporary debates
about understanding in AI. It concludes by situating the framework Haugeland calls
“existential holism” as an overall critique of additive theories.

Keywords Haugeland · Heidegger · Large language models · Artificial intelligence · Understanding

1 Introduction

There was a time when some philosophers were certain about what computers
can and can’t do. Now the situation is not so clear.1 The recent boom of artificial
intelligence (AI) has exceeded the expectations of many, boosters and critics alike,
prompting fresh philosophical reflection, critique, and acrimony. At the center of
controversy are large-language models (LLMs) that now regularly appear as if they
are competent language users: LLMs generate written work often indistinguish-
able from that composed by humans and can achieve high scores on exams involv-
ing complex reasoning and interpretive skills. Amidst the present AI technoscape,
questions naturally arise about whether LLMs actually understand their linguistic

1 Here I am referring primarily, but not exclusively, to Hubert Dreyfus’s infamous critique in What Computers Still Can’t Do (Cambridge, MA: MIT Press, 1992).

* Joseph Lemelin, [email protected]
1 Stony Brook University, Stony Brook, New York, USA


activity and the world it represents. In this essay, I revisit John Haugeland’s early
work on natural language understanding to open an underlying ontological dimen-
sion to questions about AI’s capacity for understanding.2
LLMs are AI systems that have been trained on billions (sometimes even trillions)
of tokens (i.e., characters, words, and parts of words) to automatically compose texts
in natural language. They belong to a wave of deep neural networks that learn to
track correlations among words, phrases, and sentences, generating essays and hold-
ing conversations via prediction methods driven by complex statistical modeling.
The predictive power of LLMs stems from their vast training sets and enormous
number of parameters (again, billions or trillions). LLM-based applications such as
ChatGPT and Claude are increasingly becoming aspects of everyday life, and recent
developments have shown that their performance quality increases as models scale.
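As a purely illustrative sketch of the autoregressive, prediction-driven setup just described (my own toy example in Python, not the architecture of any actual LLM), consider the following, in which bigram counts over a tiny corpus stand in for learned statistical structure:

    # Illustrative only: a toy autoregressive next-token loop. The bigram counts
    # stand in for the learned statistical structure of real LLMs, which instead
    # rely on transformer networks with billions of parameters.
    import collections
    import random

    corpus = "the raincoat was still wet so I hung the raincoat out to dry".split()

    # Count bigram frequencies over the toy corpus.
    counts = collections.defaultdict(collections.Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def predict_next(token):
        """Sample the next token in proportion to observed bigram frequency."""
        followers = counts.get(token)
        if not followers:
            return random.choice(corpus)  # fall back to any known token
        tokens, freqs = zip(*followers.items())
        return random.choices(tokens, weights=freqs, k=1)[0]

    # Generate a short continuation, one predicted token at a time.
    context = ["the"]
    for _ in range(6):
        context.append(predict_next(context[-1]))
    print(" ".join(context))

However crude, the sketch makes the relevant point vivid: at each step the system emits whatever token the statistics favor, with nothing in the procedure that depends on the tokens meaning anything to it.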
Traditionally, there are two types of positions that tend to emerge when assess-
ing AI’s ability to engage in a humanlike cognitive activity such as understanding.
One type affirms that a system’s behavior is all that is required for judging whether
it is exercising some capacity or not—the proof is in the outputting. Views under
this category claim that if AI produces coherent, novel responses to conversational
prompts, then it really does understand the meaning of its inputs and outputs. Clas-
sic expressions of this type of position can be found in Alan Turing’s formulation
of the imitation game and Daniel Dennett’s articulation of the intentional stance.3
Another type of position sees outputs as necessary but insufficient grounds for deter-
mining whether AI really is engaging in some activity, even if the system’s behavior
appears functionally equivalent to that of a cognitively competent agent. Views in
this vein claim, in one way or another, that AI does not actually understand what it is
doing despite behavioral outputs that seem to indicate otherwise. John Searle’s infa-
mous Chinese Room Gedankenexperiment is an example of this type of position, but
a variety of different arguments fall under this category as well.4
Contemporary debates about LLMs’ capacity for understanding follow suit.5 On
one side are those who argue that LLMs’ ability to engage in humanlike conversa-
tions, apply concepts correctly, and develop a common-sense physics is a genuine
expression of understanding. They are skeptical about whether there is a meaningful
distinction between “real understanding” and “fake understanding,” affirming that

2 John Haugeland, “Understanding Natural Language,” in Having Thought: Essays in the Metaphysics of Mind (Cambridge, MA: Harvard University Press, 1998), 46–61; originally published in Journal of Philosophy 76 (1979): 619–632.
3 Alan Turing, “Computing Machinery and Intelligence,” Mind 59 (1950): 433–460; Daniel Dennett, The Intentional Stance (Cambridge, MA: MIT Press, 1987).
4 John Searle, “Minds, Brains, and Programs,” Behavioral and Brain Sciences 3 (1980): 417–457; Dreyfus’s response to Deep Blue’s chess victory over Garry Kasparov is also a case of this kind of view. See Dreyfus’s correspondence with Dennett, “Did Deep Blue’s Win Over Kasparov Prove That Artificial Intelligence Has Succeeded?,” in Mechanical Bodies, Computational Minds: Artificial Intelligence from Automata to Cyborgs, ed. Stefano Franchi and Güven Güzeldere (Cambridge, MA: MIT Press, 2005), 265–279.
5 For a more detailed overview of the literature, see Melanie Mitchell and David C. Krakauer, “The Debate Over Understanding in AI’s Large Language Models,” Proceedings of the National Academy of Sciences 120 (2023): e2215907120.


reaching humanlike competency across domains—and even consciousness—is just a
matter of scaling-up the systems.6 On the other side are those who argue that LLMs’
lack of sociality7 or merely formal linguistic facility8 precludes actual understanding,
warning against consequences of anthropomorphizing LLMs’ behavior.9 And some
on this latter side further argue that LLMs are more like compressed archives, librar-
ies, or encyclopedias than embodied cognitive agents who can understand.10
Views on both sides are often what I call “additive.”11 Some tend to assume that
cognitive powers like understanding can be achieved by scaling-up the networks and
datasets—that is, by adding more parameters to networks and examples to training
sets. Similar is the belief that the capacity for understanding can be achieved by
adding modules together to create ever more complex multimodal systems. Further,
another sort of prevalent additive approach assumes that cognitive powers exist on
a continuum. Here the idea is that meeting higher benchmarks means taking steps
toward “general” intelligence.12 Lastly, views on the second side are additive if they
posit that what’s needed to realize understanding is just some special feature placed
atop underlying machinery otherwise remaining indistinguishable from that in sys-
tems that do not understand.
Haugeland’s account is particularly relevant for responding to standard additive
positions. His basic idea is as follows: Giving a damn is what makes the differ-
ence between having the capacity for understanding and lacking it, and as he quips,
“the trouble with artificial intelligence is that computers don’t give a damn.”13 By

6 Blaise Agüera y Arcas, “Do Large Language Models Understand Us?,” Daedalus 151 (2022): 183–197. For a contemporary application of the intentional stance to LLMs, see Harvey Lederman and Kyle Mahowald, “Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs,” Transactions of the Association for Computational Linguistics 12 (2024): 1087–1103.
7 Jacob Browning, “Personhood and AI: Why Large Language Models Don’t Understand Us,” AI & Society 39 (2024): 2499–2506.
8 Emily M. Bender and Alexander Koller, “Climbing Towards NLU: On Meaning, Form, and Understanding in the Age of Data,” Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ed. Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel Tetreault (Stroudsburg, PA: Association for Computational Linguistics, 2020), 5185–5198.
9 Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans (New York: Farrar, Straus and Giroux, 2019); Murray Shanahan, “Talking about Large Language Models,” Communications of the ACM 67 (2024): 68–79.
10 Ted Chiang, “ChatGPT is a Blurry JPEG of the Web,” The New Yorker, February 9, 2023, https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web; Eunice Yiu, Eliza Kosoy, and Alison Gopnik, “Transmission Versus Truth, Imitation Versus Innovation: What Children Can Do That Large Language and Language-and-Vision Models Cannot (Yet),” Perspectives on Psychological Science 19 (2024): 874–883; cf. Lederman and Mahowald, “Are Language Models More Like Libraries or Like Librarians?”.
11 As I explain below, I adapt the notions of “additive” and “transformative” from Matthew Boyle, “Additive Theories of Rationality: A Critique,” European Journal of Philosophy 24 (2016): 527–555.
12 Extending Dreyfus’s idea of “first-step fallacy,” Melanie Mitchell identifies this as the first fallacy of AI. See Melanie Mitchell, “Why AI Is Harder than We Think,” in Mind Design III: Philosophy, Psychology, and Artificial Intelligence, ed. John Haugeland, Carl F. Craver, and Colin Klein (Cambridge, MA: MIT Press, 2023), 175–187 and Hubert Dreyfus, “A History of First-Step Fallacies,” Minds & Machines 22 (2012): 87–99.
13 Haugeland, “Understanding,” 47.


Haugeland’s lights, then, even contemporary AI just doesn’t understand. His contri-
bution lies in raising the question of what it means to have a world and showing that
debates about AI’s capacity for understanding are as much an ontological issue as a
cognitive one: understanding and having a world are, as he would say, intelligible
only as two sides of the same coin.14 What matters for the plausibility of Hauge-
land’s idea is spelling out whatever “giving a damn” means. Nonetheless, he offers
only an abbreviated outline of his key notions in his discussion of AI and natural
language, leaving them in need of further articulation. I argue that reading Hauge-
land’s early essay “Understanding Natural Language” alongside his later interpreta-
tion of Heidegger on death and finitude allows us to make those notions explicit.
Doing so uncovers a temporal aspect to what he calls “existential holism” that is
crucial for grasping his view about understanding.15 In order to make heads or tails
of existential holism, we must take it as the co-constitutive integration of three other
holisms Haugeland discusses in the essay: holism of intentional interpretation,
common-sense holism, and situation holism. Hermeneutics—which he seemingly
dismisses in his discussion—turns out to be central because existential holism has
the shape of a hermeneutic circle with an ontological spin. To conceive the holisms
as additive, which is the position Haugeland’s account initially appears to endorse,
would be to take them as stipulating different, sequential benchmarks to be achieved,
one after the other, until those achievements add up to full-blown understanding. In
contrast, I argue that existential holism identifies the transformative structure that is
the condition for the possibility of understanding in any intelligent system. On my
reading, the three holisms turn out to be three moments in the fundamental structure
Heidegger calls care [Sorge], the being of Dasein. I define what I mean by trans-
formative, in opposition to additive, in the course of my account below. Although
Haugeland prepares the way for a critique of additive positions, he neglects to make
explicit that what is fundamentally at issue in transformative “giving a damn”—that
is to say, in caring—is time. And so, re-enlivening Haugeland’s account calls for
placing it in the context of existential-hermeneutic temporality that he only gestures
toward in his original essay.
While understanding [Verstehen] is a term of art for Heidegger,16 Haugeland
deploys the term in a more general way. The task of this essay is to articulate the
conditions for its possibility. In what follows, I first sketch the movement of Hauge-
land’s “Understanding Natural Language,” explaining holism of intentional interpre-
tation, common-sense holism, and situation holism each in turn. I then offer an inter-
pretation of existential holism in the context of “Truth and Finitude,” Haugeland’s
reading of Heidegger’s transcendental existentialism. Finally, I conclude with sug-
gestions aiming to carry forward the spirit, if not the letter, of Haugeland’s thinking.

14 John Haugeland, “Toward a New Existentialism,” in Having Thought: Essays in the Metaphysics of Mind (Cambridge, MA: Harvard University Press, 1998), 6.
15 John Haugeland, “Truth and Finitude: Heidegger’s Transcendental Existentialism,” in Dasein Disclosed: John Haugeland’s Heidegger, ed. Joseph Rouse (Cambridge, MA: Harvard University Press, 2013), 187–220.
16 See Martin Heidegger, Being and Time, trans. John Macquarrie and Edward Robinson (New York: Harper & Row, 1962), §§V.31–2ff.


2 Haugeland’s three holisms

Haugeland’s initial claim in “Understanding Natural Language” is seemingly uncontroversial: making sense of textual excerpts calls for placing the text within some
larger, relational whole. The question then is what conditions this larger, relational
whole must meet in order to yield the possibility for understanding. In the course of
the essay, Haugeland walks us through four different kinds of holism at work behind
the scenes of a system (artificial or natural) that might reasonably be said to under-
stand an excerpt of natural language: holism of intentional interpretation, common-
sense holism, situation holism, and existential holism. In his original essay, Hauge-
land is responding to attempts in classical, logic-based AI that proved to be brittle
and largely incapable of succeeding at the kinds of natural-language challenges he
outlines—the kind of systems he calls Good Old-Fashioned Artificial Intelligence
(GOFAI).17 I aim to show that his treatment holds a fortiori in the context of con-
temporary statistically-based predictive systems that do meet those challenges. In
this section I explain what he means by each of the first three holisms.

2.1 Holism of intentional interpretation

Holism of intentional interpretation is the kind of holism that holds of any consist-
ent system of rules or norms. The idea here is that within a closed system of rules,
those rules determine whether behavior is legal or illegal, sensible or irrational. It
is on the basis of the rules qua consistent pattern that one can then discriminate x
from y. In other words, holism of intentional interpretation offers the insight that the
rules precede what makes sense within a given framework: in order for one to judge
any one instance of behavior as this or that within the framework, one must appeal
to the whole set of rules. As Haugeland notes, this is a minimalistic type of holism
compatible with, for instance, semantic atomism committed only to the principle
that “the meaning of a sentence is determined by the meanings of its meaningful
components, plus their mode of composition.”18
Haugeland uses the familiar example of chess to illustrate holism of intentional
interpretation.19 How would one defend the claim that some alien system (e.g., a
computer) is actually playing chess as opposed to randomly moving pieces indis-
criminately across a board? Haugeland offers three minimal conditions: (1) Give
systematic criteria for identifying the object’s inputs and outputs. In other words,
one must be able to identify the object’s behavior in a consistent and coherent way,
from start to finish, even if what happens in between remains obscure. (2) Have a
reliable way of interpreting those behaviors as moves of a certain type. For instance,

17 John Haugeland, Artificial Intelligence: The Very Idea (Cambridge, MA: MIT Press, 1985), 112. GOFAI has since become a commonly-used term in AI discourse to refer to classical, symbolic systems.
18 Haugeland, “Understanding,” 49.
19 For a history of chess as a “model organism” in AI research, see Nathan Ensmenger, “Is Chess the Drosophila of Artificial Intelligence? A Social History of an Algorithm,” Social Studies of Science 42 (2012): 5–30.


a comprehensive manual for translating the unfamiliar outputs of the system into
familiar chess moves. (3) Tell a skeptic to sit down and engage the system in a game.
The skeptic will then have a front row seat for observing whether the alien system
is in fact really playing chess. Over the duration of the game, the skeptic has ample
opportunity to judge whether the input–output behavior of the system, as interpreted
in each case, makes sense, is legal or illegal, and perhaps even presents skillful and
elegant dexterity.
As what structures sensible behavior within a domain, the set of rules is given
in advance, must be appealed to as a whole, and freezes the range of meaningful
activity that can emerge within the overall framework. The skeptic’s judgment about
the sensibility of the system’s behavior on any one occasion must consider the sys-
tem’s prior inputs (e.g., board positions in a chess game) and outputs (actual moves
made during gameplay). For this reason, Haugeland also calls holism of intentional
interpretation a “prior holism.” Here’s how he explains the point: “A chess move is
legal and plausible only relative to the board position, which is itself just the result
of the previous moves. So one output can be construed sensibly as a certain queen
move, only if that other was a certain knight move, still another a certain bishop
move, and so on.”20 The way the whole is involved in making sense of each move
also has a metaphysical implication, for the set of rules that govern a domain fix
the being of the entities within that domain. Chess is a paradigmatic example of
how and why this is the case: what makes a rook a rook and bishop a bishop is not
any outward shape or material constitution, but the stable set of rules that mark off
the chess-world as a meaningful field populated by characteristic entities in distinc-
tion from other fields and entities. Nearly anything—bottle caps, porcelain figurines,
even mental images—can serve as chess pieces as long as the rules are in place and
the same things reliably serve as the same chess pieces during gameplay. Thus, any
particular attribution of meaningful behavior within the chess-world domain already
implicitly appeals to the entire system of rules that bestow being upon entities and
the game-player’s sequence of moves. In effect, by playing chess with the unfamiliar
system, the skeptical observer is assessing whether it has the ability to cope and get
along with entities of a certain sort.
The holism of intentional interpretation stipulates minimal conditions for holism
and is general enough to apply to the structural features of any formal system. More
specifically, Haugeland notes that it equally underlies what Quine calls radical
translation, and we can add that it is also behind what Dennett calls the intentional
stance.21 This kind of holism requires only that the criteria for judging outputs as
sensible be provided by a pre-given system of rules and a demonstrated pattern of
behavior acting in accordance with those rules. While Haugeland uses the example
of chess-playing to explain the holism of intentional interpretation, we can apply the
same set of criteria for evaluating the sensibility of contemporary LLMs’ behavior

20 Haugeland, “Understanding,” 48.
21 See W. V. O. Quine, Word and Object (Cambridge, MA: Harvard University Press, 1960), chap. 2; Dennett, The Intentional Stance, chap. 2.


roughly along the lines of a Turing test: as long as our unknown interlocutor reli-
ably produces outputs (answers and replies) that make sense in response to inputs
(articulated questions and prompts), what grounds could we possibly have to say
that the system we are engaging with is not demonstrating understanding? Hauge-
land states that he outlines holism of intentional interpretation only to differentiate it
from the others, but as I demonstrate below, it actually serves as an integral moment
of existential holism.

2.2 Common‑sense holism

Whereas the first kind of holism that Haugeland discusses principally concerns a
static system outlined in advance, the second kind of holism targets the whole
dynamic of implicit background knowledge that one brings to nearly any natu-
ral language conversation. The issue of background knowledge is a holistic issue
because “the whole of common sense is potentially relevant at any point” in the
conversation.22 Difficulties in getting computers to deal with ambiguity of meaning
and rapidity of worldly changes that matter for understanding everyday situations
are variously deemed the common-sense knowledge problem or the frame problem.
Computers’ seeming inability to manifest common sense is an issue that has preoc-
cupied engineers and critics alike.23 Researchers Gary Marcus and Ernest Davis—
long interested in the common-sense knowledge problem—describe the issue as
follows:
If you see a six-foot-tall person holding a two-foot-tall person in his arms, and
you are told they are father and son, you do not have to ask which is which. If
you need to make a salad for dinner and are out of lettuce, you do not waste
time considering improvising by taking a shirt [out] of the closet and cutting it
up. If you read the text, ‘I stuck a pin in a carrot; when I pulled the pin out, it
had a hole,’ you need not consider the possibility ‘it’ refers to the pin.24
Common sense of this sort underlies countless everyday natural-language
exchanges that pass without incident or reflection. Common sense is the sort of
knowledge that, by its nature, is left unarticulated and is made to be forgotten. AI
critics like Hubert Dreyfus were keen to point out that the enterprise of compre-
hensively cataloging bits of everyday knowledge and devising exhaustive rules for
how they relate is doomed from the outset, given that common-sense knowledge is
always in flux and implicit, recalcitrant in the face of precise systematic articulation.

22 Haugeland, “Understanding,” 49.
23 Common sense has presented challenges for AI since the field’s inception. See, for instance, John McCarthy, “Programs with Common Sense,” in Proceedings of the Symposium on the Mechanization of Thought Processes (London: HMSO, 1959), 75–91.
24 Ernest Davis and Gary Marcus, “Commonsense Reasoning and Commonsense Knowledge in Artificial Intelligence,” Communications of the ACM 58 (2015): 92–93. See also an opinion piece in which they summarize their views on LLMs, “GPT-3, Bloviator: OpenAI’s Language Generator Has No Idea What It’s Talking About,” MIT Technology Review, August 22, 2020, https://www.technologyreview.com/2020/08/22/1007539/gpt3-openai-language-generator-artificial-intelligence-ai-opinion/.


Haugeland’s point about common-sense knowledge is that it cannot be derived from grammar and a dictionary alone. He cites examples of sentences that appear
straightforward but are actually rife with potential misunderstandings upon further
reflection. Here are two:
I left my raincoat in the bathtub, because it was still wet.
Though his blouse draped stylishly, the toreador’s pants seemed painted on.25
The first is a case of an ambiguous antecedent, the second of metaphor. In order
to make sense of these sentences, one must know that raincoats are the kinds of
things that get hung out to dry and that wearing baggy pants could prove treacherous
in a bullfight. These bits of knowledge are filled-in by one’s own history, set of back-
ground assumptions, experiential physics, and distinctive cultural embeddedness.
Further, identifying which bits of background knowledge matter for understanding a
particular sentence or word appearing in that sentence is just as important as exclud-
ing those that don’t. For instance, does the raincoat’s color matter for grasping the
meaning of the first sentence? What about its position in the bathtub? As the exam-
ple of the ambiguous antecedent demonstrates, appealing to the formal, syntacti-
cal properties of the sentence alone is not sufficient for parsing its meaning. The
point is that in any act of textual interpretation, one appeals to an indefinite range of
implicit knowledge. Dynamic background knowledge is always in a process of being
rearranged according to how one finds oneself amidst a particular state of affairs.
While holism of intentional interpretation opens a domain of fixed entities, com-
mon-sense holism points to a web of significance that is always already subject to
reorganization. Addressing the common-sense knowledge problem then amounts to
the attempt to get computers to orient themselves within an ever-changing semantic
field. Haugeland puts the point as follows: “Much of what we recognize as making
sense is not about some topic for which we have a word or idiom, but rather about
some (possibly unique) circumstance or episode, which a longer fragment leads us
to visualize.”26 Accordingly, common-sense holism has a temporal register to it:
“Common-sense holism is real-time holism—it is freshly relevant to each new sen-
tence, and it can never be ignored.”27 Haugeland notes that the “real-time” aspect
is what principally distinguishes common-sense holism from the “prior” holism of
intentional interpretation. Insofar as the latter is about fixed rules in a given domain
to which things must conform, it principally concerns presence. That is, holism of
intentional interpretation makes possible a field of meaning exemplified through an
unchanging present.
Common-sense holism’s real-time aspect, by contrast, is about being propelled
into particular instances within an ever-changing global context. As I see it, the
“prior” of prior holism refers to logical order, but the kind of whole that emerges is
intelligible in and as the temporal dimension of the present. On the other hand, com-
mon-sense holism concerns the cognitive achievement of making sense of things

25 Haugeland, “Understanding,” 49.
26 Ibid., 51.
27 Ibid., 49.


amidst an endless stream of inherited nows. In this case, intelligibility emerges from
out of a past.
Contemporary LLMs, for the most part, are proficient at dealing with ambigu-
ity of the sort that proved to be a stumbling block for GOFAI. At the time of his
writing, Haugeland had in mind expert systems such as CYC that frequently com-
mitted humorous blunders of common-sense reasoning.28 The manifest limitations
of expert systems in the history of AI provide plausibility to Dreyfus’s claim that
common-sense is not the kind of thing that can be captured in GOFAI-style discrete
representations structured as a massive cross-referenced encyclopedia. LLMs, by
contrast, demonstrate a remarkable sensitivity to context. For instance, when asked
to parse the raincoat sentence above, ChatGPT is quick to affirm that the raincoat
is wet, not the tub. Why? Here’s a response: “If someone were to argue that the
bathtub is ‘still wet,’ the sentence would become confusing or poorly constructed,
as it wouldn’t provide a clear link between the bathtub’s wetness and the decision
to leave the raincoat there.” AI responses such as these ought to satisfy not only the
intentional skeptic, but also a common-sense skeptic seeking to put the system’s lan-
guage skills to the test. Indeed, AI researchers (including Gary Marcus and Ernest
Davis) who formerly facilitated the field’s leading examination of common-sense
reasoning, the Winograd Schema Challenge, now concede that LLMs have over-
come the challenge.29 According to the standards Haugeland here outlines as well as
those of the field at large, contemporary LLMs’ nimble ability to deal with ambigu-
ity and linguistic caprice at times appears indistinguishable from that of humans’
“common sense.”

2.3 Situation holism

Haugeland’s third type of holism, situation holism, expands upon the second. Situ-
ation holism describes interpretive cases where a holistic appeal to implicit states
of affairs is needed in order to make sense of a passage. The interpretive problem
here is that the situation itself, and one’s relation to it, is ambiguous and calls on the
reader to think ahead or recall past episodes. While common-sense holism is pri-
marily about how a holistic framework of inherited background knowledge allows
for coping with ambiguity and vagueness in particular sentences, situation holism
poses the challenge of understanding how the same sentence can mean different

28 For a description of CYC by its primary architect, see Douglas Lenat, “CYC: A Large-Scale Investment in Knowledge Infrastructure,” Communications of the ACM 38 (1995): 33–38. For a philosophical assessment of CYC, see Dreyfus, “Introduction to the MIT Press Edition,” in What Computers Still Can’t Do, xvii–xxx. While CYC hadn’t yet been developed when Haugeland wrote the first version of his essay in 1979, CYC later became, for critics, a paradigmatic case of quixotic GOFAI attempts to crack common sense. The technical AI details that Haugeland discusses in this section are consistent with those that underlie CYC.
29 Vid Kocijan et al., “The Defeat of the Winograd Schema Challenge,” Artificial Intelligence 325 (2023): 103971. See also this response that questions whether the Winograd Schema Challenge tests common sense at all: Jacob Browning and Yann LeCun, “Language, Common Sense, and the Winograd Schema Challenge,” Artificial Intelligence 325 (2023): 104031.


things relative to one’s anticipations and projections of absent states of affairs and
ways of being. The kind of projection at work in situation holism requires the reader
to consider possibilities and weigh counterfactuals. Haugeland calls the kind of fac-
ulty at work in this kind of holism “situation sense.”
Insofar as situation holism demands of the reader that she project possibilities in
order to make sense of this or that passage, situation sense is a matter of thinking
beyond what is present in an attempt to reveal what is obscured. Haugeland explains
the capacity of situation sense via the kind of storytelling involved in detective nar-
ratives and mysteries: “Mystery novels, for example, are built around the challenge
of situation holism when pivotal cues are deliberately scattered and ambiguous. [...]
Only the over-all plot determines just which words need to be handled carefully,
not to mention how to handle them.”30 While situation sense is needed for under-
standing almost any story, “whodunit” mysteries serve as exemplars insofar as they
continuously place the reader in unknowing positions. When making sense of a par-
ticular episode (past or present) within the story, one anticipates possible scenarios,
interpreting the episode accordingly. In these cases, the interpreter makes sense of
occurrences in light of a range of possibilities, and situations take on different looks
depending on which avenues are affirmed. In such cases, the overall background
state of affairs acts as a hidden modal operator that structures what appears possible,
necessary, impossible, or contingent on any one occasion.
As we’ve seen, the first two holisms Haugeland describes each have a temporal
dimension to them: holism of intentional interpretation is about conforming to rules
in their presentness and common-sense holism articulates the real-time coping with
the past. Given that situation sense is an ability to project possibilities and anticipate
outcomes, situation holism evinces a future-oriented dimension. Projection opens up
intelligibility as it works behind the scenes to direct one’s focus. That is to say, certain
features, objects, and episodes take on significance in respect to the future outcomes
that one anticipates. Situation sense marks the way a reader reads ahead of herself, as
it were, in interpretive acts. In contemporary AI parlance, engineering techniques that
target situation sense would be attention mechanisms that allow models to identify
contextual relationships among words and deal with long-term dependencies.
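To give a rough sense of the mechanism being gestured at here, the following is a simplified sketch of scaled dot-product attention in Python with NumPy (my own illustration, not a description of any particular model); learned query/key/value projections, multiple heads, and masking are all omitted:

    # A minimal sketch of scaled dot-product attention; real transformer attention
    # adds learned query/key/value projections, multiple heads, and masking.
    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Weight each value by how strongly its key matches each query."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                 # pairwise query-key similarity
        scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the context
        return weights @ V                              # context-sensitive mixture

    # Toy example: four token positions with eight-dimensional embeddings.
    rng = np.random.default_rng(0)
    Q = K = V = rng.normal(size=(4, 8))
    print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)

The design point is simply that every position’s representation is recomputed as a weighted mixture of the whole context, which is what allows such models to track the long-range relationships that situation sense trades on.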
Today’s LLMs appear to satisfy all the conditions that Haugeland outlines in
his discussion of the three holisms. They reliably parse language, display sensitiv-
ity to context, cope with scenarios requiring common sense, and meet the interpre-
tive challenges Haugeland offers. Evidence of their understanding-like competence
can be found in the many popular controversies about the role of generative AI in classrooms
and workplaces. In many respects, there now appears to be a functional equivalence
between human and machinic understanding of natural language. Of course, LLMs
occasionally make humorous mistakes in their reasoning, purvey bullshit, and “hal-
lucinate” their response outputs, but so do humans.31 LLMs have their snags, but the
dizzying AI developments of recent years ought to humble anyone in the business of

30 Haugeland, “Understanding,” 54.
31 Agüera y Arcas, “Large Language Models,” 195; see also Browning, “Personhood and AI”.


making principled arguments about what computers can’t do.32 So, what are we to
make of understanding in light of this new state of affairs?

3 Temporality and transformation


In this section I offer a reinterpretation of the above three holisms, uncovering their
latent ontological significance through a reading of Haugeland’s late engagement
with Heidegger’s existential hermeneutics. I outline the role of temporality in these
holistic structures and argue that Haugeland’s distinctive existentialist ontology ori-
ents understanding as a transformative way of being rather than an additive faculty.

3.1 Existential holism and hermeneutics

Haugeland’s insight emerges with the last type of holism he discusses, existential
holism. Recall his opening proclamation discussed above: The problem with AI
is that computers don’t give a damn. Haugeland’s discussion of existential holism
attempts to unfold what it is to “give a damn,”33 and existential holism itself aligns
with what Heidegger calls being-in-the-world. Haugeland’s aim in the essay is to
argue that there are identifiable conditions for achieving the kind of intelligence that
characterizes the way of being of humans, and his identification amounts to “giving
a damn,” or caring:
Only a being that cares about who it is, as some sort of enduring whole, can
care about guilt or folly, self-respect or achievement, life or death. And only
such a being can read. This holism, now not even apparently in the text but
manifestly in the reader, I call (with due trepidation) existential holism. It is
essential, I submit, to understanding the meaning of any text that (in a familiar
sense) has any meaning.34
I take Haugeland’s strong, controversial claim to be that caring as a way of
being is the condition for understanding. Despite existential holism’s centrality to
the argument, Haugeland’s explication of what it involves remains thin, leaving the
impression that “giving a damn” is to be taken as the ability to read oneself into a
situation in a way that doesn’t reach much further than situation sense. Additionally,
in a “digression” prior to introducing his discussion of existential holism, Haugeland
lightly dismisses the significance of philosophical hermeneutics as merely a “sophis-
ticated combination” of the first three holisms he discusses.35 I argue, by contrast,
that hermeneutics is crucial for making sense of existential holism and, thus, for

32 As Yuk Hui suggests, talk of principled limits on AI is perhaps itself an expression of a limited imagination with regard to the larger role and arrangement of humans among technical systems (Yuk Hui, “On the Limit of Artificial Intelligence,” Philosophy Today 65 [2021]: 339–357).
33 This phrase is so characteristic of Haugeland’s style and thinking that it serves as the title for an edited volume about his work (Zed Adams and Jacob Browning, eds., Giving a Damn: Essays in Dialogue with John Haugeland [Cambridge, MA: MIT Press, 2017]).
34 Haugeland, “Understanding,” 59.
35 Ibid., 54–55.


Haugeland’s account of understanding. So, what does this mean in the context of
contemporary AI? On Haugeland’s view, if LLMs are not characterized by a caring
way of being then their sophisticated textual processing just doesn’t count as reading
in the first place and active understanding is not an open possibility for them.
The central unstated idea behind Haugeland’s account of understanding, I con-
tend, is the “hermeneutic circle” in its distinctive Heideggerian rendition. Generally
speaking, this is the idea that any instance of understanding involves a dynamic, cir-
cular relationship between a whole and its parts. To understand the whole, one must
understand its parts. To understand the parts, one must grasp the whole. However,
as Heidegger articulates in §32 of Being and Time, the hermeneutic circle is not
just about textual interpretation but reflects the existential structure of understanding
itself. The circle is not a logical flaw but an essential structure of meaning-making,
reflecting how humans engage with the world and interpret their experiences: “But
if we see this circle as a vicious one and look out for ways of avoiding it, even if
we just ‘sense’ it as an inevitable imperfection, then the act of understanding has
been misunderstood from the ground up.”36 That is, textual interpretation is a special
case of the fundamental structure of understanding characteristic of the way of being
(Dasein) of the human being. Hans-Georg Gadamer helpfully explicates the issue,
placing it in the context of textual interpretation, as follows:
A person who is trying to understand a text is always projecting. He projects
a meaning for the text as a whole as soon as some initial meaning emerges in
the text. Again, the initial meaning emerges only because he is reading the text with particular
expectations in regard to a certain meaning. Working out this fore-projection,
which is constantly revised in terms of what emerges as he penetrates into the
meaning, is understanding what is there.37

As Gadamer explains, coming to understand a text involves a constant movement between one’s own preconceptions (fore-meaning) and global expectations (fore-
projection), enacting real-time interpretive revisions on the basis of what one
encounters at each moment.
The hermeneutic circle of understanding exemplifies Dasein’s existential struc-
ture, which is temporal in nature. As Heidegger puts it: “The ‘circle’ in understand-
ing belongs to the structure of meaning, and the latter phenomenon is rooted in the
existential constitution of Dasein—that is, in the understanding which interprets.
An entity for which, as Being-in-the-world, its Being is itself an issue, has, onto-
logically, a circular structure.”38 The circle is constitutive of Dasein’s existence in
the following way: Dasein is always “ahead-of-itself” (future-oriented projection),
“already in” the world (thrownness), and “alongside” entities within the world
(absorption in the present or fallenness). Dasein finds itself “thrown” into a world

36 Heidegger, Being and Time, 194.
37 Hans-Georg Gadamer, Truth and Method, trans. Joel Weinsheimer and Donald G. Marshall (New York: Continuum, 1989), 267.
38 Heidegger, Being and Time, 195.


it did not choose. Thrownness is a Heideggerian term that describes the way one
comes into the world having already inherited a set of contingent cultural, histori-
cal, and physical circumstances that condition the way one interprets the world and
one’s place in it. In textual interpretation, one’s biases, preconceptions, and common-
sense reasonings are analogically in the place of thrownness. Past inheritance not-
withstanding, Dasein always already understands itself in terms of what it might
become or achieve. Implicitly or explicitly, Dasein projects itself toward possibilities
in the future in the same way that one anticipates an overall meaning of a text when
making sense of any particular passage. And, for the most part, Dasein makes sense
of itself and the world through the lens of contemporary social norms and moment-
to-moment routines, having the tendency to conform or “fall” into those norms and
routines and succumbing to their fixity.
The hermeneutic circle reflects the distinctive way of being of beings capable of
understanding. The articulated structure of being ahead-of-itself, already-in, and
alongside marks out the way that Dasein exists in the world—together, they describe
Dasein’s being-in-the-world. The three moments (projection, thrownness, fallenness)
in the structure mutually depend on each other in such a way that they are all equally
fundamental (or, what Heidegger would call “equiprimordial”). Heidegger calls the
tripartite structure “care” [Sorge] to describe the way human self-understanding is
always already conditioned by its temporal being: past (thrownness), present (fallen-
ness), and future (projection). Care expresses how Dasein’s being is always a matter
of concern to itself. It is through care that an agent experiences herself in the world,
in relation to others, and amidst her own possibilities.

3.2 Existential holism and transformative realization

With Heidegger’s description of the circular care-structure of Dasein now in view, we are in a position to reevaluate Haugeland’s account of understanding. As Hauge-
land notes, existential holism is “in the reader.” On my reading, it aligns with Hei-
degger’s notion of being-in-the-world and names the fundamental way that humans
exist as “giving a damn” about their own being—who one is—and their place in the
world. Further, Haugeland’s initial three holisms, taken together, have a threefold
temporal character resonant with the circular structure of care. One comes to under-
stand how things make sense in the world as situated amidst the dynamic interplay
of past, present, and future—temporal dimensions that are logically separable for
explanatory ends but equiprimordial in their actuality.
Care for one’s being is what allows entities to become available for considera-
tion as the entities they are and opens the possibility for truth. That is, to mark off
the boundaries of objects in the world, I must pay close attention to the details of
the world. If I really care about getting things right, I cannot simply impose arbi-
trary boundaries—I must rather do my best to let the details speak for themselves
in a way that coheres with everything else. For instance, suppose I am a biologist
studying cell division. In my work I must give equal heed to the intricate details of
my observations and the established norms of my discipline that render intelligible
what counts as actual, possible, and impossible within this field of being. Doing so


matters to who I am in all kinds of ways: it concerns how I see myself in relation to
my projected goals, where I’ve been and where I’m going, my self-esteem, my abil-
ity to forge a life for myself with others, and so on. Caring is what allows for entities’
presence in the first place—and not in a merely individualistic way, for any field of
being (for instance, the field of living things) is governed by intersubjective norms
that themselves have a history. As Haugeland explains in “Truth and Finitude,”
My ability to project those [e.g., biological] entities onto their possibilities is
not merely another possibility onto which I project myself but is rather part of
my ability to project myself onto my own possibilities at all. In other words,
my self-understanding literally incorporates an understanding of the being of
other entities.39

Hence Haugeland’s dictum that the problem with AI is that computers don’t give a
damn: “This concern with ‘who one is’ is at least one issue that plausibly matters for
its own sake. Machines (at present) lack any personality and, hence, any possibility
of personal involvement; so (on these grounds) nothing can really matter to them.”40
Caring articulates the difference between things that make sense and things that
don’t. Without it, there can be no understanding, for understanding is sense-making.
My suggestion is that we interpret existential holism as the co-constituting inte-
gration of the first three holisms, each with its own temporal register redolent of
care’s articulated structure. Intentional interpretation has a present aspect given that
it concerns a fixed framework; common-sense holism denotes an inherited web of
assumptions and norms from the past; and situation holism calls for projection of
future-oriented possibilities. As he proceeds in “Understanding Natural Language,”
Haugeland moves from one to the next, as if each type of holism is emblematic of
a benchmark that AI systems ought to match progressively in order to manifest true
natural language understanding. This implies that AI systems can achieve under-
standing if only each step is taken, criteria are met, and modules are added until
the systems interpret a text holistically across temporal dimensions. What I seek to
show, though, is that separating out the temporal dimensions of hermeneutic under-
standing in this atomistic way just doesn’t make sense.
Adapting terminology from Matthew Boyle, we can say that the progressive-
benchmark approach is additive.41 In our context, an additive approach would posit
that human understanding shares underlying functional structures with machinic
‘understanding’ (i.e., interpretation across past, present, and future) with the proviso
that there is something added on top (e.g., a sense of self) in the human case. On
that reading, existential holism would be the combination of the first three holisms
plus something extra otherwise missing. Haugeland’s argument invites such a read-
ing and it has the additional benefit of being empirically plausible, given that LLM
outputs are often indistinguishable from human responses even in light of LLMs’
apparent lack of selfhood. By contrast, I argue that existential holism is a capacity

39 Haugeland, “Truth and Finitude,” 203.
40 Haugeland, “Understanding,” 56.
41 Boyle, “Additive Theories”.


that is transformative for what it is directed toward, and that this is also Hauge-
land’s considered view.42 Boyle draws the additive/transformative distinction with
the aim of differentiating human and non-human-animal perceptual capacities and
content. Broadly speaking, an additive theorist views human rationality as a capac-
ity imposed atop animal perceptual and desiderative capacities. Against additive
views—echoing John McDowell—Boyle advocates a transformative theory, arguing
that human perceptual capacities are different in kind than those of non-human ani-
mals insofar as rationality transforms the capacities purportedly shared with non-
human animals and the content available to them. The point isn’t to deny that there
are commonalities between humans and non-human animals, but rather to interro-
gate the nature of the commonalities in question: “What the two ‘have in common’,
on this view, is not a separable factor that is present in both, but a generic structure
that is realized in different ways in two cases.”43 In other words, Boyle advocates
the view that perceptual capacities are realized in such divergent ways that they are
actually different capacities altogether. His argument is subtle and complex, and
deserves scrutiny on its own terms. I mention it only in the interest of adapting and
extending his terminology.
The lesson that Boyle draws from differences between human and non-human-
animal perception applies in the case of human understanding and machinic natural
language processing. A standard view deeply ingrained in computational thinking
is that the same functionally-equivalent capacities can be realized in different ways.
This is often captured through an analogy with birds and airplanes: It would be fool-
ish to deny that an airplane flies simply because it is an artifact and does not flap its
wings like its natural counterpart. Rather, birds and airplanes can equally be said to
fly, even if they do so in radically different ways. Now, we can extend this analogy
to textual facility between humans and machines: LLMs appear to be at least mini-
mally competent linguistic agents in ways similar to humans, and recent advances
leave no signs that there are principled limits on their behavioral outputs.
Haugeland’s transformative view, however, denies that multiply-realizable func-
tional equivalents are even equivalent in the first place. That is to say, AI’s capacity
to interpret texts and produce sensible outputs is essentially different than the capac-
ity of understanding characteristic of existential holism. “Giving a damn” is not an
extra factor added on to a prior underlying foundation of capacities. It is a way of
being that qualitatively transforms capacities and the entities that can be present for
them. While Haugeland leaves this claim undeveloped in “Understanding Natural
Language,” he offers resources for its defense in later essays on intentionality and
direct engagements with Heidegger’s Being and Time.44

42 Here’s how Boyle describes transformative theories of rationality: “[Transformative] theories take the very nature of perceptual and desiderative capacities to be transformed by the presence of rationality, in a way that makes rational perceiving and rational desiring essentially different from their merely animal counterparts” (“Additive Theories,” 530–531).
43 Ibid., 531.
44 A more comprehensive treatment of Haugeland’s inquiries into intentionality exceeds the scope of the present discussion.


3.3 Finitude and responsibility

What matters for “giving a damn” is not merely the recognition that one’s sense-
making capacity is temporally holistic, but the ability to own up firmly to
the fact that one’s being qua temporal is finite. For Haugeland, just as for Heidegger,
owning up to finitude is a matter of taking responsibility. In the context of our pre-
sent discussion, the idea is that any system (natural or artificial) capable of under-
standing must already be able to take responsibility. “The capacity for responsibil-
ity” is just another way of describing “giving a damn,” and such a capacity is the
condition for the possibility of understanding. In one of his few direct, long-form
engagements with Heidegger, Haugeland articulates the point as follows:
Taking responsibility for something is not only taking it as something that
matters but also not taking it for granted. Taking the disclosure of being for
granted—whether explicitly or tacitly—is characteristic of fallen dasein and
normal science [in the Kuhnian sense—JL]. Owned dasein, as taking over
responsibility for its ontological heritage, no longer takes it for granted. It
reawakens the question of being—as its ownmost and sometimes most urgent
question. In other words, it holds itself free for taking it back. That does not
mean it does take it back, still less that it does so easily or casually. The free-
dom to take it back is not a liberty or privilege but rather a burden—the most
onerous of burdens.45
Haugeland is here describing what Heidegger calls anticipatory resoluteness in the
face of being-toward-death. Let’s return to the above example of the research biolo-
gist to get a grip on what this means. To take responsibility as a biologist within the
field of biological being means enacting research practices that meet the standards
of scientific rigor, questioning experimental results when they are incongruous or
don’t replicate, communicating clearly, and so on. What this leaves out, though, is
the issue of not taking things for granted: steadfastly leaving open the possibility
that whole frameworks of ingrained practices and norms within which one situates
oneself might be in need of revision or dismissal. The capacity to “give a damn”
may be what underlies one’s capacity to understand, but too often we take the way
things are for granted without a second glance. To not take things for granted is to
take over responsibility for one’s whole self. Or, as Haugeland puts it succinctly:
“Taking responsibility resolutely means living in a way that explicitly has every-
thing at stake.”46 He calls this living a living way of life.
We now have a variety of ways at our disposal for getting at the same point. Tak-
ing responsibility is holding oneself free for going back on what matters, which is
the same as taking one’s inherited framework back by initiating changes that rad-
ically transform it. Such is the highest expression of giving a damn. Haugeland’s
exemplar of the unwavering, damn-giving individual is a Kuhnian revolutionary sci-
entist who puts everything on the line and is willing to give up on a scientific para-
digm if it proves to be unmanageable. He explains elsewhere:

45 Haugeland, "Truth and Finitude," 215.
46 Ibid., 216.

Often enough, the eventual changes are so radical that it makes as much
sense to say that the old discipline died out and got replaced by a successor (or
successors) related only by a pattern of family resemblances to what preceded.
Hence the appeal of grand phrases such as “paradigm shift” and “revolution”.
Yet, however radical and however described, if a discipline just does not work,
pursuing such transformations [i.e., paradigm shifts] (or replacements) can
sometimes be the only responsible response to the actual situation. Accepting
this responsibility is peculiarly personal not merely because it is so risky but
also because what is at stake in it is the individual’s own professional self-
understanding. Who, after all, are you, professionally, if your professional spe-
cialty dies?47

What Haugeland makes clear in this passage is that the kind of death at issue in
being-toward-death is not that of biological perishing, nor is it a matter of no longer being counted, socially, within a population. Instead, it refers to the disintegration of sense-making
norms and ways of living as well as one’s sense of self and the fields of entities inex-
tricably involved therein. Living resolutely means ever holding open the possibility
that one’s entire holistic framework of living, being, and intelligibility must undergo
radical revision or dissolution—and acting accordingly.
The finitude expressive of being-toward-death is not a matter of mere biological
self-preservation, but is rather at the center of what it means to have a world. Hav-
ing a world means being able to comport oneself toward entities as entities. That
is, it is an issue of the way intentional objects are discovered and become avail-
able. Care as a way of being is coextensive with being-in-the-world, an expression less of a detached cognitive faculty than of mooded, embedded existence.48 Contemporary LLMs do not understand because entities are not available for them in the first place, and this accounts for their demonstrated indifference to the stakes of linguistic activity. LLMs' constitutive indifference is not simply a theoretical concern, as their activity can sometimes have gruesome worldly consequences.49 Insofar as LLMs do
not care, they do not have a world ever under threat of dissolution, and such is the
condition of understanding.
If we now return to existential holism in light of the foregoing discussion of finitude
and anticipatory resoluteness, its holistic force as a condition for the possibility of under-
standing does not simply amount to the ability to read oneself into a story or attempt to
grasp a text “from the inside,” as it were. Rather, what’s ultimately at stake is the ability to

47 John Haugeland, "Authentic Intentionality," in Dasein Disclosed: John Haugeland's Heidegger, ed. Joseph Rouse (Cambridge, MA: Harvard University Press, 2013), 271; italics added.
48 The importance of moods and emotions in this context should not be underestimated. For an inquiry into the role of emotions in Haugeland's theory of truth, see Bennett Helm, "Truth, Objectivity, and Emotional Caring: Filling in the Gaps of Haugeland's Existentialist Ontology," in Giving a Damn: Essays in Dialogue with John Haugeland, ed. Zed Adams and Jacob Browning (Cambridge, MA: MIT Press, 2017), 213–241.
49 See, for instance, this expression of LLMs' constitutive indifference: Blake Montgomery, "Mother Says AI Chatbot Led Her Son to Kill Himself in Lawsuit Against Its Maker," The Guardian, October 23, 2024, https://www.theguardian.com/technology/2024/oct/23/character-ai-chatbot-sewell-setzer-death.

take responsibility for finite being in time—holding open the possible revision of whole
forms of life from the ground up—and it is for this reason that identifying the temporal
registers of existential holism was crucial from the start. Further, we are also in a position to see why machinic 'understanding' is of an altogether different kind from that of the damn-giving individual. Existential-hermeneutic understanding as exemplified in living resolutely always leaves open the possibility that the interpreter herself may undergo a radical
transformation through her act of interpretation. Textual exegesis presents a specialized
case of a more fundamental, constitutive structure of understanding that Haugeland is tar-
geting: “There is no reason to believe there is a difference in kind between understand-
ing everyday discourse and appreciating literature. Apart from a few highly restricted
domains, like chess playing, analyzing mass spectra, or making airline reservations, the
most ordinary conversations are fraught with life and all its meanings.”50 Without exis-
tential understanding, giving a damn, or caring, there can be no understanding of texts,
for—as Haugeland claims—only a being who cares can read.

4 Concluding remarks

The views put forward above are not meant to diminish the remarkable achievements
of contemporary AI. My aim is rather to examine those achievements for what they are, given the challenges they pose to currently existing ways of making sense of
ourselves and our cognitive behavior. In his own closing remarks to “Understanding
Natural Language,” Haugeland notes that philosophical reflection on AI can be an
illuminating and concrete way of reflecting on our own spiritual nature. One thing among others that we can learn from the history of AI is how it continually challenges long-held views and assumptions about how we understand ourselves, including additive ways of defining the human according to this or that characteristic. Should
those ways prove lifeless, giving a damn means giving them up.

Author contributions Joseph Lemelin individually provided all the research and writing in support of this
article.

Data availability No datasets were generated or analysed during the current study.

Declarations
Conflict of interest The authors declare no competing interests.


50 Haugeland, "Understanding," 59.
