Cognitive-Computing Unit 2
• Working Memory (WM): A more active form of STM that allows for the
manipulation of information, like in problem-solving or language
comprehension.
• Long-Term Memory (LTM): Stores information for extended periods
(minutes, hours, days, years), which can be further categorized into declarative
(explicit) and non-declarative (implicit) memory.
• Language:
• Lexical Representation: The mental storage of words and their meanings.
• Syntactic Processing: The rules that govern how words are combined to form
sentences.
• Semantic Processing: The understanding of the meaning of words and
sentences.
• Language Production: The process of formulating and expressing language.
• Language Comprehension: The process of understanding and interpreting
language.
Cognitive models of memory and language are frameworks that attempt to explain how
humans process, store, and retrieve information, both for remembering events and
understanding language. These models often draw on concepts from psychology, linguistics,
and neuroscience to provide a comprehensive understanding of these complex cognitive
functions.
Computational models of episodic memory provide tools to better understand the latent
neurocognitive processes underlying retention of information about specific events from one’s
life.
The term episodic memory refers to the ability to recall previously experienced events and to
recognize things as having been encountered previously. Over the past several decades,
research on the neural basis of episodic memory has increasingly come to focus on three
structures:
• The hippocampus supports recall of specific details from previously experienced events.
• Perirhinal cortex computes a scalar familiarity signal that discriminates between studied and
nonstudied items.
• Prefrontal cortex plays a critical role in memory targeting: In situations where the bottom-up
retrieval cue is not sufficiently specific to trigger activation of memory traces in the medial
temporal lobe, prefrontal cortex acts to flesh out the retrieval cue by actively maintaining
additional information that specifies the to-be-retrieved episode.
In-depth discussion and model-fitting results of four models – the retrieving effectively from
memory (REM) model, the bind cue decide model of episodic memory (BCDMEM), the search
of associative memory (SAM) model, and the temporal context model (TCM) – are provided
to facilitate understanding of these models, as well as similarities and differences between
them. Alternative modeling frameworks, including neural network models, are discussed.
Throughout, the importance of context in models of episodic memory is emphasized,
particularly for free recall tasks.
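The role of context can be made concrete with a short sketch of the temporal context model's core mechanism. The update rule c_t = ρ·c_{t-1} + β·t_in is the standard TCM formulation, with ρ chosen to keep the context vector at unit length; the dimensionality, parameter value, and random item vectors below are illustrative assumptions, not fitted quantities.

```python
import numpy as np

def drift_context(c_prev, t_in, beta=0.5):
    """TCM-style context drift: blend previous context with the new item.

    rho is set so that |c_t| = 1 given unit-length inputs, following the
    standard formulation c_t = rho * c_prev + beta * t_in.
    """
    t_in = t_in / np.linalg.norm(t_in)
    dot = float(c_prev @ t_in)
    rho = np.sqrt(1 + beta**2 * (dot**2 - 1)) - beta * dot
    return rho * c_prev + beta * t_in

rng = np.random.default_rng(0)
dim = 50

# Start from a random unit-length context vector.
context = rng.standard_normal(dim)
context /= np.linalg.norm(context)

contexts = [context]
for _ in range(10):                      # present ten list items
    item = rng.standard_normal(dim)
    contexts.append(drift_context(contexts[-1], item))

# Context similarity falls off with lag: the end-of-list context resembles
# recent study contexts more than early ones (a recency gradient).
sims = [float(contexts[-1] @ c) for c in contexts]
assert sims[-2] > sims[0]
```

Because retrieved items reinstate their study context, this drifting-context representation is what lets TCM account for recency and temporal-contiguity effects in free recall.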
A new model of recognition memory is placed within, and introduces, a more elaborate theory
that is being developed to predict the phenomena of explicit and implicit, and episodic and
generic, memory. The recognition model is applied to basic findings, including phenomena that
pose problems for extant models: the list-strength effect (e.g., Ratcliff, Clark, & Shiffrin, 1990),
the mirror effect (e.g., Glanzer & Adams, 1990), and the normal-ROC slope effect (e.g.,
Ratcliff, McKoon, & Tindall, 1994). The model assumes storage of separate episodic images
for different words, each image consisting of a vector of feature values. Each image is an
incomplete and error-prone copy of the studied vector. For the simplest case, it is possible to
calculate the probability that a test item is "old," and it is assumed that a default "old" response
is given if this probability is greater than .5. It is demonstrated that this model and its more
complete and realistic versions produce excellent qualitative predictions.
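The recognition decision described above can be sketched computationally. The following is a simplified, illustrative REM-style simulation: feature values are drawn from a geometric distribution, study produces incomplete and error-prone images, and the "old" decision compares the mean likelihood ratio (the odds) across traces to 1, which with equal priors is equivalent to responding "old" when the probability of "old" exceeds .5. The parameter values (g, u, c) and list sizes are assumptions chosen for illustration, not fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)
g, u, c = 0.4, 0.6, 0.7      # feature base rate, storage prob., copy accuracy
n_feat, n_items = 20, 30

def geometric(size):
    """Draw feature values from the geometric base-rate distribution."""
    return rng.geometric(g, size=size)

study = geometric((n_items, n_feat))         # studied item vectors

# Build error-prone episodic images: each feature is stored with prob. u;
# a stored feature is copied correctly with prob. c, else resampled.
# A value of 0 marks a feature that was not stored at all.
stored = rng.random(study.shape) < u
correct = rng.random(study.shape) < c
images = np.where(stored,
                  np.where(correct, study, geometric(study.shape)),
                  0)

def odds_old(probe):
    """Mean likelihood ratio of the probe against all stored images."""
    lam = np.ones(n_items)
    for j in range(n_items):
        img = images[j]
        nonzero = img > 0
        match = nonzero & (img == probe)
        mismatch = nonzero & (img != probe)
        # Matching value v has prob. g*(1-g)**(v-1) under "different item"
        # and c + (1-c)*g*(1-g)**(v-1) under "same item"; each stored
        # mismatch contributes a factor (1 - c).
        pv = g * (1 - g) ** (probe[match] - 1)
        lam[j] = np.prod((c + (1 - c) * pv) / pv) * (1 - c) ** mismatch.sum()
    return lam.mean()

old_odds = [odds_old(study[i]) for i in range(n_items)]
new_odds = [odds_old(geometric(n_feat)) for _ in range(n_items)]

# Studied probes should, on average, yield higher odds than new probes.
assert np.mean(np.log(old_odds)) > np.mean(np.log(new_odds))
```

The key design feature is that the decision is based on a likelihood-ratio computation over noisy traces rather than a raw match count, which is what lets the full model address phenomena such as the mirror effect.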
Although psycholinguistics as part of the interdisciplinary field of the cognitive sciences shares
the view that cognition should essentially be regarded as computation, the research
methodology in psycholinguistics is primarily oriented toward experimental studies, with the
computer as a tool for the selection and presentation of appropriate stimulus material, and the
exact measurements of reaction times, etc. Computer modeling itself does not play a major role
in model development. Surveys of psycholinguistic research (e.g., Gernsbacher 1994) devote
only a minor part to computer modeling and to the use of computer models for evaluating
models of (sub)tasks of language comprehension or production. This means
that, compared with approximately 100 years of psycholinguistic research and 25 years of
institutionalized cognitive science, computational psycholinguistics can be glossed as a brand
new area of psycholinguistic research. Computer modeling turns out to be advantageous in
model development for at least two reasons.
First, when a model is appropriately complex, specifying it only verbally makes it difficult to
check its completeness and overall consistency. If such a model has been realized
as a computer program and produces the desired input-output effect, it is not just a specification
of relations in the model domain, but also a fully specified and consistent theory of the
investigated cognitive function.
Second, the implemented model may be used to generate hypotheses: the necessity of being
explicit in every detail while developing the computer model may yield predictions for
conditions that have not previously been investigated empirically. These predictions can then
be tested by new experiments that will either support or refute the predictions made by the
program.
1.2 Techniques for Modeling Human Language Processing
In general, a model can reconstruct an empirical fact in either of two ways: a narrow one or a
process-oriented one. The former means that the computer model realizes an input–output
relation identical to that of human beings; whether the processes underlying this mapping from
informational inputs to the desired outputs correspond to human processes is irrelevant. The
latter means that the computer system not only realizes a correct input–output relation but also
accounts for the internal representations and operations that are used and performed within
the human mind.
Artificial Intelligence typically develops models of the first class (see Artificial Intelligence:
Connectionist and Symbolic Approaches), while computational psycholinguistics strives for
models of the second class. Evaluation criteria for the resulting computer models are the same
as for every model: their descriptive and explanatory adequacy, simplicity, generality, and
falsifiability. An important additional criterion is the ability to match empirical results against
the system's behavior. Computer models of the same task can also be evaluated in a model-to-
model comparison.
Techniques used in computational psycholinguistics range from all kinds of symbolic
processing mechanisms like graph unification, planning, or deductive reasoning to
connectionist approaches. The advantage of symbolic mechanisms is the achievement of a high
level of abstraction that makes it easier to check the adequacy of the system's behavior. One
disadvantage of symbolic systems is their potential rigidity: exceptions to a rule require
additional treatment. This lack of robustness is problematic for several aspects of human
language processing, because decisions about the production or recognition of a specific
linguistic form or structure often turn out to be highly context-dependent (e.g., in the modeling
of typical speech errors).
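To make the notion of a symbolic mechanism concrete, the following is a minimal sketch of graph (feature-structure) unification, using plain dictionaries. The grammar features used here (cat, agr, num, per, tense) are invented for illustration; real unification grammars handle reentrancy and variables as well.

```python
def unify(fs1, fs2):
    """Unify two feature structures; return None on conflict.

    Nested dicts are unified recursively; a clash between atomic
    values makes the whole unification fail.
    """
    result = dict(fs1)
    for key, val in fs2.items():
        if key not in result:
            result[key] = val
        elif isinstance(result[key], dict) and isinstance(val, dict):
            sub = unify(result[key], val)
            if sub is None:
                return None
            result[key] = sub
        elif result[key] != val:
            return None           # atomic value clash
    return result

# Subject-verb agreement: compatible features unify...
subject = {"cat": "NP", "agr": {"num": "sg", "per": 3}}
verb = {"agr": {"num": "sg", "per": 3}, "tense": "pres"}
assert unify(subject, verb) is not None

# ...but a plural verb clashes with a singular subject.
plural_verb = {"agr": {"num": "pl"}}
assert unify(subject, plural_verb) is None
```

The all-or-nothing failure on a value clash is exactly the rigidity discussed above: any exception to the rule requires an explicit change to the feature structures.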
Connectionist approaches, especially activation spreading, are also quite popular in
computational psycholinguistics. The connectionist paradigm bypasses the robustness problem
and accounts for learnability. Its disadvantage is the difficulty of modeling rule-based
structural relationships, which are assumed to be an essential characteristic of many tasks in
human language processing.
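A minimal sketch of spreading activation may help make the connectionist idea concrete. The lexical network, link weights, decay parameter, and step count below are all invented for illustration: activating "doctor" spreads activation to associated words such as "nurse", in the spirit of semantic priming.

```python
# Undirected association network with illustrative link weights.
links = {
    ("doctor", "nurse"): 0.8,
    ("doctor", "hospital"): 0.6,
    ("nurse", "hospital"): 0.5,
    ("bread", "butter"): 0.9,
}

def neighbors(word):
    """Yield (neighbor, weight) pairs for a word in the network."""
    for (a, b), w in links.items():
        if a == word:
            yield b, w
        elif b == word:
            yield a, w

def spread(source, steps=2, decay=0.5):
    """Propagate activation from a source node for a fixed number of steps.

    Each step, every active node passes a decayed, weight-scaled share
    of its activation to its neighbors.
    """
    act = {source: 1.0}
    for _ in range(steps):
        new = dict(act)
        for node, a in act.items():
            for nb, w in neighbors(node):
                new[nb] = new.get(nb, 0.0) + a * w * decay
        act = new
    return act

act = spread("doctor")
# "nurse" receives activation from "doctor"; unrelated "bread" stays inert.
assert act["nurse"] > act.get("bread", 0.0)
```

Because activation flows along weighted associations rather than discrete rules, such a network degrades gracefully on atypical input, which is the robustness advantage noted above.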