MODELS OF MEMORY
The Levels of Processing Model
(LOP)
(Craik & Lockhart, 1972)
MODELS OF MEMORY
The IB Syllabus says:
• Evaluate two models or theories of one
cognitive process with reference to
research studies.
• For this section we will be evaluating TWO MODELS
OF MEMORY; these have been developed by
cognitive psychologists to explain how memory works.
These are the two main models of memory we will be
studying for this topic:
• Multi-store Model (Atkinson & Shiffrin, 1968)
• Levels of Processing (Craik & Lockhart, 1972)
Principles Demonstrated in research into
Models of Memory:
1. Mental processes can and should be
scientifically investigated.
2. Models of psychological functions
can be proposed.
3. Cognitive processes actively
organize and manipulate information
that we receive - humans are not
passive responders to their
environment. (Soft determinism.)
Levels of Processing (Craik &
Lockhart, 1972)
• This model was proposed as an alternative to the
multi-store model. Craik & Lockhart rejected
the idea of separate memory structures put
forward by Atkinson & Shiffrin.
– The model places an emphasis on memory
processes rather than structures, unlike the
MSM.
– The LOP model is based on the idea that the
strength of a memory trace is determined by
how the original information was processed.
LOP: Shallow & Deep Processing
• The model proposed that there are different levels of
processing that influence how much is remembered.
• Shallow processing – the 1st stage of processing – e.g.
recognising the stimulus in terms of its physical
appearance or structure – e.g. the shape of the letters a
word is written in
• Acoustic encoding – falls in between structural and
semantic encoding – it is deeper than structural processing,
but shallower than semantic processing.
• Deep processing – the deepest level of processing –
involves encoding the input in terms of its meaning
(semantics).
• The model assumes that shallow processing will lead to
weak short term retention and deep processing will enable
long term retention
Levels of processing
• Shallow processing: Structural (looks like) and Acoustic (sounds like)
encoding → weak memory trace, leading to short term retention
• Deep processing: Semantic (means) encoding → strong memory trace,
leading to long term retention
Experimental research in Cognitive
Psychology
• Your task is to work in groups to partially replicate one of the following
Research Support:
• Elias & Perfetti (1973)(Dillon Alessandra Zach)
• Hyde and Jenkins (1973)(Krystal Klaire HK)
Refuting Research:
• Tyler et al (1979) (Nikki Micah Audrey)
• Palmere et al. (1983) (Katalyna Christina Louis)
You will gather data from this class, so you need to:
1. Identify the IV & the DV and the design of your study – think about the controls
you will put in place (fill out a key study sheet template – one for the group)
2. Have a standardized procedure – a script for how the experiment is going to be
run and what the experimenters are going to say (make sure you get informed
consent and do a debriefing)
3. Create the materials needed – i.e. PowerPoint with word lists, numbers
4. Collect the data, analyze it, and report it back to the class
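As a minimal sketch of step 4, the class data could be summarized like this. The group names and recall scores here are invented for illustration, assuming an independent-measures design in which each participant's score is the number of words correctly recalled:

```python
# Hypothetical recall data for a shallow vs deep processing comparison.
# Scores are words correctly recalled out of 20 per participant.
from statistics import mean, stdev

semantic_group = [14, 12, 15, 13, 16, 11]   # deep (semantic) condition
structural_group = [7, 9, 6, 8, 10, 7]      # shallow (structural) condition

def summarize(name, scores):
    # Report the condition mean and standard deviation.
    print(f"{name}: mean = {mean(scores):.1f}, sd = {stdev(scores):.1f}")

summarize("Semantic (deep)", semantic_group)
summarize("Structural (shallow)", structural_group)

# The LOP model predicts the deep-processing group recalls more on average.
difference = mean(semantic_group) - mean(structural_group)
print(f"Difference between condition means: {difference:.1f}")
```

A descriptive summary like this is enough to report back to the class; a formal significance test would need more participants than one classroom usually provides.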
LOP: Maintenance & Elaborative
Rehearsal
• The model also proposed that the way material is
rehearsed influences how well we remember:
1. Rehearsing material simply by rote repetition is called
maintenance rehearsal and is regarded as shallow
processing.
2. Making links with semantic (meaning) associations is called
elaborative rehearsal and is seen as deep processing
• The assumption of the model is that shallow processing will
give rise to weak short term retention and deep processing will
ensure strong, lasting retention
Elias & Perfetti (1973) experiment on acoustic &
semantic encoding
• AIM: Elias & Perfetti (1973) aimed to investigate
encoding and memory
• PROCEDURE: They gave PPs a number of different
tasks to perform on each word in a list, such as:
• finding another word that rhymes (acoustic task)
• or
• finding a word that means the same or similar
(synonym) (semantic task) to the word on the list.
• The rhyming task involved only acoustic coding and
hence was a shallow level of processing.
• The synonym task involved semantic coding and
hence was a deep level of processing.
• The participants were not told that they
would be asked to recall the words, but
nevertheless they did remember some of the
words when subsequently tested.
• This is called incidental learning as opposed
to intentional or deliberate learning.
• FINDINGS & CONCLUSIONS: The PPs
recalled significantly more words following the
synonym task (semantic) than following the
rhyming task (acoustic)
• suggesting that deeper levels of processing
lead to better recall, thus supporting the
LOP model. EVALUATION: consider ecological
validity and the use of the experimental method.
Hyde and Jenkins (1973) experiment on the effect of the
way in which words are processed on recall
• AIM: To investigate the effects of shallow & deep
processing on recall.
• PROCEDURE: Hyde and Jenkins (1973) presented
lists of 24 words auditorily and asked different groups
of participants to perform one of the following so-
called orienting tasks:
♦rating the words for pleasantness
♦estimating the frequency with which each word is used
in the English language
♦detecting the occurrence of the letters "e" and "g" in
any of the words
♦deciding the part of speech appropriate to each word
(e.g. noun, adjective)
♦deciding whether the words fitted into a particular
sentence frame.
• Rating the words for pleasantness (e.g. is “donkey” a
pleasant word?) (semantic)
• Estimating the frequency with which each word is
used in the English language (e.g. how often does
“donkey” appear in the English language?) (semantic)
• Detecting the occurrence of the letters “e” & “g” in the
list words (e.g. is there an “e” or a “g” in the word
“donkey”?) (structural)
• Deciding the part of speech appropriate to each word
(e.g. is “donkey” a verb, noun or an adjective?)
(structural)
• Deciding whether the words fitted into particular
sentences (e.g. does the word “donkey” fit into the
following sentence > “I went to the doctor and showed
him my ............”) (semantic)
• Five groups of participants performed one of
these tasks, without knowing that they were
going to be asked to recall the words (incidental
learning groups).
• An additional five groups of participants
performed the tasks but were told that they
should learn the words (intentional learning
groups).
• Finally, there was a control group of participants
who were instructed to learn the words but did
not do the tasks.
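The group structure described above can be laid out as a short sketch. This is my own illustration of the design, not material from Hyde & Jenkins: five orienting tasks crossed with two learning conditions, plus one intentional-learning control with no orienting task:

```python
# Enumerate the 11 groups in the Hyde & Jenkins (1973) design:
# 5 orienting tasks x 2 learning conditions, plus one control group.
from itertools import product

tasks = [
    "pleasantness rating",
    "frequency estimation",
    "e/g detection",
    "part of speech",
    "sentence frame",
]
learning = ["incidental", "intentional"]

# Each experimental group pairs one learning condition with one task.
groups = [f"{cond} + {task}" for task, cond in product(tasks, learning)]
groups.append("intentional control (no task)")

print(len(groups))  # 11 groups in total
```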
• FINDINGS & CONCLUSIONS After testing all
the participants for recall of the original word list
Hyde and Jenkins found that there were
minimal differences in the number of items
correctly recalled between the intentional
learning groups and the incidental learning
groups.
• This finding is predicted by Craik and Lockhart
and supports LOP because they believe that
retention is simply a byproduct of processing
and so intention to learn is unnecessary for
learning to occur.
• In addition, Hyde & Jenkins found that the
pleasantness rating and rating frequency of usage
tasks produced the best recall.
• It was found that recall was significantly better for
words which had been analysed semantically (deep)
(i.e. rated for pleasantness or for frequency) than for
words which had been processed more superficially
(shallow - structural) (i.e. detecting "e" and "g").
• This is also in line with the LOP model because
semantic analysis is assumed to be a deeper level of
processing than structural (shallow) analysis.
• They claimed that this was because these tasks
involved semantic processing whereas the other tasks
did not.
• One interesting finding was that incidental learners
performed just as well as intentional learners in all
tasks – this suggests that it is the nature of the
processing, rather than the intention to learn, that
determines how much you will remember.
• Bear this in mind when you are revising – the more
processing you perform on the information (e.g.
quizzes, essays, spider diagrams etc.) the more likely
you are to remember it .
• EVALUATION:
• It is not totally clear what level of processing is used for
the different tasks.
• Is it really the depth of processing, or the amount
of effort that people put into processing, that
determines recall?
• Consider ecological validity, the experimental method, and applicability.
The Criticisms/Limitations of the LOP
Model
• It is usually the case that deeper levels of
processing do lead to better recall.
• However, there is an argument about whether it
is the depth of processing or the amount of
processing effort that leads to better
recall – see Tyler et al (1979).
• Also, the MSM and the research that
supports it can be used as a counterclaim in
evaluation of the LOP – as the LOP fails to recognize
that there are indeed two separate stores of
memory.
Tyler et al (1979) experiment on the effect of
cognitive effort on recall
• AIM: Tyler et al (1979) investigated the effects
of cognitive effort on memory
• PROCEDURES: They gave participants two
sets of anagrams to solve – easy ones, such as
DOCTRO, or difficult ones, such as TREBUT.
• Afterwards, participants were given an
unexpected test for recall of the anagram words.
• FINDINGS & CONCLUSIONS: Although the
processing level was the same, because participants
were processing on the basis of meaning, participants
remembered more of the difficult anagram words than
the easy ones.
• So Tyler et al concluded that retention is a function of
processing effort, not processing depth.
• KEY EVALUATION POINT : Craik and Lockhart
themselves (1986) have since suggested that factors
such as elaboration and distinctiveness are also
important in determining the rate of retention; this idea
has been supported by research.
• For example, Hunt and Elliott (1980) found that people
recalled words with distinctive sequences of tall and
short letters better than words with less distinctive
arrangements of letters
Palmere et al. (1983) experiment on the effect of
elaboration and recall
• AIM: Palmere et al. (1983) conducted a study of the effects of
elaboration on recall.
• PROCEDURE: They made up a 32-paragraph
description of a fictitious African nation.
1. Eight paragraphs consisted of a sentence containing
a main idea, followed by three sentences each
providing an example of the main theme;
2. Eight paragraphs consisted of one main sentence
followed by two supplementary sentences;
3. Eight paragraphs consisted of one main sentence
followed by a single supplementary sentence
4. The remaining eight paragraphs consisted of a single
main sentence with no supplementary information
• FINDINGS & CONCLUSIONS:
• Recall of the main ideas varied as a function of the
amount of elaboration (extra info given).
• Significantly more main ideas were recalled from the
elaborated paragraphs than from the single-sentence
paragraphs.
• This kind of evidence suggests that the effects of
processing on retention are not as simple as first
proposed by the levels of processing model.
• EVALUATION: suggests that elaboration is important
– and Craik & Lockhart (1986) did update their model
to include ‘elaboration & distinctiveness’ as having a
major influence on retention
General evaluative points relating to the
research
• Another problem is that participants typically
spend a longer time processing the deeper or
more difficult tasks.
• So, it could be that the results are partly due to
more time being spent on the material.
• The type of processing, the amount of effort &
the length of time spent on processing tend to
be confounded.
• Deeper processing goes with more effort and
more time, so it is difficult to know which factor
influences the results.
• Associated with the previous point, it is often difficult
with many of the tasks used in levels of processing
studies to be sure what the level of processing actually
is.
• For example, in the study by Hyde & Jenkins
(described above) they assumed that judging a word’s
frequency involved thinking of its meaning, but it is not
altogether clear why this should be so.
• Also, they argued that the task of deciding the part of
speech to which a word belongs is a shallow
processing task - but other researchers claim that the
task involves deep or semantic processing.
• So, a major problem is the lack of any independent
measure of processing depth. How deep is deep?
• A major problem with the LOP is circularity –
i.e. there is no independent definition of
depth.
• The model predicts that deep processing will
lead to better retention - researchers then
conclude that, because retention is better
after certain orienting tasks, they must, by
definition, involve deep processing
• The model is descriptive rather than
explanatory
• Eysenck (1978) claims
• “In view of the vagueness with which depth is
defined, there is danger of using retention-test
performance to provide information about the
depth of processing and then using the ... depth of
processing to ‘explain’ the retention-test
performance, a self-defeating exercise in
circularity”.
• What he means is that if a person performs well on
a test of recall after performing a particular task
then some researchers will claim that they must
have performed a deep level of processing on the
information in order to remember it - a circular
argument.
• Another objection is that levels of processing
theory does not really explain why deeper levels of
processing are more effective – it is descriptive
rather than explanatory.
• Eysenck (1990) claims that it describes rather
than explains what is happening.
• However, recent studies have clarified this
point - it appears that deeper coding produces
better retention because it is more elaborate.
• Elaborative encoding enriches the memory
representation of an item by activating many
aspects of its meaning and linking it into the
pre-existing network of semantic associations.
• Deep level semantic coding tends to be more
elaborated than shallow physical coding and
this is probably why it worked better.
General evaluative
points for LOP model of memory