Anterior cingulate cortex
Medial surface of left cerebral hemisphere, with anterior cingulate highlighted
Medial surface of right hemisphere, with Brodmann's areas numbered
Latin: Cortex cingularis anterior
NeuroNames: 161
NeuroLex ID: birnlex_936
The anterior cingulate cortex (ACC) is the frontal part of the cingulate
cortex that resembles a "collar" surrounding the frontal part of the corpus
callosum. It consists of Brodmann areas 24, 32, and 33.
It appears to play a role in a wide variety of autonomic functions, such as
regulating blood pressure and heart rate.[citation needed]
It is also involved in certain higher-level functions, such as attention
allocation,[1] reward anticipation, decision-making,[2] ethics and morality,[3] impulse
control (e.g. performance monitoring and error detection),[4] and emotion.[5]
[6]
Sagittal MRI slice with highlighting indicating location of the anterior
cingulate cortex
Contents
• 1 Anatomy
• 2 Tasks
• 3 Functions
• 3.1 Error detection and conflict monitoring
• 3.2 Social evaluation
• 3.3 Reward-based learning theory
• 3.4 Role in consciousness
• 3.5 Role in registering pain
• 4 Pathology
• 5 Additional images
• 6 See also
• 7 References
Anatomy
Anterior cingulate gyrus of left cerebral hemisphere, shown in red
The anterior cingulate cortex can be divided anatomically based on
cognitive (dorsal), and emotional (ventral) components.[7] The dorsal part
of the ACC is connected with the prefrontal cortex and parietal cortex, as
well as the motor system and the frontal eye fields,[8] making it a central
station for processing top-down and bottom-up stimuli and assigning
appropriate control to other areas in the brain. By contrast, the ventral part
of the ACC is connected with the amygdala, nucleus accumbens,
hypothalamus, hippocampus, and anterior insula, and is involved in
assessing the salience of emotion and motivational information. The ACC
seems to be especially involved when effort is needed to carry out a task,
such as in early learning and problem-solving.[9]
On a cellular level, the ACC is unique in its abundance of specialized
neurons called spindle cells,[10] or von Economo neurons. These cells are
a relatively recent occurrence in evolutionary terms (found only in humans
and other primates, cetaceans, and elephants) and contribute to this brain
region's emphasis on addressing difficult problems, as well as the
pathologies related to the ACC.[11]
Tasks
A typical task that activates the ACC involves eliciting some form of
conflict within the participant that can potentially result in an error. One
such task is called the Eriksen flanker task and consists of an arrow
pointing to the left or right, which is flanked by two distractor arrows
creating either compatible (<<<<<) or incompatible (>><>>) trials.[12]
Another very common conflict-inducing stimulus that activates the ACC is
the Stroop task, which involves naming the color ink of words that are
either congruent (RED written in red) or incongruent (RED written in
blue).[13] Conflict occurs because people’s reading abilities interfere with
their attempt to correctly name the word’s ink color. A variation of this task
is the Counting-Stroop, during which people count either neutral stimuli
(‘dog’ presented four times) or interfering stimuli (‘three’ presented four
times) by pressing a button. Another version of the Stroop task named the
Emotional Counting Stroop is identical to the Counting Stroop test, except
that it also uses segmented or repeated emotional words such as "murder"
during the interference part of the task.
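To make the structure of these conflict tasks concrete, the following short Python sketch generates flanker and Stroop trial lists of the kind described above. It is an illustrative sketch only: the stimulus sets, colours, and trial proportions are assumptions for demonstration, not parameters taken from any cited study.

import random

def make_flanker_trial(compatible):
    """Build a five-arrow flanker stimulus; the centre arrow is the target."""
    target = random.choice("<>")
    flanker = target if compatible else ("<" if target == ">" else ">")
    # e.g. "<<<<<" (compatible) or ">><>>" (incompatible)
    return flanker * 2 + target + flanker * 2

def make_stroop_trial(congruent):
    """Return (word, ink colour); congruent trials use matching word and ink."""
    colours = ["red", "green", "blue"]
    word = random.choice(colours)
    ink = word if congruent else random.choice([c for c in colours if c != word])
    return word.upper(), ink

# A short mixed block with roughly half conflict trials.
flanker_block = [make_flanker_trial(random.random() < 0.5) for _ in range(8)]
stroop_block = [make_stroop_trial(random.random() < 0.5) for _ in range(8)]
print(flanker_block)
print(stroop_block)

In an actual experiment the incongruent items are the ones expected to drive ACC activation, since they pit the prepotent response against the instructed one.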
Functions
Many studies attribute specific functions such as error detection,
anticipation of tasks, attention,[13][14] motivation, and modulation of
emotional responses to the ACC.[7][8][15]
Error detection and conflict monitoring
The most basic form of ACC theory states that the ACC is involved with
error detection.[7] Evidence has been derived from studies involving a
Stroop task.[8] However, the ACC is also active during correct responses, as shown using a letter task in which participants had to respond to the letter X only when it followed an A and ignore all other letter combinations, with some letter combinations competing more strongly than others.[16] ACC activation was greater for the more competitive stimuli.
A similar theory posits that the ACC’s primary function is the monitoring of conflict. In the Eriksen flanker task, incompatible trials produce the most conflict and the most ACC activation. Upon detecting a conflict, the ACC provides cues to other brain areas to cope with the conflicting control systems.
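Computational accounts of conflict monitoring (for example, the model of Botvinick and colleagues) often operationalize conflict as the simultaneous activation of mutually incompatible response units. The sketch below illustrates that idea with made-up activation values; it is a hedged illustration, not a model reported in the sources cited here.

def response_conflict(left_activation, right_activation):
    """Conflict as the product of co-active, mutually exclusive response units
    (an energy-style measure used in conflict-monitoring models)."""
    return left_activation * right_activation

# Compatible flanker trial: distractors support the correct response, so conflict is low.
print(response_conflict(left_activation=0.9, right_activation=0.1))  # 0.09
# Incompatible trial: distractors also drive the competing response, so conflict is high.
print(response_conflict(left_activation=0.7, right_activation=0.6))  # 0.42

On this view, a high conflict value is the signal the ACC would pass on to control regions elsewhere in the brain.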
Evidence from electrical studies
Evidence for ACC as having an error detection function comes from
observations of error-related negativity (ERN) uniquely generated within
the ACC upon error occurrences.[7][17][18][19] A distinction has been
made between an ERP following incorrect responses (response ERN) and
a signal after subjects receive feedback after erroneous responses
(feedback ERN).
No-one has clearly demonstrated that the ERN comes from the
ACC[citation needed], but patients with lateral PFC damage do show
reduced ERNs.[20]
Reinforcement learning ERN theory posits that a mismatch between actual response execution and appropriate response execution results in an ERN discharge.[7][18] Furthermore, this theory
predicts that, when the ACC receives conflicting input from control areas
in the brain, it determines and allocates which area should be given control
over the motor system. Varying levels of dopamine are believed to
influence the optimization of this filter system by providing expectations
about the outcomes of an event. The ERN, then, serves as a beacon to
highlight the violation of an expectation.[19] Research on the occurrence
of the feedback ERN shows evidence that this potential has larger
amplitudes when violations of expectancy are large. In other words, if an
event is not likely to happen, the feedback ERN will be larger if no error is
detected. Other studies have examined whether the ERN is elicited by
varying the cost of an error and the evaluation of a response.[18]
In these trials, feedback is given about whether the participant has gained
or lost money after a response. Amplitudes of ERN responses with small
gains and small losses were similar. No ERN was elicited for losses, whereas an ERN was elicited for failures to win, even though both outcomes are equivalent. This finding suggests that the monitoring of wins and losses is based on the relative expected gains and losses: when the outcome differs from what was expected, the ERN is larger than for expected outcomes. ERN studies have also localized specific functions of the ACC.[19]
The rostral ACC seems to be active after an error commission, indicating
an error response function, whereas the dorsal ACC is active after both an
error and feedback, suggesting a more evaluative function (for fMRI
evidence, see also[21][22][23] ). This evaluation is emotional in nature and
highlights the amount of distress associated with a certain error.[7]
Summarizing the evidence found by ERN studies, it appears to be the case
that ACC receives information about a stimulus, selects an appropriate
response, monitors the action, and adapts behavior if there is a violation of
expectancy.[19]
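The expectancy-violation account summarized above lends itself to a toy reward-prediction-error formulation. The following Python fragment is a hedged illustration only: the linear form and the parameter values are assumptions, not taken from the ERN studies cited in this section.

def feedback_ern_amplitude(expected_value, obtained_value, gain=1.0):
    """Toy model: simulated feedback-ERN amplitude grows with the negative
    prediction error, i.e. with how much worse the outcome is than expected."""
    prediction_error = obtained_value - expected_value
    return gain * max(0.0, -prediction_error)

# A loss that was thought unlikely (outcome much worse than expected) -> large ERN.
print(feedback_ern_amplitude(expected_value=0.8, obtained_value=0.0))  # 0.8
# A loss that was fully expected -> little or no ERN.
print(feedback_ern_amplitude(expected_value=0.1, obtained_value=0.0))  # 0.1

The point of the toy model is simply that amplitude tracks the size of the expectancy violation, matching the observation that improbable outcomes produce larger feedback ERNs.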
Evidence against error detection and conflict monitoring theory
Studies examining task performance related to error and conflict processes
in patients with ACC damage cast doubt on the necessity of this region for
these functions. The error detection and conflict monitoring theories
cannot explain some evidence obtained by electrical studies[15][18][19] demonstrating the effects of giving feedback after responses, because these theories describe the ACC as strictly monitoring conflict, not as having evaluative properties.
It has been stated that "The cognitive consequences of anterior cingulate lesions remain rather equivocal, with a number of case reports of intact general neuropsychological and executive function in the presence of large anterior dorsal cingulate lesions."[24] For an alternative view of the anterior cingulate, see Rushworth's review (2007).[25]
Social evaluation
Activity in the dorsal anterior cingulate cortex (dACC) has been
implicated in processing both the detection and appraisal of social
processes, including social exclusion. When exposed to repeated personal
social evaluative tasks, nondepressed women showed reduced fMRI
BOLD activation in the dACC on the second exposure, while women with
a history of depression exhibited enhanced BOLD activation. This
differential activity may reflect enhanced rumination about social
evaluation or enhanced arousal associated with repeated social evaluation.
[26]
Reward-based learning theory
A more comprehensive and recent theory describes the ACC as a more active component and posits that it detects and monitors errors, evaluates the degree of the error, and then suggests an appropriate form of action to be implemented by the motor system. Earlier evidence from electrical studies indicates that the ACC has an evaluative component, which fMRI studies have since confirmed. The dorsal and rostral areas of the ACC both
seem to be affected by rewards and losses associated with errors. During
one study, participants received monetary rewards and losses for correct
and incorrect responses, respectively.[21]
The largest dACC activation occurred during loss trials. This stimulus did not elicit any errors, so error detection and monitoring theories cannot fully explain why this ACC activation occurred. The dorsal part of the ACC seems to play a key role in reward-based decision-making and learning. The rostral part of the ACC, on the other hand, is believed to be more involved in affective responses to errors. In an expansion of the previously described experiment, the effects of rewards and costs on ACC activation during error commission were examined.[23]
Participants performed a version of the Eriksen flanker task using a set of
letters assigned to each response button instead of arrows.
Targets were flanked by either a congruent or an incongruent set of letters.
Using an image of a thumb (up, down, or neutral), participants received
feedback on how much money they gained or lost. The researchers found
greater rostral ACC activation when participants lost money during the
trials. The participants reported being frustrated when making mistakes.
Because the ACC is intricately involved in error detection and affective responses, it may well form the basis of self-confidence. Taken together, these findings indicate that both the dorsal and rostral areas are involved in evaluating the extent of the error and optimizing subsequent responses. A study supporting this notion explored the functions of the dorsal and rostral areas of the ACC using a saccade task.[22]
Participants were shown a cue that indicated whether they had to make
either a pro-saccade or an anti-saccade. An anti-saccade requires
suppression of a distracting cue because the target appears in the opposite
location, causing conflict. Results showed differing activation for the
rostral and dorsal ACC areas. Early correct anti-saccade performance was
associated with rostral activation. The dorsal area, on the other hand, was
activated when errors were committed, but also for correct responses.
Whenever the dorsal area was active, fewer errors were committed, providing more evidence that the ACC is involved with effortful
performance. The second finding showed that, during error trials, the ACC
activated later than for correct responses, clearly indicating a kind of
evaluative function.
Role in consciousness
The ACC area in the brain is associated with many functions that are
correlated with conscious experience. Greater ACC activation levels were
present in more emotionally aware female participants when shown short
‘emotional’ video clips.[27] Better emotional awareness is associated with
improved recognition of emotional cues or targets, which is reflected by
ACC activation.
The idea that awareness is associated with the ACC is supported by evidence that a larger error-related negativity is produced when subjects' responses are not congruent with their actual responses.[19]
One study found an ERN even when subjects were not aware of their error.
[19] Awareness may not be necessary to elicit an ERN, but it could
influence the amplitude of the feedback ERN. Relating back
to the reward-based learning theory, awareness could modulate expectancy
violations. Increased awareness could result in decreased violations of
expectancies and decreased awareness could achieve the opposite effect.
Further research is needed to completely understand the effects of
awareness on ACC activation.
In The Astonishing Hypothesis, Francis Crick identifies the anterior cingulate, specifically the anterior cingulate sulcus, as a likely candidate
for the center of free will in humans. Crick bases this suggestion on scans
of patients with specific lesions that seem to interfere with their sense of
independent will, such as alien hand syndrome.
Role in registering pain
The ACC registers physical pain, as shown by functional MRI studies in which increases in signal intensity, typically in the posterior part of area 24 of the ACC, correlated with pain intensity. When this
pain-related activation was accompanied by attention-demanding cognitive
tasks (verbal fluency), the attention-demanding tasks increased signal
intensity in a region of the ACC anterior and/or superior to the pain-related
activation region.[28] The ACC is the cortical area that has been most
frequently linked to the experience of pain.[29] It appears to be involved in
the emotional reaction to pain rather than to the perception of pain itself.
[30]
Evidence from social neuroscience studies has suggested that, in addition to its role in physical pain, the ACC may also be involved in monitoring painful social situations, such as exclusion or rejection. When
participants felt socially excluded in an fMRI virtual ball throwing game in
which the ball was never thrown to the participant, the ACC showed
activation. Further, this activation was correlated with a self-reported
measure of social distress, indicating that the ACC may be involved in the
detection and monitoring of social situations which may cause
social/emotional pain, rather than just physical pain.[31]
Pathology
Studying the effects of damage to the ACC provides insights into the type
of functions it serves in the intact brain. Behavior that is associated with
lesions in the ACC includes: inability to detect errors, severe difficulty
with resolving stimulus conflict in a Stroop task, emotional instability,
inattention, and akinetic mutism.[32][7][8] There is evidence of ACC damage in patients with schizophrenia; studies have shown that these patients have difficulty dealing with conflicting spatial locations in a Stroop-like task and show abnormal ERNs.[8][18] Participants with
ADHD were found to have reduced activation in the dorsal area of the
ACC when performing the Stroop task.[33] Together, these findings
corroborate results from imaging and electrical studies about the variety of
functions attributed to the ACC.
There is evidence that this area may play a role in obsessive–compulsive disorder, because an apparently abnormally low level of glutamate activity in this region has been observed in patients with the disorder,[34] in contrast to many other brain regions, which are thought to have excessive glutamate activity in OCD. Recent SDM meta-analyses of voxel-based morphometry studies comparing people with OCD and healthy controls have found that people with OCD have increased grey matter volumes in the bilateral lenticular nuclei, extending to the caudate nuclei, and decreased grey matter volumes in the bilateral dorsal medial frontal/anterior cingulate cortex.[35][36] These findings contrast with those in people with other anxiety disorders, who show decreased (rather than increased) grey matter volumes in the bilateral lenticular/caudate nuclei, along with decreased grey matter volumes in the bilateral dorsal medial frontal/anterior cingulate gyri.[36]
The ACC, along with the amygdala, has been suggested to have possible links with social anxiety, but this research is still in its early stages.[37] A more recent study, from Wake Forest Baptist Medical Center, supports the relationship between the ACC and anxiety regulation by showing that mindfulness practice reduces anxiety specifically through the ACC.[38]
The adjacent subcallosal cingulate gyrus has been implicated in major
depression and research indicates that deep-brain stimulation of the region
could act to alleviate depressive symptoms.[39] Although people suffering
from depression had smaller subgenual ACCs,[40] their ACCs were more
active when adjusted for size. This correlates well with increased
subgenual ACC activity during sadness in healthy people,[41] and
normalization of activity after successful treatment.[42] Of note, the
activity of the subgenual cingulate cortex correlates with individual
differences in negative affect during the baseline resting state; in other
words, the greater the subgenual activity, the greater the negative
affectivity in temperament.[43]
A study of brain MRIs of adults who had previously participated in the Cincinnati Lead Study found that people who had suffered higher levels of lead exposure as children had decreased brain size as adults. This effect
was most pronounced in the ACC (Cecil et al., 2008)[44] and is thought to
relate to the cognitive and behavioral deficits of affected individuals.
Impairments in the development of the anterior cingulate, together with
impairments in the dorsal medial-frontal cortex, may constitute a neural
substrate for socio-cognitive deficits in autism, such as social orienting and
joint attention.[45]
Additional images
Medial surface of human cerebral cortex - gyri
Anterior cingulate cortex of a monkey (Macaca mulatta).
See also
• Cingulate cortex
• Cingulate gyrus
• Cingulate sulcus
• Subgenual cingulate cortex
• Subcallosal cortex
https://en.wikipedia.org/wiki/Language_processing_in_the_brain
Language Processing in the Brain
……………….
Akinetic mutism
Akinetic mutism is a medical term describing patients who tend neither to move (akinesia) nor to speak (mutism). Akinetic mutism was first described in
1941 as a mental state where patients lack the ability to move or speak.[1]
However, their eyes may follow their observer or be diverted by sound.[1]
Patients lack most motor functions such as speech, facial expressions, and
gestures, but demonstrate apparent alertness.[2] They exhibit reduced
activity and slowness, and can speak in whispered monosyllables.[1][3] Patients
often show visual fixation on their examiner, move their eyes in response
to an auditory stimulus, or move after often repeated commands.[1][2]
Patients with akinetic mutism are not paralyzed, but lack the will to move.[1]
Many patients describe that as soon as they 'will' or attempt a movement, a
'counter-will' or 'resistance' rises up to meet them.[4]
Contents
• 1 Description
• 1.1 Frontal akinetic mutism
• 1.2 Mesencephalic akinetic mutism
• 2 Symptoms
• 3 Causes
• 3.1 Frontal lobe damage
• 3.2 Thalamic stroke
• 3.3 Ablation of cingulate gyrus
• 3.4 Other
• 4 Diagnosis and treatment
• 4.1 Magnesium sulfate
• 4.2 Cyst puncture
• 4.3 Dopamine agonist therapy
• 5 History
• 6 See also
• 7 References
Description
Frontal akinetic mutism can occur after a frontal lobe injury
The mesencephalic form of akinetic mutism occurs in the midbrain.
Akinetic mutism varies across patients. Its form, intensity, and clinical features correspond more closely to its functional anatomy than to its pathology. However, akinetic mutism most often appears in two
different forms: frontal and mesencephalic.[2]
Frontal akinetic mutism
Akinetic mutism can occur in the frontal region of the brain because of bilateral frontal lobe damage. Akinetic mutism as a result of
frontal lobe damage is clinically characterized as hyperpathic.[5] It occurs
in patients with bilateral circulatory disturbances in the supply area of the
anterior cerebral artery.[2]
Mesencephalic akinetic mutism
Akinetic mutism can also occur as a result of damage to the mesencephalic
region of the brain. Mesencephalic akinetic mutism is clinically
categorized as somnolent or apathetic akinetic mutism.[5] It is
characterized by vertical gaze palsy and ophthalmoplegia. This state of
akinetic mutism varies in intensity, but it is distinguished by drowsiness,
lack of motivation, hyper-somnolence, and reduction in spontaneous
verbal and motor actions.[2][5]
Symptoms
Symptoms of akinetic mutism progress over time.[2] Akinetic mutism itself typically occurs approximately four months after symptoms first appear.[2]
• Lack of motor function (but not paralysis)[1]
• Lack of speech [1]
• Apathy[6]
• Slowness[6]
• Disinhibition[3]
Causes
Many cases of akinetic mutism have occurred after a thalamic stroke.
Akinetic mutism can be caused by a variety of things. It often occurs after
brain injury or as a symptom of other diseases.
Frontal lobe damage
Akinetic mutism is often the result of severe frontal lobe injury in which
the pattern of inhibitory control is one of increasing passivity and
gradually decreasing speech and motion.
Thalamic stroke
Many cases of akinetic mutism occur after a thalamic stroke.[3] The
thalamus helps regulate consciousness and alertness.
Ablation of cingulate gyrus
Another cause of both akinesia and mutism is ablation of the cingulate
gyrus. Destruction of the cingulate gyrus has been used in the treatment of
psychosis. Such lesions result in akinesia, mutism, apathy, and indifference
to painful stimuli.[7] The anterior cingulate cortex is thought to supply a
"global energizing factor" that stimulates decision making.[8] When the
anterior cingulate cortex is damaged, it can result in akinetic mutism.
Other
Akinetic mutism is a symptom during the final stages of Creutzfeldt–Jakob
disease (a rare degenerative brain disease) and can help diagnose patients
with this disease.[2][9] It can also occur in a stroke that affects both
anterior cerebral artery territories. Another cause is neurotoxicity due to
exposure to certain drugs such as tacrolimus and cyclosporine.
Other causes of akinetic mutism are as follows:
• Respiratory arrest and cerebral hypoxia [6]
• Acute cases of encephalitis lethargica[3]
• Meningitis[3]
• Hydrocephalus[3]
• Trauma[3]
• Tumors[3]
• Aneurysms [3]
• Olfactory groove meningioma
• Cyst in third ventricle [1]
• Toxic lesions and infections of the central nervous system[10]
• Delayed post-hypoxic leukoencephalopathy (DPHL) [6]
• Creutzfeldt–Jakob disease (mesencephalic form) [2]
Diagnosis and treatment
Akinetic mutism can be misdiagnosed as depression, delirium, or locked-
in syndrome, all of which are common following a stroke.[3] Patients with
depression can experience apathy, slurring of speech, and body movements
similar to akinetic mutism. Similarly to akinetic mutism, patients with
locked-in syndrome experience paralysis and can only communicate with
their eyes.[3] Correct diagnosis is important to ensure proper treatment. A
variety of treatments for akinetic mutism have been documented, but
treatments vary between patients and cases.
Magnesium sulfate (Epsom salt)
Treatments using intravenous magnesium sulfate have been shown to reduce the
symptoms of akinetic mutism. In one case, a 59-year-old woman was
administered intravenous magnesium sulfate in an attempt to resolve her
akinetic mutism. The patient was given 500 mg of magnesium every eight
hours, and improvement was seen after 24 hours. She became more verbal
and attentive, and treatment was increased to 1000 mg every eight hours as her condition continued to improve.[11]
Cyst puncture
As seen in the case of Elsie Nicks, the puncture or removal of a cyst
causing akinetic mutism can relieve symptoms almost immediately.
However, if the cyst fills up again, the symptoms can reappear.[1]
Dopamine agonist therapy
Symptoms of akinetic mutism suggest a possible presynaptic deficit in the
nigrostriatal pathway, which transmits dopamine. Some patients with
akinetic mutism have been shown to improve with levodopa or dopamine
agonist therapy,[12] or by repleting dopamine in the motivational circuit
with stimulants, antidepressants, or agonists such as bromocriptine or
amantadine.[6]
Other treatments include amantadine, carbidopa-levodopa, donepezil,
memantine, and oral magnesium oxide.[6][11]
History
Fourteen-year-old Elsie Nicks was the first patient to be diagnosed with
akinetic mutism by Cairns in 1941. She suffered from severe headaches
her entire life and was eventually given morphine to help with treatment.
She began to enter a state of akinetic mutism, experiencing apathy and loss
of speech and motor control. A cyst on her right lateral ventricle was
tapped, and as soon as the needle advanced toward the cyst, she let out a
loud noise and was able to state her name, age, and address. After her cyst
was emptied, she regained her alertness and intelligence, and she had no
recollection of her time spent in the hospital. The cyst was drained two
more times over the next seven months and was eventually removed. After
eight months of rehabilitation, Elsie no longer experienced headaches or
akinetic mutism symptoms.[1]
See also
• Selective mutism
• Locked-in syndrome
• Athymhormic syndrome
• Catatonia
• Aboulia
………………..
Locked-in syndrome
Synonyms: Cerebromedullospinal disconnection,[1] de-efferented state, pseudocoma,[2] ventral pontine syndrome
Locked-in syndrome can be caused by stroke at the level of the basilar artery denying blood to the pons, among other causes.
Specialty: Neurology, Psychiatry
Locked-in syndrome (LIS), also known as pseudocoma, is a condition in
which a patient is aware but cannot move or communicate verbally due to
complete paralysis of nearly all voluntary muscles in the body except for
vertical eye movements and blinking. The individual is conscious and
sufficiently intact cognitively to be able to communicate with eye
movements.[3] The EEG is normal in locked-in syndrome. Total locked-in
syndrome, or completely locked-in state (CLIS), is a version of locked-in
syndrome wherein the eyes are paralyzed as well.[4][5] Fred Plum and Jerome
Posner coined the term for this disorder in 1966.[6][7]
Contents
• 1 Signs and symptoms
• 2 Causes
• 3 Diagnosis
• 3.1 Similar conditions
• 4 Treatment
• 5 Prognosis
• 6 Research
• 7 See also
• 8 References
• 9 Further reading
• 10 External links
Signs and symptoms
Locked-in syndrome usually results in quadriplegia and the inability to
speak in otherwise cognitively intact individuals. Those with locked-in
syndrome may be able to communicate with others through coded
messages by blinking or moving their eyes, which are often not affected by
the paralysis. The symptoms are similar to those of sleep paralysis.
Patients who have locked-in syndrome are conscious and aware, with no
loss of cognitive function. They can sometimes retain proprioception and
sensation throughout their bodies. Some patients may have the ability to
move certain facial muscles, and most often some or all of the extraocular
muscles. Individuals with the syndrome lack coordination between
breathing and voice.[8] This prevents them from producing voluntary sounds,
though the vocal cords are not paralysed.[8]
Causes
In children, the most common cause is a stroke of the ventral pons.[9]
Unlike persistent vegetative state, in which the upper portions of the brain
are damaged and the lower portions are spared, locked-in syndrome is
caused by damage to specific portions of the lower brain and brainstem,
with no damage to the upper brain.
Possible causes of locked-in syndrome include:
• Poisoning cases – most frequently from a krait bite and other neurotoxic venoms, as these toxins usually cannot cross the blood–brain barrier
• Brainstem stroke
• Diseases of the circulatory system
• Medication overdose[examples needed]
• Damage to nerve cells, particularly destruction of the myelin sheath,
caused by disease or osmotic demyelination syndrome (formerly
designated central pontine myelinolysis) secondary to excessively
rapid correction of hyponatremia (>1 mEq/L/h)[10]
• A stroke or brain hemorrhage, usually of the basilar artery
• Traumatic brain injury
• Lesions of the brainstem
Curare poisoning mimics a total locked-in syndrome by causing paralysis
of all voluntarily controlled skeletal muscles.[11] The respiratory muscles
are also paralyzed, but the victim can be kept alive by artificial respiration,
such as mouth-to-mouth resuscitation. In a study of 29 army volunteers
who were paralyzed with curare, artificial respiration kept oxygen saturation above 85% at all times,[12] a level at which there is no evidence of an altered state of consciousness.[13] Spontaneous breathing resumes once the action of curare ends, which generally takes between 30 minutes[14] and eight hours,[15] depending on the variant of the toxin and the dosage.
Diagnosis
Locked-in syndrome can be difficult to diagnose. In a 2002 survey of 44
people with LIS, it took almost 3 months to recognize and diagnose the
condition after it had begun.[16] Locked-in syndrome may mimic loss of
consciousness in patients, or, in the case that respiratory control is lost,
may even resemble death. People are also unable to actuate standard motor
responses such as withdrawal from pain; as a result, testing often requires
making requests of the patient such as blinking or vertical eye movement.
Brain imaging may provide additional indicators of locked-in syndrome, as
brain imaging provides clues as to whether or not brain function has been
lost. Additionally, an EEG can allow the observation of sleep-wake
patterns indicating that the patient is not unconscious but simply unable to
move.[17]
Similar conditions
• Amyotrophic lateral sclerosis
• Bilateral brainstem tumors
• Brain death (of the whole brain or the brain stem or other part)
• Coma (deep and/or irreversible)
• Guillain–Barré syndrome
• Myasthenia gravis
• Poliomyelitis
• Polyneuritis
• Vegetative state (chronic or otherwise)
Treatment
Neither a standard treatment nor a cure is available. Stimulation of muscle
reflexes with electrodes (NMES) has been known to help patients regain
some muscle function. Other courses of treatment are often symptomatic.
[18] Assistive computer interface technologies, such as Dasher combined
with eye tracking, may be used to help people with LIS communicate with
their environment.
Prognosis
It is extremely rare for any significant motor function to return. The
majority of locked-in syndrome patients do not regain motor control.
However, some people with the condition continue to live much longer,
[19][20] while in exceptional cases, like those of Kerry Pink[21] and Kate
Allatt,[22] a full spontaneous recovery may be achieved.
Research
New brain-computer interfaces (BCIs) may provide future remedies. One
effort in 2002 allowed a fully locked-in patient to answer yes-or-no
questions;[23][24] others reported in 2017 having repeated this result with
a larger study.[25] In 2006, researchers created and successfully tested a
neural interface which allowed someone with locked-in syndrome to
operate a web browser.[26] Some scientists have reported that they have
developed a technique that allows locked-in patients to communicate via
sniffing.[27]
See also
• Akinetic mutism
• Lock In, a near-future science fiction novel by John Scalzi
• Martin Pistorius, the author who wrote the autobiographical book
Ghost Boy
• List of people with locked-in syndrome
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4098851/
Wernicke’s Area Revisited: Parallel Streams and Word Processing
Iain DeWitt1 and Josef P. Rauschecker2
Abstract
Auditory word-form recognition was originally proposed by Wernicke to
occur within left superior temporal gyrus (STG), later further specified to
be in posterior STG. To account for clinical observations (specifically
paraphasia), Wernicke proposed his sensory speech center was also
essential for correcting output from frontal speech-motor regions. Recent
work, in contrast, has established a role for anterior STG, part of the
auditory ventral stream, in the recognition of species-specific vocalizations
in nonhuman primates and word-form recognition in humans. Recent work
also suggests monitoring self-produced speech and motor control are
associated with posterior STG, part of the auditory dorsal stream. Working
without quantitative methods or evidence of sensory cortex's hierarchical
organization, Wernicke co-localized functions that today appear
dissociable. “Wernicke’s area” thus may be better construed as two cortical
modules, an auditory word-form area (AWFA) in the auditory ventral
stream and an “inner speech area” in the auditory dorsal stream.
Keywords: Dual-stream model, word recognition, language
comprehension, pure word deafness, Wernicke's aphasia
1. Introduction
The Dual Stream model of auditory cortex, first proposed on the basis of
neurophysiological studies in the macaque monkey (Rauschecker, 1997;
Rauschecker, 1998b; Romanski et al., 1999; Tian et al., 2001), has had a
profound influence on current understanding of language organization in
human cortex (Binder et al., 2000; Hickok & Poeppel, 2000; Scott et al.,
2000). The similarity between single-cell mechanisms of communication-
call processing in monkeys and phoneme identification in humans is
immediately apparent and has led to a hierarchical model of speech
processing in the auditory ventral stream that is now almost universally
accepted (Hickok & Poeppel, 2007; DeWitt & Rauschecker, 2012).
Adoption of the model, however, was not without controversy. Classical
neurology identified posterior superior temporal cortex (ST) as the site of
word recognition (Penfield & Roberts, 1959; Geschwind, 1970), but
results from monkeys showed anterior, not posterior, ST to be most
selective for communication calls (Tian et al., 2001). Posterior ST, on the
other hand, was found to be selective for sound location in monkeys
(Rauschecker & Tian, 2000; Recanzone, 2000; Tian et al., 2001). This
paradox was noted in an early paper on dual-stream concepts in audition
and language:
Speech perception in humans is traditionally associated with the
posterior portion of the [superior temporal] region, often referred
to as “Wernicke’s area.” In rhesus monkeys…neurons in this
region…are highly selective for the spatial location of sounds…
Neurons in the anterior belt regions, on the other hand, are most
selective for [monkey calls] (Rauschecker & Tian, 2000, pp.
11804–11805).
Initially, one could have taken this apparent dissociation between human
and monkey cortex as grounds for dismissing the applicability of the
monkey model to human speech processing (i.e., divergent evolution).
However, the selectivity observed in macaque posterior ST for the location
of sound sources was subsequently also observed in humans by numerous
studies using functional magnetic resonance imaging (fMRI), as well as
electro- and magneto-encephalography (Arnott et al., 2004; Krumbholz et
al., 2005; Tata & Ward, 2005; Zimmer & Macaluso, 2005; Ahveninen et
al., 2006; Deouell et al., 2007). This generally substantiated comparisons
between human and monkey auditory cortex, affirming the role implied by
the monkey data for human anterior ST in word recognition. Still, apparent
conflict between classical neurological models and the monkey work led to
a spectrum of conclusions about the relative involvement of anterior and
posterior ST in word recognition (Binder et al., 2000; Hickok & Poeppel,
2000; Scott et al., 2000; Wise et al., 2001; Price et al., 2003; Thierry,
Giraud & Price, 2003). This enigma has been partially resolved by the
meta-analysis of DeWitt & Rauschecker (2012), which, based on a large
amount of data, clearly associates word-form recognition with anterior ST.
What, if anything, the dorsal stream contributes to language
comprehension is now emerging as a key question. Increasingly, the
computational role of posterior ST in language is understood to pertain to
its role in sensorimotor integration and control (Hickok & Poeppel, 2007;
Rauschecker & Scott, 2009). Recent proposals further emphasize a role for
the dorsal stream in sequence processing and syntax, particularly with
respect to the computation of sentence-internal relations for syntactically
complex sentences (Rauschecker, 2011; Friederici, 2012; Bornkessel-
Schlesewsky & Schlesewsky, 2013).
Here, we present an analysis of speech processing within the dual-stream
architecture of auditory cortex with the aim of clarifying the neural
substrates of auditory word-form recognition. The present work builds on a
previous study from our lab (DeWitt & Rauschecker, 2012). We extend
that work by formal dissection of the roles proposed for Wernicke’s area
and by extensive critical review of results from clinical neuroscience. First,
we consider word-form recognition within the auditory ventral stream (see
2. Word-form recognition and the auditory ventral stream). Emphasis is
given to outstanding questions, particularly with respect to the relationship
between findings from functional imaging (DeWitt & Rauschecker, 2012)
and contemporary quantitative findings from aphasiology and
neurosurgery (see 3. Causal involvement of anterior STG in word
recognition). Next, we perform a historical review of Wernicke’s (1874)
characterization of his sensory speech center, which helps to clarify what
functions should be accounted for in the localization of Wernicke’s area
and evaluation of the Wernicke’s area construct (see 4. A brief history of
Wernicke’s area). The review also highlights some early misconceptions
and oversimplifications about auditory processing that, while reasonable
for the time, continue to color contemporary conceptions of Wernicke’s
area and speech processing. Lastly, we discuss how the disparate functions
Wernicke assigned to his sensory speech center, namely word-form
recognition, supervision of speech production and inner speech, segregate
and embed within the dual-stream model (see 5. Paraphasia, inner speech
and the auditory dorsal stream). Consistent with nonhuman primate
electrophysiology and neuroanatomy, we conclude that word-form
recognition, the principal attribute of Wernicke’s area, should be assigned
to the auditory ventral stream, whereas the regulation of speech production
and inner speech are associated with the auditory dorsal stream.
2. Word-form recognition and the auditory ventral stream
Concurrent with early functional imaging, work in nonhuman primate
electrophysiology made breakthroughs into the functional organization of
nonprimary auditory cortex, identifying two main processing pathways: a
dorsal stream optimized for sensorimotor integration, including spatial
processing, and a ventral stream, optimized for object (or pattern)
recognition (see Fig. 1) (Rauschecker, Tian & Hauser, 1995; Rauschecker,
1997; Rauschecker, 1998a, 1998b; Kaas & Hackett, 1999; Romanski et al.,
1999; Kaas & Hackett, 2000; Rauschecker & Tian, 2000; Tian et al., 2001;
Rauschecker & Scott, 2009). This dual-stream organization resembles the
functional organization of visual cortex (Ungerleider & Mishkin, 1982;
Goodale & Milner, 1992; Van Essen & Gallant, 1994) and suggests greater
homologies between the sensory systems than could previously be
assumed.
Fig. 1
A composite illustration of human auditory cortex and macaque
auditory fields
Relative to the macaque, human auditory cortex is rotated ∼45° off the
anterior-posterior axis of the superior temporal plane (Galaburda &
Sanides, 1980; Rademacher et al., 2001; Fullerton & Pandya, 2007;
Hackett, 2011) with primary auditory cortex [core, Brodmann’s area (BA)
41] located along Heschl’s gyrus (HG) and secondary auditory cortex
(lateral and medial belt, BA 42 and 52, respectively) located in planum
polare (PP) and planum temporale (PT). To facilitate comparisons with the
macaque literature, names of functionally-defined macaque subfields are
shown on a flatmap of human anatomy (core: A1, R, RT, RTp; lateral belt:
CL, ML, AL, RTL; medial belt: CM, MM, RM, RTM) (A). Subfield
delineation is estimated from relative field sizes in the macaque, scaled
with respect to the volume of human core (Penhune et al., 1996;
Rademacher et al., 2001) and functionally localized according to tuning
characteristics (Rauschecker et al., 1995; Chevillet et al., 2011). The
composite figure implies a course for the human ventral and dorsal streams
along the superior temporal plane. Fields exhibiting heightened selectivity
for monkey calls are shown in yellow: lateral belt field AL (Tian et al.,
2001; Tsunada, Lee & Cohen, 2011) and area RTp (Kikuchi, Horwitz &
Mishkin, 2010). For orientation, the cortical patch shown in flatmap (A) is
outlined on the cortical surface (dashed line with scissor markers) (B).
Additional points of reference include the circular sulcus (CS), insular
cortex (Ins),
Current perspectives on speech processing have incorporated the dual-
stream architecture of auditory cortex derived from nonhuman primate
work (Binder et al., 2000; Wise et al., 2001; Scott & Wise, 2004; Hickok
& Poeppel, 2007), making it the consensus view (but see Nelken et al.,
2003; Whalen et al., 2006). The precise course of the auditory ventral
stream, however, remained a question of debate: some authors included in
it posterior STS (Wise et al., 2001; Hickok & Poeppel, 2007), a site
consistent with findings from classical neurology (Penfield & Roberts,
1959; Geschwind, 1970); others rejected posterior STS, concluding word-
form recognition occurs in anterior STG (Mesulam, 1998; Binder et al.,
2000; Scott & Wise, 2004). While posterior STS happens to be ventral to
the Sylvian fissure, the ventral and dorsal streams are defined by cortico-
cortical connections originating in the lateral belt areas of auditory cortex
and by histoarchitectonic criteria (Kaas & Hackett, 2000; Rauschecker &
Tian, 2000). These criteria characterize the posterior ST region in humans
as part of the dorsal stream and anterior STG as part of the ventral stream
(see Fig. 2A,B).
Fig. 2
Anatomical predictions for the site of auditory word-form recognition
(A) In the macaque, communication call processing is strongly associated
with anterior-lateral portions of the superior temporal plane, specifically
areas AL (circled) (adapted from Rauschecker & Tian, 2000). (B) The
putatively homologous human site resides at the anterior-lateral aspect of
Heschl’s gyrus (circled) (adapted from Galaburda & Sanides, 1980). (C)
This site is within the territory originally proposed by Wernicke (shaded
region marked x) (adapted from Wernicke, 1881) but (D) is inconsistent
with the location given for Wernicke’s area by Geschwind (shaded region
marked 4) (adapted from Geschwind, 1969).
In a recent paper, we leveraged the observation of large-scale similarity in
the auditory and visual systems’ functional architectures to address the
problem of localizing auditory word-form recognition within the ventral
stream (DeWitt & Rauschecker, 2012). We reviewed and synthesized
literature bearing on the hypothesis that auditory and visual word
recognition are equivalent problems with similar cortical solutions. This
led us to hypothesize, as have others (Mesulam, 1998; Cohen et al., 2004),
that a cortical region supporting sensory aspects of auditory word
recognition (i.e., an AWFA) should exist with properties comparable to
those identified for the VWFA (Dehaene et al., 2005) and, more generally,
for pattern recognition in the visual ventral stream (Wallis & Rolls, 1997;
Riesenhuber & Poggio, 2002; DiCarlo, Zoccolan & Rust, 2012).
Specifically, this AWFA should demonstrate selectivity for auditory words
(i.e., it should respond more to auditory words than to other sounds).
Further, it should demonstrate invariance to certain acoustical changes
(i.e., its response should be more sensitive to acoustical differences that
affect the phonetic content of utterances than to acoustical differences
which do not).
In our analyses, we assessed selectivity with respect to either acoustically
matched artificial stimuli or non-speech natural stimuli. Invariance was
assessed with respect to adaptation phenomena (Miller, Li & Desimone,
1991), which can be used to probe tolerance for non-category–
transformative physical stimulus deformations and sensitivity to category-
transformative deformations (Grill-Spector & Malach, 2001). Further, the
hierarchical organization of auditory cortex implies increasing
representational complexity along the auditory ventral stream
(Rauschecker et al., 1995; Binder et al., 2000; Kaas & Hackett, 2000;
Rauschecker & Tian, 2000; Rauschecker & Scott, 2009; Chevillet,
Riesenhuber & Rauschecker, 2011) similar to that found along the visual
cortical hierarchy (Hubel & Wiesel, 1962; Riesenhuber & Poggio, 2002).
Therefore, where possible, we assessed processing for phoneme, word, and
phrase stimuli separately. As phoneme recognition is a prerequisite of word
recognition, we hypothesized peak phoneme processing to localize to an
area proximate to primary auditory cortex, relative to the site of peak
processing for words. Phrase processing, in contrast, includes phoneme
and word recognition, but it also strongly engages semantic and syntactic
processing. Accordingly, we hypothesized phrase processing to engage the
sites associated with phoneme and word recognition as well as higher-
order regions of ST. Separate consideration of phoneme, word and phrase
processing, therefore, made the assessment of effects pertaining to word
recognition both more precise and more tractable.
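As a concrete illustration of how selectivity and adaptation (invariance) are commonly quantified from response amplitudes, the following sketch computes two simple indices from hypothetical values; it is a simplified illustration under assumed numbers, not the analysis pipeline of DeWitt & Rauschecker (2012).

def selectivity_index(resp_speech, resp_control):
    """Normalized preference for speech over acoustically matched control sounds."""
    return (resp_speech - resp_control) / (resp_speech + resp_control)

def adaptation_release(resp_category_change, resp_repeated):
    """Release from adaptation: recovery of the response when the phonetic
    category changes, relative to exact repetition of the same stimulus."""
    return (resp_category_change - resp_repeated) / resp_category_change

# Hypothetical response amplitudes in arbitrary units.
print(selectivity_index(resp_speech=1.2, resp_control=0.6))               # ~0.33: speech-selective
print(adaptation_release(resp_category_change=1.0, resp_repeated=0.55))   # ~0.45: category-sensitive

A region behaving like a word-form area should show a positive selectivity index for words and should recover from adaptation when the phonetic category changes, but not when only non-phonetic acoustic details change.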
To quantitatively assess our predictions, we focused on results from
functional brain imaging. To systematically evaluate prior results, we used
an anatomically unbiased coordinate-based meta-analytic approach
(Turkeltaub et al., 2002). The method found functional imaging results of
auditory word recognition to be consistent with principles of hierarchical
processing (see Fig. 3) (DeWitt & Rauschecker, 2012). Results supported a
left-biased, three-stage model, with analysis of phonemes occurring in
mid-STG, lateral to Heschl’s gyrus, word recognition occurring in anterior
STG, and phrase processing beginning in anterior STS (c.f., Miglioretti &
Boatman, 2003). This diverged from the classical model of language
organization (Penfield & Roberts, 1959; Geschwind, 1970) and some
contemporary perspectives (Wise et al., 2001; Hickok & Poeppel, 2007),
which locate word recognition in posterior ST.
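For readers unfamiliar with activation likelihood estimation (ALE), the coordinate-based method of Turkeltaub et al. (2002) models each reported activation focus as a three-dimensional Gaussian and combines the resulting probability maps within and across studies. The sketch below captures only the core computation; the grid size, smoothing width, and foci are illustrative assumptions, and real implementations add permutation-based thresholding.

import numpy as np

def modeled_activation_map(foci, shape=(20, 20, 20), sigma=2.0):
    """Per-study map: at each voxel, the probability that at least one reported
    focus (modelled as an isotropic Gaussian) lies there."""
    grid = np.indices(shape).reshape(3, -1).T        # all voxel coordinates
    prob_none = np.ones(grid.shape[0])
    for focus in foci:
        d2 = np.sum((grid - np.asarray(focus)) ** 2, axis=1)
        prob_none *= 1.0 - np.exp(-d2 / (2 * sigma ** 2))
    return (1.0 - prob_none).reshape(shape)

def ale_map(studies, shape=(20, 20, 20), sigma=2.0):
    """ALE value per voxel: the union of the per-study modelled activation maps."""
    prob_none = np.ones(shape)
    for foci in studies:
        prob_none *= 1.0 - modeled_activation_map(foci, shape, sigma)
    return 1.0 - prob_none

# Two hypothetical studies reporting nearby foci (voxel coordinates).
ale = ale_map([[(5, 10, 10)], [(6, 10, 9), (14, 3, 3)]])
print(ale.max(), np.unravel_index(ale.argmax(), ale.shape))

Voxels where foci from independent studies cluster receive high ALE values, which is how the meta-analysis localizes convergent effects such as the anterior STG peak for word-length stimuli.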
Fig. 3
Meta-analyses of auditory-word processing
Analyses of studies comparing brain response to speech stimuli versus
matched control sounds (A–C), indicative of selectivity for speech sounds,
found a leftward bias and an anterior progression in peak effects with
phoneme-length studies’ peak focus density in left mid-STG (A), word-
length studies’ peak density in left anterior STG (B), and phrase-length
studies’ peak density in left anterior STS (C). Peak density for studies
investigating phonetically specific adaptation (D), indicative of invariant
representation, was found in left mid- to anterior STG. Peak density for
areal specialization studies (E), which compared brain response to speech
stimuli versus other natural non-speech sounds, also indicative of
selectivity for speech sounds, was greatest in left STG. Intensity represents
ALE value. Adapted from DeWitt & Rauschecker (2012).
Prior to the advent of contemporary imaging methods, inference that
posterior ST was the site of auditory word recognition was warranted by
available evidence and methodology. Lesions resulting in auditory
comprehension deficits (as well as poor verbal repetition and paraphasia—
inaccurate word selection during speech; i.e., Wernicke’s aphasia) show
greatest overlap in posterior STG (Robson, Sage & Lambon-Ralph, 2012).
Simple lesion-overlap (density) mapping, however, is spatially biased,
owing to arterial anatomy. For instance, the middle cerebral artery
bifurcates and narrows as it progresses along the Sylvian fissure, likely
increasing the probability of posterior infarcts. Thus, while the center of
mass of most lesions that produce auditory comprehension deficits may be
in posterior STG, this could be an epiphenomenon and comprehension
deficits might be better explained by the anterior extent of these lesions
(c.f., Dronkers et al., 2004). Contemporary methods mitigate spatial bias
through the inclusion of control samples (Bates et al., 2003; Rorden &
Karnath, 2004). These methods utilize variance in symptom severity to
factor out lesion sites that are shared across afflicted individuals but which
do not contribute to task-specific impairments (for additional discussion,
see 3. Causal involvement of anterior STG in word recognition).
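The contrast drawn here between simple lesion-overlap mapping and methods that use control samples and symptom severity can be made concrete with a minimal voxel-based lesion-symptom mapping sketch (in the spirit of Bates et al., 2003). The lesion masks and scores below are fabricated purely to illustrate the logic; real analyses add multiple-comparison correction and covariates such as lesion volume.

import numpy as np
from scipy import stats

def lesion_symptom_map(lesion_masks, scores):
    """For each voxel, a t-statistic comparing symptom scores of patients whose
    lesions include that voxel against patients whose lesions spare it."""
    n_patients, n_voxels = lesion_masks.shape
    t_map = np.full(n_voxels, np.nan)
    for v in range(n_voxels):
        lesioned = scores[lesion_masks[:, v] == 1]
        spared = scores[lesion_masks[:, v] == 0]
        if len(lesioned) > 1 and len(spared) > 1:
            # Positive t means patients lesioned at this voxel score worse.
            t_map[v], _ = stats.ttest_ind(spared, lesioned)
    return t_map

# Toy data: six patients, four voxels; lower score = worse comprehension.
masks = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 1],
                  [1, 1, 0, 0],
                  [0, 0, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 1]])
scores = np.array([40.0, 55.0, 35.0, 90.0, 85.0, 88.0])
print(lesion_symptom_map(masks, scores))

Because every patient contributes to the statistic at every voxel, voxels that are frequently lesioned for purely vascular reasons but do not modulate the deficit are not flagged, which is the bias-mitigation property described above.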
In the first decade and a half of functional imaging, as the field and its
methodology matured, reservation in interpretation and deference to well-
established theories was prudent. Increasingly, however, functional
imaging indicated that revisions to the classical model were required
(Mazziotta et al., 1982; Petersen et al., 1988; Wise et al., 1991; Démonet
et al., 1992; Binder et al., 1994; Binder et al., 1996; Binder et al., 1997;
Mummery et al., 1999; Belin et al., 2000; Binder et al., 2000; Scott et al.,
2000; Wise et al., 2001; Belin, Zatorre & Ahad, 2002). As discussed,
results from nonhuman primates were prompting revisions in
understanding of the functional and anatomical organization of auditory
cortex (Kaas & Hackett, 2000; Rauschecker & Tian, 2000; Tian et al.,
2001). These results provided a framework for amendment of models of
speech processing (Mesulam, 1998; Binder et al., 2000; Hickok &
Poeppel, 2000; Wise et al., 2001; Boatman, 2004; Hickok & Poeppel,
2004; Scott & Wise, 2004; Scott, 2005; Hickok & Poeppel, 2007). While
the revised models of speech processing generally adopted a dual-stream
framework, discrepancy persisted about the site of word-form recognition
within the auditory ventral stream. Some authors maintained a site close to
canonical Wernicke’s area (Hickok & Poeppel, 2000; Wise et al., 2001;
Hickok & Poeppel, 2007). Others adopted an anterior STG localization
(Binder et al., 2000; Scott & Johnsrude, 2003; Wise, 2003; Scott & Wise,
2004). Although evidence accumulated on the side of anterior localization,
skepticism remained (Hickok, 2010). Our meta-analysis systematically and
quantitatively weighed two decades of published findings with bearing on
the site of auditory word-form recognition and concluded the
preponderance of evidence supports anterior STG localization.
In the same time period, work on the visual system’s analogous problem,
visual word-form recognition, progressed more effectively. There, an area
within the visual ventral stream, the eponymously named visual word-form
area (VWFA), has come to be widely regarded and intensively studied as
the crucial site for visual word-form recognition (Cohen et al., 2000;
McCandliss, Cohen & Dehaene, 2003; Dehaene et al., 2005). Although
interpretational questions remain (Baker et al., 2007; Dehaene & Cohen,
2011; Price & Devlin, 2011), localization of the VWFA within ventral
occipitotemporal cortex (VOT) is now largely uncontroversial.
Identification of this VOT site, analogous to macaque infero-temporal
cortex, permitted detailed, mechanistic investigations to proceed,
producing a prolific literature (Cohen et al., 2004; Binder et al., 2006;
Gaillard et al., 2006; Baker et al., 2007; Vinckier et al., 2007; Turkeltaub
et al., 2008; Glezer, Jiang & Riesenhuber, 2009; Dehaene et al., 2010;
Braet, Wagemans & Op de Beeck, 2012; Rauschecker et al., 2012;
Wandell, Rauschecker & Yeatman, 2012). Resolving debate about the
AWFA’s location within ST may similarly position the field to make
advances in unlocking the nature of representation within the auditory
ventral stream.
3. Causal involvement of anterior STG in word recognition
Although an unprecedented amount of evidence has now been amassed indicating the involvement of anterior STG in word recognition, there remains a
paucity of direct evidence from neurological and neurosurgical studies to
conclude that anterior STG’s involvement is causal. Some reports provide
compelling evidence in support of causality (Malow et al., 1996;
Hamberger et al., 2001; Hamberger et al., 2003; Miglioretti & Boatman,
2003; Dronkers et al., 2004; Hamberger et al., 2005; Boatman, 2006;
Hamberger et al., 2007; Matsumoto et al., 2011; Rogalski et al., 2011;
Kümmerer et al., 2013). A direct relationship, however, has yet to be
demonstrated between the anatomical location of auditory word-form
recognition (indicated by single-subject brain imaging) and behavioral
impairment resulting from surgical procedures, such as reversible electrical
interference or clinical resection, as has been shown for the VWFA
(Gaillard et al., 2006) and the fusiform face area (Parvizi et al., 2012).
The relative scarcity of causal evidence for anterior STG involvement in
word recognition is partly attributable to the typical reliance of
intraoperative language mapping on outcome measures that assess non-
auditory processing, namely single word reading, visual object naming and
speech arrest (Hamberger et al., 2007; Sanai, Mirzadeh & Berger, 2008).
Those studies that assessed auditory processing typically investigated
acoustic-phonetic feature detection (Boatman, Lesser & Gordon, 1995;
Boatman et al., 1997; Miglioretti & Boatman, 2003; Boatman, 2006) or
sentence comprehension (Malow et al., 1996; Hamberger et al., 2001;
Hamberger et al., 2003; Miglioretti & Boatman, 2003; Hamberger et al.,
2005; Hamberger et al., 2007; Matsumoto et al., 2011). Though important
levels of inquiry, neither level specifically assesses word-form recognition.
The former assesses the stage prior to word-form recognition (i.e.,
phoneme recognition) while the latter assesses phrase comprehension,
which includes semantic and syntactic processing. More refined methods
(Miglioretti & Boatman, 2003; Hickok et al., 2008; Goll et al., 2010;
Rogalski et al., 2011; Bormann & Weiller, 2012; Thothathiri, Kimberg &
Schwartz, 2012) will be required in future investigations for the specific
evaluation of auditory word-form recognition. It should be noted, however,
that resection of sites implicated in auditory sentence comprehension by
electrical interference does increase the incidence of post-operative
impairment in auditory comprehension (Hamberger et al., 2005). Although
the sites resected in that study were not reported in detail, similar studies
report a greater likelihood of impairment on auditory sentence
comprehension from stimulation of anterior ST (Hamberger et al., 2001;
Hamberger et al., 2003; Miglioretti & Boatman, 2003; Hamberger et al.,
2007).
Anterior temporal lobectomies are relatively common. Rarely, however, do
studies report postoperative language decline. This might be attributable to
three factors. First and foremost, the candidate AWFA extends from 45 mm
distal of the temporal pole to 70 mm distal (DeWitt & Rauschecker, 2012).
Standard resections typically remove 35–55 mm of the anterior temporal
lobe, with the majority of resections removing 45 mm or less, sparing
much of the area in question (Hermann, Wyler & Somes, 1991; Schwartz
et al., 1998; Seidenberg et al., 1998; Pataraia et al., 2005; Alpherts et al.,
2008; Helmstaedter et al., 2008; Kho et al., 2008; Bidet-Caulet et al.,
2009; Binder et al., 2011). Further, resections are sometimes performed
differentially, sparing a greater portion of STG relative to the middle and
inferior temporal gyri, also decreasing the likelihood of resections
including the candidate AWFA (Schwartz et al., 1998; Pataraia et al., 2005;
Alpherts et al., 2008; Bidet-Caulet et al., 2009; Binder et al., 2011).
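To make the spatial relationship concrete (an illustrative calculation based
only on the distances cited above, not on reported outcome data): a resection
extending 45 mm from the temporal pole reaches only the anterior boundary of
the 45–70 mm candidate region and thus removes essentially none of it, while
even a 55 mm resection removes only the anterior 10 mm of the region's
roughly 25 mm extent.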
Second, intraoperative language mapping may indicate language function
and, thereby, spare the candidate AWFA from resection. Third, there is a
dearth of reported outcomes at short-term follow-up (t < 6 weeks).
Researchers tend instead to report outcomes for longer recovery durations
(t > 6 months) (Hermann et al., 1991; Schwartz et al., 1998; Davies, Risse
& Gates, 2005; Pataraia et al., 2005; Bidet-Caulet et al., 2009). Given the
relative competency of the non-dominant hemisphere during the
incapacitation of the dominant hemisphere (Hickok et al., 2008),
compensatory plasticity in the contralateral hemisphere could account for a
low incidence of postoperative impairment at long-term follow-up, even
when resections include the candidate AWFA. Interestingly, classical
models have a similar evidentiary problem. There is a dearth of evidence
associating posterior ST resection with auditory comprehension deficits.
Indeed, when studies do report posterior ST resection, they often emphasize
the absence of language decline (Petrovich et al., 2004; Sarubbo et al.,
2012).
Analogous to the temporal lobectomy literature, studies of aphasia lesion
mapping have not traditionally emphasized the role of anterior ST in
auditory word comprehension. Simple density mapping of Wernicke’s
aphasia lesions finds the center of mass of lesions to be in posterior ST, but
the lesions commonly extend into anterior STG (Ogar et al., 2011; Robson
et al., 2012). Similarly, lesion mapping that utilizes both control samples
and continuous symptom severity data implicates both anterior and
posterior ST in auditory sentence comprehension (Saygin et al., 2003;
Dronkers et al., 2004). Further, with respect to comprehension deficits, this
work expressly dissociates posterior STG from surrounding regions:
lesions of posterior STG were not found to affect comprehension.
Importantly, work specifically investigating auditory word recognition (as
opposed to sentential comprehension) exclusively implicates anterior ST
(Rogalski et al., 2011). In patients for whom auditory word recognition is
spared, deficits in auditory sentence comprehension, which can therefore
be attributed to deficits in syntactic processing, are associated with lesions
of posterior ST and inferior parietal lobule (IPL) (Thothathiri et al., 2012).
This result is consistent with the view that anterior ST must be spared for
auditory word recognition to be intact. When either single-word auditory
comprehension is factored out (Fridriksson et al., 2010) or general
auditory comprehension is spared (Buchsbaum, Padmanabhan & Berman,
2011b), word repetition deficits are associated with lesions of posterior ST
and IPL. Again, this is consistent with the view that sensory aphasias that
spare auditory word recognition should spare anterior ST. These results
also dissociate posterior ST lesions from auditory word recognition
deficits. Finally, when considering subcortical lesions, the integrity of
tracts associated with the auditory ventral stream is closely associated with
auditory comprehension, whereas the integrity of tracts associated with the
dorsal stream is associated with vocal repetition (Kümmerer et al., 2013).
While there is a sizable literature associated with pure word deafness (see
7. Appendix A), it is nonetheless a rare condition (Buchman et al., 1986;
Poeppel, 2001). Consequently, there are no group-level lesion-mapping
studies of the disorder (i.e., only case studies). Well-documented cases of
patients with circumscribed cortical lesions are similarly rare. Recent
literature, utilizing modern brain imaging, provides two general
impressions (Praamstra et al., 1991; Engelien et al., 1995; Clarke et al.,
2000; Fung, Sue & Somerville, 2000; Wang et al., 2000; Kaga et al., 2004;
Stefanatos, Gershkoff & Madigan, 2005; Iizuka et al., 2007; Miceli et al.,
2008; Kim et al., 2011; Slevc et al., 2011; Palma et al., 2012; Suh et al.,
2012). First, while bilateral ST lesions are common (Geschwind, 1965;
Buchman et al., 1986; Poeppel, 2001), left hemisphere lesions can be
sufficient (Stefanatos et al., 2005; Slevc et al., 2011; Palma et al., 2012).
Second, while some cases involve lesions of mid- to posterior ST (Kim et
al., 2011; Slevc et al., 2011) and others involve mid- to anterior ST
(Engelien et al., 1995; Stefanatos et al., 2005; Iizuka et al., 2007; Palma et
al., 2012), the commonly affected region appears to be mid STG, lateral to
Heschl’s gyrus—the putative site of phoneme recognition (Boatman et al.,
1995; Boatman et al., 1997; Miglioretti & Boatman, 2003; Liebenthal et
al., 2005; Boatman, 2006; Liebenthal et al., 2010; DeWitt & Rauschecker,
2012). In sum, clinical results are highly suggestive of a causal role for
mid- to anterior STG in word recognition. What remains to be
demonstrated, however, is direct correspondence between results from
fMRI and behavioral impairment following lesion or functional
inactivation.
4. A brief history of Wernicke’s area
In the 1860s, Paul Broca established the presence of a motor speech center
in the left inferior frontal gyrus (IFG) (for review, see Dronkers et al.,
2007). Reflecting on Broca’s observations, Carl Wernicke (1874)
postulated a complementary sensory speech center, for the storage and
collection of auditory images (representations) of speech sounds—referred
to today as Wernicke’s area. Initially, Wernicke recognized STG in toto as
the site of auditory imagery and did not attempt to specifically localize
auditory word representations within STG. Rather, he noted only that a
circumscribed region ought to exist somewhere within STG (see Fig. 2C),
analogous to the circumscribed speech-motor region within IFG:
The first temporal gyrus [STG], which is sensory in nature, may
be regarded as the center of acoustic images…[It] may be
regarded as the central terminal of the acoustic nerve, and the
first frontal gyrus [IFG], including Broca’s area, as the central
terminal of the nerves controlling the speech musculature
(Wernicke, 1874/1977, p. 103).
This view is clarified and reiterated in subsequent passages.
As details of the ascending auditory tracts and the hierarchical
organization of auditory cortex were not yet known in 1874, Wernicke
assumed direct innervation of the greater extent of STG by ascending
fibers. His model, therefore, lacked an equivalent to primary auditory
cortex and a theory of representational transformation along auditory
cortex, resulting in large inaccuracies. Wernicke, for reasons supported by
behavioral observations but anatomically flawed, nonetheless posited that
only a portion of STG functions as a sensory speech center:
The area containing acoustic imagery…is not identical to the
broad radiation of the acoustic nerve itself, since complete loss of
acoustic imagery with intact bilateral hearing has been observed
in aphasia…In spite of destruction of the central acoustic
radiation, which carries the sounds of words, perception of noise
and musical tone would still be intact (Wernicke, 1874/1977, p.
105).
Wernicke implies the functional consequence of incomplete
deafferentation of STG is auditory agnosia. Within the context of sensory
aphasia, according to Wernicke, the prominent feature is verbal auditory
agnosia (word deafness). Wernicke describes similar consequences for
cortical lesions:
When…the cortex of the first temporal convolution [STG] is
destroyed, memory for the acoustic images designating…objects
is erased, though memory for concepts may continue existing in
full clarity. This is because the acoustic image of the name for the
concept of an object is generally incidental to the concept,
whereas palpable, tangible imagery is intrinsic (Wernicke, 1874,
p. 22).1
Wernicke is clearly dissociating word-form representation from semantic
representation. He, therefore, principally characterizes his sensory speech
center as an AWFA.
Wernicke subsequently ascribes a secondary function to the sensory speech
center: a corrective role in the activation of motor representations during
speech production:
Apart from lack of understanding, the patient [with sensory
aphasia] has aphasic phenomena in speaking, owing to an
absence of unconscious correction exerted by the speech sound
image (Wernicke, 1874, p. 23).2
The “aphasic phenomena in speaking” to which Wernicke refers are
paraphasias—Kussmaul (1877) had yet to coin the term—which
commonly co-occur with auditory comprehension deficits. At the time, as
is clear from later writings (Wernicke, 1886/1977), all the cases of sensory
aphasia that Wernicke had seen to date included paraphasia. Thus,
Wernicke’s ascription of a corrective role to his sensory speech center
reflects both a clinically motivated desideratum and the assumption that
only a single functional module is lesioned in Wernicke’s aphasia.
Later works include five main addenda (Wernicke, 1886/1977,
1906/1977). First, responding to Kussmaul (1877) and Lichtheim (1885),
Wernicke discussed pure word deafness (see 7. Appendix A), which he
referred to as subcortical sensory aphasia. He attributed pure word
deafness to deafferentation of the sensory speech center. Second, he
developed his notion of the corrective influence (during word selection)
exerted by speech sound imagery on speech motor imagery. Over
development, he argued, the repeated association of auditory and motor
word representations conjoins them into “word-concept” representations
(c.f., Lichtheim, 1885), which form the basis of inner speech (reviewed by
Geva et al., 2011). Wernicke believed the inner-speech faculty is spared in
acquired pure word deafness, explaining the absence of paraphasia. Third,
he expressly localized his sensory speech center to left STG, something
only implied previously. He also, however, allowed that transient aphasia
(recovery) might be explained by plasticity in right STG. Fourth, he
circumscribed the portion of STG posited to contain his sensory speech
center. Citing “numerous pathological findings at hand,” but without
identifying them, he describes the center as being confined to the
“posterior third of half of [sic]” STG (p. 235) and “an adjoining strip” of
middle temporal gyrus (Wernicke, 1906/1977, p. 225). As Wernicke
reproduced and endorsed the anatomical diagrams of Von Monakow and
Déjérine in his section on neuroanatomy (p. 272), “numerous pathological
findings” may have been an allusion to their work. Lastly, his views of the
relevance of his sensory speech center to written comprehension, which
were ambivalent in 1874, evolved (see 8. Appendix B). Wernicke’s
ultimate position was that the sensory speech center was essential for
orthography-to-phonology mapping (i.e., phonological reading) and that
this was attributable to the center’s role in inner speech.
In the century following Wernicke’s observations, Wernicke’s area was
increasingly understood to be limited to the posterior third of STG with
various formulations about which adjacent cortical regions should be
included as well (for reviews, see Bogen & Bogen, 1976; Rauschecker &
Scott, 2009). In the 1960s, Geschwind revived the Wernicke-Lichtheim
model of aphasia (reviewed by Catani & Mesulam, 2008; Eling, 2011),
presenting the most focal interpretation, including only the most posterior
aspect of STG (see Fig. 2D).
5. Paraphasia, inner speech and the auditory dorsal stream
Where is Wernicke’s area? Answering this question today—with the
benefit of far greater understanding of neuroanatomy and cortical
processing than either Wernicke or Geschwind had access to—we might
conclude that the functions Wernicke subsumes within a single area are
actually performed by multiple cortical areas (c.f., Goldstein, 1927, 1948;
Mesulam, 1998; Wise et al., 2001). The hypothesis most strongly
supported by available empirical data for the location of Wernicke’s AWFA
is anterior STG (DeWitt & Rauschecker, 2012). This region, however, is a
strong candidate neither for encoding representations that resemble
Wernicke's word-concepts (i.e., inner speech) nor for performing the
corrective function Wernicke ascribes to them.
Cortical monitoring of self-produced speech and the correction of speech
motor programs is most parsimoniously viewed as a dorsal-stream
function (Wise et al., 2001; Hickok & Poeppel, 2007; Rauschecker &
Scott, 2009). To coordinate speech production, motor control theory
(Guenther, 1994; Rauschecker & Scott, 2009; Golfinopoulos, Tourville &
Guenther, 2010; Rauschecker, 2011; Hickok, 2012) posits the mapping of
auditory representations of self-produced speech sounds into the frame of
reference of the speech articulators (Cohen & Andersen, 2002; Dhanjal et
al., 2008). Multimodal articulator-encoded speech representations are then
reconciled with expectations, derived from efference copy, of the intended
consequences of the activated motor representation. Finally, the difference
between expectation and feedback (error) is transmitted to frontal cortex
and used in updating motor output. The temporo-parietal sites most
strongly associated with auditory feedback and speech production are
posterior PT, posterior STG, and SMG (Hamberger et al., 2003; Towle et
al., 2008; Golfinopoulos et al., 2010; Takaso et al., 2010; Zheng, Munhall
& Johnsrude, 2010; Golfinopoulos et al., 2011), regions associated with
the auditory dorsal stream.
Accordingly, paraphasia could result from dorsal-stream lesions that
disrupt circuitry involved in rectifying unintended output, hypothetically,
even prior to overt speech production (c.f., Lichtheim, 1885). Consistent
with this, lesion mapping associates paraphasia with posterior ST and IPL
(Buchsbaum et al., 2011a). Wernicke's theory of paraphasia is that word
selection requires integrated auditory-motor representations (word-
concepts), which develop through repeated association during speech
production (c.f., Pulvermüller, 1999; Garagnani, Wennekers &
Pulvermuller, 2007). This is reminiscent of the articulator-encoded speech
representations posited for the dorsal stream. Wernicke viewed word-
concept representation as the basis for inner speech. Localizing inner
speech on the basis of articulatory rehearsal in the phonological loop
(Baddeley, 2003) indicates a posterior ST locus (Buchsbaum et al., 2005).
Similarly, localization based on covert rhyme and homophone judgment
indicates an IPL locus (Geva et al., 2011). Thus, the qualities Wernicke
associated with paraphasia (i.e., word-concepts and inner speech) suggest
dorsal-stream localization. Further, phonological reading, which Wernicke
also associates with his speech center via inner speech, also localizes to
posterior ST and IPL (see 8. Appendix B).
In a dispute with Kussmaul over terminology for what is now called
Wernicke’s aphasia, Wernicke said:
“Word-deafness” describes only one part of that which we see as
an indivisible, unitary picture: for in addition to their word-
deafness, such patients are also always aphasic [paraphasic]
(Wernicke & Friedlander, 1883/1977, p. 171).
Importantly, Wernicke is speaking of sensory aphasia resulting from a
cortical lesion. When Wernicke later acknowledged pure word deafness
(Wernicke, 1886/1977), he referred to it as subcortical sensory aphasia.
Thus, Wernicke never entertained the possibility that there could be
multiple speech centers within ST, each optimized for different functions
—and furthermore that sensory aphasia (i.e., Wernicke’s aphasia) might
result from extensive lesions, disrupting multiple cortical modules (see 8.
Appendix B).
Freud (1891), citing a case in which a meningioma adjacent to STG caused
pure word deafness, concluded the disorder was not due to subcortical
lesion. This, he argued, could be reconciled with cases in which cortical
lesions produced Wernicke’s aphasia through the assumption that pure
word deafness was attributable to “incomplete lesions” of Wernicke’s area.
Goldstein (1927, 1948) recognized a cortical locus for pure word deafness
—though he acknowledged subcortical loci as well—and dissociated
cortical regions specialized for auditory word-form representation and
inner speech. From consideration of historical cases (Henschen, 1918;
Poetzl, 1919), Goldstein (1948) attributed pure word deafness to lesions of
“the middle part of the left first temporal convolution [STG]…a region
close to Heschl’s area” (p. 222). Localization of inner speech, he felt,
could not yet be decided. Nonetheless, he speculated posterior STG and
adjacent areas (i.e., planum temporale, insula and IPL) were involved.
While acknowledging Goldstein’s observations, Geschwind (1970)
rejected Goldstein’s dissociation of auditory word-form recognition and
inner speech. Instead, while Geschwind (1965) correctly maintained mid-
(or anterior) STG was a “major outflow” of primary auditory cortex, he
surmised that its lesion would merely disconnect posterior STG from
primary auditory cortex, a view which lacks support from modern
neuroanatomy and is impoverished with respect to representational
transformation in cortical processing.
6. Conclusions
Wernicke originally proposed a site within left STG to subserve auditory
word-form recognition. On the basis of post-mortem case studies, classical
neurology came to understand the location of “Wernicke’s area” to be
within posterior STG (and adjacent areas of cortex). Wernicke posited a
secondary function for his sensory speech center, namely the maintenance
of correct motor output. In contrast, work on speech processing in humans
with functional neuroimaging (consistent with electrophysiological work
on the processing of species-specific vocalizations in nonhuman primates)
has increasingly come to implicate left anterior STG as the site of auditory
word-form recognition. Although causal involvement in word-form
recognition is yet to be specifically demonstrated for this site, quantitative
neurological and neurosurgical investigations support such a role.
Similarly, contemporary understanding of auditory cortex associates
speech-motor control with posterior ST. Wernicke’s area, functionally
defined, therefore appears to consist of two areas: an AWFA in anterior
STG and an “inner-speech area” in posterior STG/IPL. This critical
reappraisal of speech processing in auditory cortex and, specifically, of the
Wernicke’s area construct suggests a new framework for the assessment
and diagnosis of sensory aphasias, as well as new procedures for the intra-
operative mapping of language function.
Highlights
• Studies in monkeys have established a dual-stream model for
auditory cortex.
• Recent work affirms close homologies between human and monkey
cortex.
• Classical “Wernicke’s area” has both dorsal- and ventral-stream
components.
• Anterior STG, part of the ventral stream, supports auditory word-
form recognition.
• Posterior ST/IPL, part of the dorsal stream, support functions of
“inner speech.”
Acknowledgements
We thank Anna Seydell-Greenwald for assistance with historical research.
This work was supported by an award from the William Orr Dingwall
Foundation (to I.D.), National Science Foundation Grants BCS-0519127
and OISE-0730255 (to J.P.R.), National Institute on Deafness and Other
Communication Disorders Grant 1RC1DC010720 (to J.P.R.) and National
Institute on Neurological Disorders and Stroke Grant 2R56NS052494 (to
J.P.R.).
Appendix A
Further historical notes on pure word deafness
If we are to believe Wernicke (Wernicke & Friedlander, 1883/1977),
Kussmaul (1877) coined the term “word deafness” (“Worttaubheit”). He
used it to describe selective deficits in auditory word comprehension, as
distinct from (generalized) deafness. Contrary to common citation (e.g.,
Auerbach et al., 1982; Coslett, Brashear & Heilman, 1984), Kussmaul
neither used the term “pure word deafness” (“reine Worttaubheit”) nor, as
noted by Wernicke (Wernicke & Friedlander, 1883/1977), described a case
that was uncomplicated by other maladies (e.g., paraphasia). Kussmaul’s
usage, however, implied what came classically to be regarded as pure word
deafness. For instance, he described “word deafness with paraphasia,”
which implies a dissociability of components. Wernicke (1886/1977)
attributed the first case description of pure word deafness (though not
referring to it as such) to Lichtheim (1885), whose writing appeared
subsequent to Kussmaul’s (1877) (for discussion, see Eling, 2011).
Lichtheim variously described the condition as “isolated word deafness”
and “outer commissural word deafness.” “Pure word deafness” was in use
by 1889 when Starr (1889) used it to describe auditory comprehension
deficits unaccompanied by impairments in reading, writing and speaking,
consistent with what was implied by Kussmaul’s usage. Liepmann (1898)
—who provided the first anatomical description of pure word deafness—is
also sometimes cited as coining the term. Both the chronology and his exact
terminology (“reine Sprachtaubheit”), however, argue against that attribution.
Initially, Wernicke (Wernicke & Friedlander, 1883/1977) disputed the
existence of pure word deafness, arguing that word deafness (assuming a
lesion of his sensory speech center) was always accompanied by
paraphasia. Consequently, he conjectured lesions prior to his sensory
speech center would cause “primary deafness with no trace of aphasia” (p.
104)—though other remarks suggest auditory agnosia would result from
cortical deafferentation (see 4. A brief history of Wernicke’s area)
(Wernicke, 1874/1977). Subsequent to Lichtheim’s case, Wernicke
(1886/1977) claimed to have “never doubted the theoretical possibility” of
pure word deafness (p. 185). Wernicke’s later works (1886/1977,
1906/1977) also revised the theoretical consequence of lesions prior to his
speech center. He now theorized such lesions could result in pure word
deafness from selective destruction of ascending fibers, hypothetically
affecting only those projections into left temporal cortex that carry the
limited portion of the auditory spectrum over which speech sounds are
conveyed. Owing to Wernicke’s initial hesitation, Kussmaul (1877) and
Lichtheim (1885) may be credited with conception of the disorder, if not
the term itself.
Notably, “pure” has taken a different emphasis in contemporary usage
(Buchman et al., 1986; Polster & Rose, 1998; Poeppel, 2001; Pinard et al.,
2002). Today, pure is often regarded as expressly connoting the sparing of
non-verbal sound comprehension, as opposed to connoting a lack of
additional aphasic complications. This is chiefly a matter of emphasis but
it carries a subtle distinction. By either usage, patients with pure word
deafness have auditory word comprehension deficits; they do not present
with other language deficits (e.g., paraphasia or alexia); and, they have
residual hearing. Modern usage further distinguishes between residual
hearing that simply involves the ability to detect and discriminate sounds
(auditory agnosia) and residual hearing in which non-verbal sound
comprehension is spared (pure word deafness). Wernicke, Kussmaul and
Lichtheim’s consideration of residual sound processing did not overtly
distinguish between low-level perception and the comprehension of
spectro-temporally complex non-verbal sounds (e.g., environmental
sounds or music). Indeed, Wernicke’s theory of pure word deafness
includes cortical deafness for the speech-related portion of the human
frequency range. Therefore, while current usage is not starkly inconsistent
with classical usage, its emphasis and entailments are a modern
innovation. Under contemporary usage, cases of pure word deafness are
very rare (Buchman et al., 1986; Polster & Rose, 1998). In the present
analysis, we are concerned merely with the classical dissociation.
Appendix B
Written comprehension
Wernicke initially describes his sensory speech center as non-essential
for written comprehension in readers who have attained fluent whole-word
reading:
The individual who has been exposed to minimal training in
reading may comprehend the written word only after it has been
heard. But the educated person…may be able to grasp general
meaning after a glance at the page without awareness of the
individual words…The first case presents symptoms of alexia
apart from his aphasia. The second…reveals intact
comprehension of all written material in striking contrast to his
lack of comprehension of the spoken word (Wernicke, 1874/1977,
pp. 108–109).
Although lacking precision and nuance, Wernicke is clearly differentiating
phonological reading (i.e., “sounding words out”) from whole-word
reading. In the case of the former but not the latter, he posits the need for
acoustic images to mediate access to meaning.
Subsequent to Grashey (1885), Wernicke (1886/1977) substantially revised
his views on written speech. He now stated, without qualification, that a
lesion of the sensory speech center causes both alexia and agraphia.
Wernicke’s final work (1906/1977), however, amended his position again.
He re-acknowledged whole-word reading, both for fluent readers of
alphabetic orthographies as well as for readers of logographic
orthographies. However, he regarded whole-word reading as sufficiently
minor in contribution (relative to phonological reading) to be negligible
and, therefore, dismissed it. Crucially, Wernicke viewed the dependency of
written comprehension and of writing upon his sensory speech center as
a function of inner speech and its role in phonological decoding (i.e.,
orthography-to-phonology mapping).
Contemporary dual-stream theory of reading posits a role for posterior ST
and IPL, part of the auditory dorsal stream, in phonological reading
(Jobard, Crivello & Tzourio-Mazoyer, 2003). Consistent with the
developmental transition from predominant reliance on phonological reading
to whole-word reading, the engagement of IPL in reading
diminishes with reading fluency (Turkeltaub et al., 2003). Activation in the
region also correlates with measures of phonemic awareness (Turkeltaub
et al., 2003), key to the acquisition of fluent reading (Shaywitz, 1998).
Further, IPL lesions strongly affect phonological reading (Philipose et al.,
2007; Brambati et al., 2009; Wilson et al., 2009; Linkersdörfer et al.,
2012). Similarly, as literacy increases, posterior STS shows greater
engagement during reading (Dehaene et al., 2010). Its lesion is also
associated with deficits in phonological reading (Silani et al., 2005;
Brambati et al., 2009).
With respect to aphasia, deficits in reading comprehension are often
associated with Wernicke’s aphasia (Geschwind, 1970). We, however, are
unaware of any empirical work that has specifically investigated the
likelihood of reading deficits given auditory comprehension deficits and
paraphasia. As case reports show dissociability (Ellis, Miller & Sin, 1983),
reading deficits observed in individuals with Wernicke’s aphasia may
reflect the typically large lesion volume of middle cerebral artery accidents
associated with Wernicke’s aphasia, which frequently involve anterior ST,
posterior ST and IPL (Robson et al., 2012). That is, patients presenting
with both auditory and reading comprehension deficits may have large
lesions, disrupting multiple cortical modules. Thus, it remains unclear
whether lesions disrupting auditory word-form recognition or inner speech
necessarily also disrupt reading comprehension.
In summary, the aspects of reading comprehension Wernicke associated
with his sensory speech center relate to phonological reading via inner
speech. Both phonological reading and inner speech are functions
neuroanatomically associated with the auditory dorsal stream. Though
deficits in auditory and written comprehension are often observed together,
the dissociability of their neural substrates (or aspects of them) and their
precise neuroanatomy require further study.