After our initial recap of the wit and wisdom of Prof. Richard Wise, we learned that:
- Modern tractography has taught us nothing new.1
- "Broca's area and Wernicke's area have expanded like Balkan states." [uh, wouldn't that similie imply fragmentation, not expansion?]
- We should "throw out most of the literature from stroke aphasia."
- fMRI studies of language have "fostered confusion, not understanding."
- Hickok & Poeppel "have a job for life."
fMRI Advantages3
- doesn't need radioactive tracers
- better spatial resolution (1-2 mm)
- event-related designs
- can intermix trial types
fMRI Disadvantages
- noisy – bad for auditory studies (although sparse-sampling methods help)
- too claustrophobic for some people
- poor imaging of OFC and ATL (due to susceptibility artifact4, although distortion-correction methods are being developed)
- motion artifact with speaking aloud (although many studies have overcome this)
PET Advantages
- quiet – good for auditory studies
- more open and “naturalistic”
- no susceptibility artifact
- speaking aloud not a worry
PET Disadvantages
- radioactive tracers2
- worse spatial resolution (4-5 mm)
- poor temporal resolution (brain activity is integrated over a ~40 sec period)
- stuck with blocked designs (see the sketch below)
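The design-related points above (event-related fMRI designs that can intermix trial types vs. PET's blocked designs tied to a ~40 sec integration window) are easiest to see in a concrete trial schedule. Below is a minimal, purely illustrative sketch in plain Python; the timings, condition names, and function names are made up for the example, not taken from any of the studies discussed here.

```python
import random

# Illustration only: PET integrates activity over a ~40 sec window per scan,
# so each scan must contain a single condition (a "block").
def blocked_schedule(conditions, block_len_s=40, trial_len_s=4):
    """One condition per block; trial types cannot be intermixed."""
    schedule = []
    for cond in conditions:
        n_trials = block_len_s // trial_len_s
        schedule.extend([cond] * n_trials)
    return schedule

# Event-related fMRI can resolve responses to individual trials,
# so trial types can be randomly intermixed within a single run.
def event_related_schedule(conditions, n_trials_per_cond=10, seed=0):
    """Randomly interleave trial types across the run."""
    rng = random.Random(seed)
    schedule = [c for c in conditions for _ in range(n_trials_per_cond)]
    rng.shuffle(schedule)
    return schedule

conds = ["speech", "rotated_speech"]
print(blocked_schedule(conds))        # 10 x speech, then 10 x rotated_speech
print(event_related_schedule(conds))  # speech and rotated_speech intermixed
```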
(4a) Spitsyna et al. 2006.
The stimuli and contrasts in this experiment were as follows:
Intelligible spoken and written narratives were contrasted with unintelligible auditory (spectrally rotated speech) and visual (false font) stimuli matched closely to the narratives in terms of auditory and visual complexity. Thus, during data analyses, the contrasts of the intelligible with the modality-specific unintelligible conditions were intended to demonstrate processing of speech and written language beyond the cortical areas specialized for early auditory and visual processing.

The results of these contrasts? Prominent activations in the ATL for both speech vs. rotated speech and written text vs. false font (greater in the L hemisphere than in the R). Another large cluster of activity was located posteriorly, at the junction of the L temporal, occipital, and parietal (TOP) cortices.
Fig. 1 (Spitsyna et al. 2006). The results of analyses within wavelet space of two contrasts: speech with rotated speech, and written text with false font. The estimated effect sizes, both positive (red–yellow) and negative (blue), expressed as Z-scores without a cutoff threshold, are displayed on axial brain slices from ventral to dorsal. The strongest positive effect size for each contrast was observed within the left anterior temporal cortex (arrows).
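To make the contrast logic concrete, here is a minimal sketch of how the two comparisons could be written as contrast weight vectors over the four conditions in a standard GLM framework. This is not the authors' actual pipeline (their analysis was performed in wavelet space, as the caption notes); the condition ordering and beta values below are hypothetical.

```python
import numpy as np

# The four conditions in Spitsyna et al. (2006); the ordering here is arbitrary.
conditions = ["speech", "rotated_speech", "written_text", "false_font"]

def contrast(positive, negative, conditions=conditions):
    """Build a contrast weight vector: +1 for the intelligible condition,
    -1 for its modality-matched unintelligible baseline, 0 elsewhere."""
    c = np.zeros(len(conditions))
    c[conditions.index(positive)] = 1.0
    c[conditions.index(negative)] = -1.0
    return c

# Auditory contrast: intelligible speech vs. spectrally rotated speech
speech_vs_rotated = contrast("speech", "rotated_speech")      # [ 1, -1,  0,  0]

# Visual contrast: written text vs. false font
text_vs_falsefont = contrast("written_text", "false_font")    # [ 0,  0,  1, -1]

# Applied to per-condition parameter estimates (betas), each contrast isolates
# processing of intelligible language beyond early auditory/visual cortex.
betas = np.array([2.0, 0.5, 1.8, 0.4])   # hypothetical effect sizes
print(speech_vs_rotated @ betas, text_vs_falsefont @ betas)
```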
In his talk, Wise noted the anterior and posterior temporal clusters of activation (illustrated in the schematic below).
Fig. 8 (Spitsyna et al., 2006). An anatomical summary of the results, overlaid on the lateral (top row) and ventral (bottom row) surfaces of the left cerebral hemisphere, showing the convergence of spoken and written language in the left STS (pink shading) and in posterior (the TOP junction) and anterior (lateral TP and anterior FG) cortex (red shading). The auditory streams of processing are shown in blue, and the visual streams are in yellow.
In discussing these findings the authors state:
This anatomical model contrasts sharply with those recently presented by Hickok and Poeppel (2004) and by Catani and ffytche (2005), which ascribe an exclusive role for posterior temporal or inferior parietal cortex in the access to verbal meaning.

In addition, there was little (if any) involvement of Broca's area, which conflicts with the results of most fMRI studies [but not, apparently, with a replication of this experiment performed by Wise's group, using sparse-sampling techniques]. Furthermore,
Although previous imaging studies investigating both spoken and written language processing and a cross-modal priming study have demonstrated inferior frontal gyrus activation, these studies involved performance of explicit metalinguistic tasks. In contrast, the present study and previous PET studies of implicit language processing do not emphasize a role for the left inferior frontal gyrus in implicit language comprehension.

Or as Wise put it, the metalinguistic tasks are "party games, not spontaneous speech" (or implicit comprehension).
(4b) Awad et al. 2007.
I was going to discuss this paper as well, but the caveats the authors put forth in their own Methods section made me abandon all hope. For instance,
Intelligible speech was contrasted with its “matched” baseline condition of an unintelligible auditory stimulus (spectrally rotated speech)... During separate scans, the subjects generated self-referential propositional speech in response to cues (e.g., “tell me what you did last weekend”). The prompts were different from those that had been used to elicit the speech that had been recorded for the speech comprehension scans. ... a potential confound is that personal narratives vary in the richness of episodic detail they contain. ... Because the subjects were naive to the prompts they were going to receive during the speech production scans, to prevent previous rehearsal of stories with loss of spontaneity, we had no control over this aspect of the study. An additional confound relates to the observation that subjects whose attention is not held by perception of meaningful stimuli or by performance of an explicit task exhibit a reproducible pattern of activity in midline anterior (prefrontal) and posterior (retrosplenial and posterior cingulate) cortex and in bilateral temporoparietal cortex.

OK, enough of that.
To summarize the Wise-guy's final message, the role of ATL regions in semantic memory has been largely overlooked in experiments using neuropsychological and fMRI methodologies, but not in those using PET or intracranial EEG (the latter discussed by Eric Halgren - speaker #2 in the session - in his talk on "ELECTROPHYSIOLOGY OF SEMANTIC RESPONSES IN THE ATL").
Footnotes
1 We must consider the possibility that he might not actually believe some of these statements...
2 With the notable exception of those using radioligands such as [11C] raclopride for dopamine D2/D3 receptors and PIB (Pittsburgh Compound-B) for amyloid plaques in the brains of Alzheimer's patients.
3 Sources include links from the Athinoula A. Martinos Center for Biomedical Imaging explaining MRI and PET, a review paper (Otte & Halsband, 2006), and my own semantic memory.
4 Susceptibility artifact is the signal dropout/distortion that occurs with fMRI, which limits the images that can be obtained from the orbitofrontal cortex and the anterior temporal lobes. In fact, the first speaker in the session, Matthew Lambon Ralph, mentioned
new fMRI studies utilizing a correction for the field inhomogeneities that plague the inferior aspects of the ATL.
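For what it's worth, the dropout itself follows from basic gradient-echo physics: the signal decays roughly as exp(-TE/T2*), and the air–tissue interfaces near the sinuses and ear canals shorten the effective T2* in OFC and the inferior ATL. A minimal sketch with illustrative (hypothetical) T2* values:

```python
import math

TE_ms = 30.0  # a typical echo time for gradient-echo EPI at 3 T

# Hypothetical effective T2* values (ms); susceptibility gradients near
# air-tissue interfaces shorten T2* in OFC and the inferior ATL.
t2_star = {"dorsal cortex": 50.0, "inferior ATL / OFC": 15.0}

for region, t2s in t2_star.items():
    # Relative gradient-echo signal: S/S0 = exp(-TE / T2*)
    rel_signal = math.exp(-TE_ms / t2s)
    print(f"{region}: {rel_signal:.0%} of the signal remains at TE = {TE_ms:.0f} ms")
```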
Main References
Awad M, Warren JE, Scott SK, Turkheimer FE, Wise RJ. (2007). A Common System for the Comprehension and Production of Narrative Speech. Journal of Neuroscience, 27(43), 11455-11464. DOI: 10.1523/JNEUROSCI.5257-06.2007
Humans devote much time to the exchange of memories within the context of shared general and personal semantic knowledge. Our hypothesis was that functional imaging in normal subjects would demonstrate the convergence of speech comprehension and production on high-order heteromodal and amodal cortical areas implicated in declarative memory functions. Activity independent of speech phase (that is, comprehension and production) was most evident in the left and right lateral anterior temporal cortex. Significant activity was also observed in the posterior cortex, ventral to the angular gyri. The left and right hippocampus and adjacent inferior temporal cortex were active during speech comprehension, compatible with mnemonic encoding of narrative information, but activity was significantly less during the overt memory retrieval associated with speech production. Therefore, although clinical studies suggest that hippocampal function is necessary for the retrieval as well as the encoding of memories, the former appears to depend on much less net synaptic activity. In contrast, the retrosplenial/posterior cingulate cortex and the parahippocampal area, which are closely associated anatomically with the hippocampus, were equally active during both speech comprehension and production. The results demonstrate why a severe and persistent inability both to understand and produce meaningful speech in the absence of an impairment to process linguistic forms is usually only observed after bilateral, and particularly anterior, destruction of the temporal lobes, and emphasize the importance of retrosplenial/posterior cingulate cortex, an area known to be affected early in the course of Alzheimer's disease, in the processing of memories during communication.
Spitsyna, G, Warren JE, Scott SK, Turkheimer FE, Wise RJ. (2006). Converging Language Streams in the Human Temporal Lobe. Journal of Neuroscience, 26(28), 7328-7336. DOI: 10.1523/JNEUROSCI.0559-06.2006
There is general agreement that, after initial processing in unimodal sensory cortex, the processing pathways for spoken and written language converge to access verbal meaning. However, the existing literature provides conflicting accounts of the cortical location of this convergence. Most aphasic stroke studies localize verbal comprehension to posterior temporal and inferior parietal cortex (Wernicke’s area), whereas evidence from focal cortical neurodegenerative syndromes instead implicates anterior temporal cortex. Previous functional imaging studies in normal subjects have failed to reconcile these opposing positions. Using a functional imaging paradigm in normal subjects that used spoken and written narratives and multiple baselines, we demonstrated common activation during implicit comprehension of spoken and written language in inferior and lateral regions of the left anterior temporal cortex and at the junction of temporal, occipital, and parietal cortex. These results indicate that verbal comprehension uses unimodal processing streams that converge in both anterior and posterior heteromodal cortical regions in the left temporal lobe.
Other Refs
Catani M, ffytche DH. (2005). The rises and falls of disconnection syndromes. Brain 128:2224-39.
Hickok G, Poeppel D. (2004). Dorsal and ventral streams: a framework for understanding aspects of the functional anatomy of language. Cognition 92:67–99.
Otte A, Halsband U. (2006). Brain imaging tools in neurosciences. J Physiol Paris 99:281-92.