Publications

Detailed Information

Neural mechanism of verbal repetition: From sounds to speech

Authors

유세진

Advisor
이경민
Major
College of Humanities, Interdisciplinary Program in Cognitive Science
Issue Date
2013-02
Publisher
Seoul National University Graduate School
Keywords
verbal repetition; word learning; fMRI; fNIRS; speech codes; DCM
Description
Thesis (Ph.D.) -- Seoul National University Graduate School: Interdisciplinary Program in Cognitive Science, February 2013. Advisor: 이경민.
Abstract
Verbal repetition is a simple, natural task that simultaneously recruits both ends of the speech chain, perception and production, and it is thought to be a fundamental tool of language acquisition, especially in word learning. In the present study, we used verbal repetition to examine (1) how speech codes are represented in the human brain; (2) speech hemodynamics during listening to, and verbal repetition of, various auditory sounds, which corroborates the first set of findings; and (3) the neural circuits recruited to associate meanings with novel sounds and thereby build speech codes.
In the first experiment, we introduced novel sounds that were perceived as words or pseudowords depending on the interpretation of an ambiguous vowel in the stimuli. Using event-related fMRI, we found an audition-articulation interface along the Sylvian fissures and superior temporal sulci bilaterally. More importantly, by contrasting word versus pseudoword trials, we found neural activity unique to word-perceived repetition in the left posterior middle temporal areas and activity unique to pseudoword-perceived repetition in the left inferior frontal gyrus. These findings imply that, even for acoustically identical sounds, two distinct speech codes, namely an articulation-based code for pseudowords and an acoustic-phonetic code for words, are used differentially for verbal repetition according to whether the speech sounds are meaningful or not.
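The word-versus-pseudoword comparison described above is a standard first-level GLM contrast. As a rough illustration only, not the author's actual pipeline, a minimal version in Python with nilearn might look like this (file names, TR, and event labels are hypothetical):

    # Minimal sketch of an event-related GLM contrast (word vs. pseudoword).
    # Assumes a BIDS-style events table with a 'trial_type' column that
    # contains 'word' and 'pseudoword' labels; all file names are invented.
    import pandas as pd
    from nilearn.glm.first_level import FirstLevelModel

    events = pd.read_csv("sub-01_task-repetition_events.tsv", sep="\t")
    model = FirstLevelModel(t_r=2.0, hrf_model="spm", smoothing_fwhm=6)
    model = model.fit("sub-01_task-repetition_bold.nii.gz", events=events)

    # Positive z-values: activity stronger for word-perceived repetition;
    # negative z-values: activity stronger for pseudoword-perceived repetition.
    z_map = model.compute_contrast("word - pseudoword", output_type="z_score")
    z_map.to_filename("word_vs_pseudoword_zmap.nii.gz")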
In the second experiment, we re-examined the findings of the first experiment in terms of hemodynamics measured with fNIRS. We monitored hemoglobin concentration changes at the inferior frontal gyri bilaterally while subjects listened to various sounds (natural sounds, animal vocalizations, human emotional sounds, pseudowords, and words) and verbally repeated the speech sounds (pseudowords and words) only. The oxygenated hemoglobin (O2Hb) change was positive at the left inferior frontal gyrus for both speech and nonspeech sounds, but negative at the right inferior frontal gyrus. Furthermore, sound type modulated the hemodynamics at the IFG even during passive listening. Contrasting verbal repetition of words and pseudowords revealed that the proportion of the O2Hb change within the total Hb concentration change was significantly higher for pseudowords than for words, indicating that articulatory codes at the LIFG were predominant for pseudowords but not for words.
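The proportion measure above reduces to a simple ratio: the fraction of the total hemoglobin change (O2Hb plus deoxygenated HHb) carried by O2Hb. A minimal sketch, with entirely hypothetical per-trial values:

    # O2Hb share of the total hemoglobin change, as described above.
    # Values are invented placeholders for one fNIRS channel over the LIFG.
    import numpy as np

    delta_o2hb = np.array([0.42, 0.35, 0.51])    # O2Hb change per trial (uM)
    delta_hhb  = np.array([-0.08, 0.02, -0.05])  # HHb change per trial (uM)

    delta_total = delta_o2hb + delta_hhb         # total Hb change
    proportion = delta_o2hb / delta_total        # O2Hb proportion

    print(proportion)  # larger values: O2Hb dominates the response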
In the third experiment, we further investigated how speech sounds become meaningful, i.e., what neural mechanism supports the learning process. We designed a simple associative learning paradigm combined with fMRI: some novel sounds were presented with meanings embedded in simple stories (learned condition), while others were presented in the same stories without meanings (unlearned condition). We then contrasted verbal repetition of the novel sounds before and after the learning phase. Unlearned sounds uniquely evoked activity at the superior and middle frontal gyri bilaterally, whereas learned sounds uniquely evoked activity at the superior and inferior parietal lobules in addition to the superior and middle frontal gyri bilaterally. A connectivity analysis using dynamic causal modeling (DCM) suggested that the dorsal fronto-parietal network may serve as an episodic buffer for the associative learning of novel sounds.
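DCM models are normally specified and fit in SPM; what the analysis estimates is the bilinear neural state equation dz/dt = (A + u(t)B)z + Cu(t), where A holds intrinsic connections, B their modulation by experimental input, and C the driving input. The sketch below simply simulates that equation forward for two hypothetical regions; all connection strengths and the input are invented for illustration:

    # Forward simulation of the bilinear DCM neural model:
    #   dz/dt = (A + u(t) * B) z + C * u(t)
    # Two hypothetical nodes (frontal, parietal); all parameters invented.
    import numpy as np

    A = np.array([[-1.0,  0.2],   # intrinsic (fixed) connectivity
                  [ 0.4, -1.0]])
    B = np.array([[ 0.0,  0.0],   # modulation of frontal->parietal coupling
                  [ 0.3,  0.0]])  # by the learning input
    C = np.array([1.0, 0.0])      # driving input enters the frontal node

    dt, steps = 0.01, 1000
    z = np.zeros(2)
    for t in range(steps):
        u = 1.0 if 200 <= t < 600 else 0.0   # boxcar stimulus
        z = z + dt * ((A + u * B) @ z + C * u)  # Euler integration step

    print(z)  # neural states after the boxcar; nonzero B strengthens coupling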
Taken together, these results show that the dorsal fronto-parietal network is recruited for the associative learning by which novel sounds are transformed into speech sounds, i.e., meaningful sounds. Once sounds become meaningful through learning, an acoustic-phonetic code at the left middle temporal gyrus is used to represent them, whereas meaningless sounds are temporarily maintained as an articulatory code at the left inferior frontal gyrus. These findings were further confirmed by the hemodynamics of speech processing at the inferior frontal gyri, indicating that speech perception may depend in part on the generation of speech codes for speech production.
Language
English
URI
https://hdl.handle.net/10371/121543
