Detailed Information

Neural mechanism of verbal repetition: From sounds to speech

dc.contributor.advisor: 이경민
dc.contributor.author: 유세진
dc.date.accessioned: 2017-07-14T00:59:16Z
dc.date.available: 2017-07-14T00:59:16Z
dc.date.issued: 2013-02
dc.identifier.other: 000000008459
dc.identifier.uri: https://hdl.handle.net/10371/121543
dc.description: Thesis (Ph.D.) -- 서울대학교 대학원 : 협동과정 인지과학전공, 2013. 2. 이경민.
dc.description.abstract: Verbal repetition is a simple and natural task in which both ends of speech processing, perception and production, are recruited simultaneously, and it is thought to be a fundamental tool of language acquisition, especially in word learning. In the present study, we used verbal repetition to examine (1) how speech codes are represented in the human brain; (2) speech hemodynamics during listening to and verbal repetition of various auditory sounds, which corroborates the first set of findings; and (3) the neural circuits recruited for associating meanings with novel sounds to build speech codes.
In the first experiment, we introduced novel sounds that were perceived as words or pseudowords depending on the interpretation of an ambiguous vowel in the stimuli. Using event-related fMRI, we found an audition-articulation interface at the Sylvian fissures and superior temporal sulci bilaterally; more importantly, by contrasting word-perceived versus pseudoword-perceived trials, we found neural activities unique to word-perceived repetition in the left posterior middle temporal areas and activities unique to pseudoword-perceived repetition in the left inferior frontal gyrus. These findings imply that even for acoustically identical sounds, two distinct speech codes, i.e. an articulation-based code for pseudowords and an acoustic-phonetic code for words, are differentially used for verbal repetition according to whether the speech sounds are meaningful or not.
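
To make this kind of contrast analysis concrete, here is a minimal first-level GLM sketch in Python using nilearn. It is an illustration only, not the pipeline used in the thesis: the file name, TR, event timings, and HRF choice are all assumptions.

    import pandas as pd
    from nilearn.glm.first_level import FirstLevelModel

    # Hypothetical event table: one row per repetition trial,
    # onsets/durations in seconds, labeled by perceived lexicality.
    events = pd.DataFrame({
        "onset":      [10.0, 25.0, 40.0, 55.0],
        "duration":   [2.0,  2.0,  2.0,  2.0],
        "trial_type": ["word", "pseudoword", "word", "pseudoword"],
    })

    model = FirstLevelModel(t_r=2.0, hrf_model="spm")      # TR is assumed
    model = model.fit("sub01_bold.nii.gz", events=events)  # hypothetical file

    # Positive z-values mark voxels responding more to word-perceived trials.
    z_map = model.compute_contrast("word - pseudoword", output_type="z_score")
    z_map.to_filename("word_vs_pseudoword_zmap.nii.gz")

The reverse contrast ("pseudoword - word") would highlight activity of the kind described above for the left inferior frontal gyrus.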
In the second experiment, we re-examined the findings of the first experiment in terms of hemodynamics, measured by functional near-infrared spectroscopy (fNIRS). We monitored hemoglobin concentration changes at the inferior frontal gyri bilaterally while subjects listened to various sounds, i.e. natural sounds, animal vocalizations, human emotional sounds, pseudowords, and words, and verbally repeated the speech sounds (pseudowords and words) only. We observed that the oxygenated hemoglobin (O2Hb) change at the left inferior frontal gyrus was positive for both speech and nonspeech sounds, but negative at the right inferior frontal gyrus. Furthermore, there was hemodynamic modulation by sound type at the IFG even during passive listening. Contrasting verbal repetition of words and pseudowords revealed that the proportion of the O2Hb change in the total Hb concentration change was significantly higher for pseudowords than for words, indicating that articulatory codes at the LIFG were predominant for pseudowords, not for words.
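
The O2Hb-proportion measure can be illustrated with a short numerical sketch, assuming per-trial concentration changes have already been derived from the fNIRS optical signals via the modified Beer-Lambert law; the numbers below are invented for illustration and are not data from the experiment.

    import numpy as np
    from scipy import stats

    # Invented per-trial concentration changes at the LIFG (arbitrary units).
    o2hb_word  = np.array([0.42, 0.55, 0.38, 0.61, 0.47])
    hhb_word   = np.array([0.21, 0.18, 0.25, 0.22, 0.19])
    o2hb_pseud = np.array([0.66, 0.71, 0.58, 0.74, 0.69])
    hhb_pseud  = np.array([0.14, 0.12, 0.17, 0.11, 0.15])

    # Proportion of the O2Hb change in the total Hb change (O2Hb + HHb).
    prop_word  = o2hb_word  / (o2hb_word  + hhb_word)
    prop_pseud = o2hb_pseud / (o2hb_pseud + hhb_pseud)

    # Two-sample t-test of the proportions between conditions.
    t, p = stats.ttest_ind(prop_pseud, prop_word)
    print(f"words {prop_word.mean():.2f}, pseudowords {prop_pseud.mean():.2f} "
          f"(t = {t:.2f}, p = {p:.3f})")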
In the third experiment, we further investigated how speech sounds become meaningful, i.e. what neural mechanism supports the learning process. We designed a simple associative learning paradigm combined with fMRI. For the associative learning, some novel sounds were presented with meanings in simple stories (learned condition), while others were presented without meanings in the same stories (unlearned condition). We contrasted verbal repetition of the novel sounds before and after the learning phase. The results revealed that unlearned sounds uniquely evoked neural activities at the superior and middle frontal gyri bilaterally, whereas learned sounds uniquely evoked neural activities at the superior and inferior parietal lobules as well as the superior and middle frontal gyri bilaterally. A connectivity analysis using dynamic causal modeling (DCM) suggested that the dorsal fronto-parietal network may serve as an episodic buffer for associative learning of novel sounds.
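
DCM itself is typically estimated with SPM in MATLAB, but the quantity it models is easy to sketch: a bilinear neuronal state equation dz/dt = (A + u*B) z + C*u, where A is the intrinsic coupling, B the input-dependent modulation, and C the driving input. The toy two-region (frontal, parietal) simulation below uses invented coupling values purely to illustrate the model form, not the parameters estimated in the thesis.

    import numpy as np

    # Bilinear DCM neuronal dynamics for a toy 2-region model.
    A = np.array([[-0.5,  0.2],   # intrinsic coupling (invented values)
                  [ 0.3, -0.5]])
    B = np.array([[ 0.0,  0.0],   # modulation: input strengthens the
                  [ 0.4,  0.0]])  # frontal -> parietal connection
    C = np.array([1.0, 0.0])      # driving input enters the frontal node

    dt, n_steps = 0.01, 2000
    z = np.zeros(2)                           # neuronal states
    trace = np.empty((n_steps, 2))
    for step in range(n_steps):
        u = 1.0 if 500 <= step < 1500 else 0.0  # boxcar input (one "trial")
        dz = (A + u * B) @ z + C * u
        z = z + dt * dz                          # Euler integration
        trace[step] = z

    print("peak responses (frontal, parietal):", trace.max(axis=0))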
Taken together, we found that the dorsal fronto-parietal network was recruited for the associative learning by which novel sounds are transformed into speech sounds, i.e. meaningful sounds. Once sounds become meaningful through learning, an acoustic-phonetic code at the left middle temporal gyrus was used to represent them, while meaningless sounds were temporarily maintained as an articulatory code at the left inferior frontal gyrus. These findings were further corroborated by the hemodynamics of speech processing at the inferior frontal gyri, indicating that speech perception may partly depend on the generation of speech codes for speech production.
dc.description.tableofcontents:
CHAPTER 1. THEORETICAL BACKGROUND 1
  1. SPEECH PROCESSING IN HUMAN BRAIN 1
  2. FROM SOUNDS TO SPEECH 5
  3. SPEECH AND VERBAL REPETITION 7
  4. PURPOSE AND ORGANIZATION OF THIS STUDY 9

CHAPTER 2. SPEECH REPRESENTATION 11
  1. HOW ARE SOUNDS REPRESENTED DURING VERBAL REPETITION? 11
  2. EXPERIMENTAL DESIGN 13
    1. Subjects and Stimuli 13
    2. Experimental Procedure 17
    3. Data acquisition and analysis 19
  3. RESULTS 23
    1. Task-related neural activities 23
    2. Word- versus pseudoword-perceived neural activities 26
  4. DISCUSSION 30
    1. Verbal repetition of ambiguous speech sounds 30
    2. Spatiotemporal localization of neural activities and its implications 31
    3. Multiple speech codes for vocabulary learning by imitation 35

CHAPTER 3. SPEECH HEMODYNAMICS 40
  1. WHAT IS THE HEMODYNAMIC DIFFERENCE BETWEEN SPEECH AND NONSPEECH? 40
  2. EXPERIMENTAL DESIGN 42
    1. Subjects and Stimuli 42
    2. Experimental Procedure 44
    3. Data acquisition and analysis 46
  3. RESULTS 49
    1. Hemodynamic responses at inferior frontal gyri (BA47) 50
    2. Verbal repetition of words and pseudowords 54
    3. Systolic vs. diastolic pulsation and BOLD changes 55
  4. DISCUSSION 59
    1. Articulation-based sound perception 60
    2. Articulatory representation of speech sounds 62
    3. BOLD signal and systolic vs. diastolic pulsation 64

CHAPTER 4. ASSOCIATING MEANINGS WITH SOUNDS 67
  1. HOW CAN SOUNDS BE ASSOCIATED WITH A SPECIFIC MEANING? 67
  2. EXPERIMENTAL DESIGN 69
    1. Subjects and Stimuli 70
    2. Experimental Procedure 71
    3. Data acquisition and analysis 74
  3. RESULTS 78
    1. Neural activities before and after learning 78
    2. Regional correlations between activated loci 83
    3. Dynamic causal models of word learning 88
  4. DISCUSSION 90
    1. Neural circuits mediating associative learning 91
    2. Associative learning and episodic buffer 94

CHAPTER 5. GENERAL DISCUSSION 97
  1. NEUROANATOMY OF VERBAL REPETITION 97
  2. VOCAL IMITATION AND AUDITORY-MOTOR INTERFACE 100
  3. NEURAL MECHANISM OF SPEECH SOUND LEARNING 102
  4. SOUND PERCEPTION AND SENSORIMOTOR INTEGRATION 105
  5. RIGHT-LATERALITY IN SPEECH PROCESSING 106

CHAPTER 6. CONCLUSION 108

CHAPTER 7. REFERENCES 111

CHAPTER 8. APPENDIX 132
  1. BEHAVIORAL EVALUATION OF REPEATING AMBIGUOUS SPEECH SOUNDS 132
  2. RELIABILITY OF SUBJECTS' RESPONSES 135
  3. RAPID FUNCTIONAL MRI FOR REPETITION TASK 136
  4. STIMULI LIST 138
    1. Experiment 1: Word-Pseudoword pairs 139
    2. Experiment 2: Words and Pseudowords only 139
    3. Experiment 3: Pseudowords & Reading passages 140
      1. Verbal materials 140
      2. Reading materials 140

국문 초록 (Abstract in Korean) 142
dc.format: application/pdf
dc.format.extent: 2960468 bytes
dc.format.medium: application/pdf
dc.language.iso: en
dc.publisher: 서울대학교 대학원
dc.subject: verbal repetition
dc.subject: word learning
dc.subject: fMRI
dc.subject: fNIRS
dc.subject: speech codes
dc.subject: DCM
dc.subject.ddc: 153
dc.title: Neural mechanism of verbal repetition: From sounds to speech
dc.type: Thesis
dc.contributor.AlternativeAuthor: Sejin Yoo
dc.description.degree: Doctor
dc.citation.pages: 144
dc.contributor.affiliation: 인문대학 협동과정 인지과학전공
dc.date.awarded: 2013-02