Functional brain connectivity of audiovisual speech processing using graph filtration
Functional brain connectivity during audiovisual speech processing:

Authors
김희정
Advisor
이동수
Major
College of Humanities, Interdisciplinary Program in Cognitive Science
Issue Date
2012-08
Publisher
Seoul National University Graduate School
Keywords
functional brain connectivity; audiovisual speech; persistent homology
Description
Thesis (Ph.D.) -- Seoul National University Graduate School: Interdisciplinary Program in Cognitive Science, August 2012. Advisor: 이동수.
Abstract
Several brain regions have been implicated in audiovisual speech integration. However, the functional network that exists between these regions to facilitate multisensory speech integration remains unclear. Previous studies suggest that the superior temporal sulcus/gyrus (STS/STG) is a critical brain area for multisensory integration, but functional connectivity between STS/STG and other regions involved in audiovisual speech processing has not been observed consistently, perhaps in part because of differences in the tasks or thresholds used across studies. To avoid this issue, the current study used graph filtration in a persistent homology framework to investigate the functional networks among brain areas activated during audiovisual speech processing. The audiovisual speech task in this study included four conditions that differed in the sensory modality used to deliver speech: audiovisual speech (AV), auditory speech (A), visual speech (V), and audiovisual non-speech (C). I constructed three speech-specific brain networks, represented each by a barcode and a minimum spanning tree derived from the single-linkage distance, and compared the resulting functional networks. These results revealed that the audiovisual speech network centered on connectivity between the posterior STG (pSTG) and the inferior frontal region, while the visual speech network centered on connectivity between the hippocampal and inferior frontal regions. Prominent functional connectivity was observed between the pSTG and frontal regions during audiovisual speech integration, and functional connectivity between the temporal regions and auditory or visual areas was driven by the auditory or visual speech modalities. Additionally, extracting meaningful visual speech without auditory speech seems to depend on the retrieval of previous memories related to speech production.
The results of this study suggest that the mechanisms by which the brain uses visual information to improve auditory speech perception may involve fronto-temporal connectivity with motor areas related to speech production, rather than visual sensory areas.
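The graph-filtration approach mentioned in the abstract can be illustrated with a small sketch. The following Python example (a minimal illustration on simulated data, not the thesis's actual pipeline; region count and time-series length are arbitrary) builds a correlation-based distance matrix between regions, computes the single-linkage dendrogram, and shows that its merge heights, which constitute the zero-dimensional persistence barcode of the graph filtration, coincide with the edge weights of the minimum spanning tree:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform
from scipy.sparse.csgraph import minimum_spanning_tree

# Hypothetical data: 6 brain regions, 100 simulated time points each.
rng = np.random.default_rng(0)
ts = rng.standard_normal((6, 100))

# Distance between regions: 1 - Pearson correlation.
corr = np.corrcoef(ts)
dist = 1.0 - corr
np.fill_diagonal(dist, 0.0)

# Single-linkage clustering: the merge heights in Z[:, 2] are the
# filtration values at which connected components merge, i.e. the
# death times of the 0-dimensional persistence barcode.
Z = linkage(squareform(dist, checks=False), method="single")
barcode = np.sort(Z[:, 2])

# Minimum spanning tree of the same weighted graph; its edge weights
# coincide with the single-linkage merge heights.
mst = minimum_spanning_tree(dist).toarray()
mst_weights = np.sort(mst[mst > 0])

print(np.allclose(barcode, mst_weights))
```

The equivalence between the single-linkage barcode and the MST edge weights is why the thesis can represent each speech network by both objects interchangeably: the MST records *which* region pairs drive the merges, while the barcode records *when* (at what filtration threshold) they occur.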
Language
English
URI
https://hdl.handle.net/10371/121542
Appears in Collections:
College of Humanities (인문대학) > Program in Cognitive Science (협동과정-인지과학전공) > Theses (Ph.D. / Sc.D._협동과정-인지과학전공)

Items in S-Space are protected by copyright, with all rights reserved, unless otherwise indicated.