
Tuesday, April 10, 2018

Neural Networks Supporting Audiovisual Integration for Speech: A Large-Scale Lesion Study


Publication date: Available online 10 April 2018
Source: Cortex
Author(s): Gregory Hickok, Corianne Rogalsky, William Matchin, Alexandra Basilakos, Julia Cai, Sara Pillay, Michelle Ferrill, Soren Mickelsen, Steven W. Anderson, Tracy Love, Jeffrey Binder, Julius Fridriksson
Auditory and visual speech information are often strongly integrated, resulting in perceptual enhancements for audiovisual (AV) speech over audio alone, and sometimes yielding compelling illusory fusion percepts when AV cues are mismatched (the McGurk-MacDonald effect). Previous research has identified three candidate regions thought to be critical for AV speech integration: the posterior superior temporal sulcus (STS), early auditory cortex, and the posterior inferior frontal gyrus. We assess the causal involvement of these regions (and others) in the first large-scale (N=100) lesion-based study of AV speech integration. Two primary findings emerged. First, behavioral performance and lesion maps for AV enhancement and illusory fusion measures indicate that classic metrics of AV speech integration are not necessarily measuring the same process. Second, lesions involving superior temporal auditory, lateral occipital visual, and multisensory zones in the STS are the most disruptive to AV speech integration. Further, when AV speech integration fails, the nature of the failure (auditory vs. visual capture) can be predicted from the location of the lesions. These findings show that AV speech processing is supported by unimodal auditory and visual cortices as well as multimodal regions such as the STS at their boundary. Motor-related frontal regions do not appear to play a role in AV speech integration.



