Human face-to-face communication is multimodal: it comprises speech as well as visual cues, such as articulatory and limb gestures. In the current study, we assess how iconic gestures and mouth movements influence audiovisual word recognition. We presented video clips of an actress uttering single words accompanied, or not, by more or less informative iconic gestures. For each word, we also measured the informativeness of the mouth movements in a separate lipreading task. We manipulated whether gestures were congruent or incongruent with the speech, and whether the words were audible or noise vocoded. The task was to decide whether the speech from the video matched a previously seen picture. We found that congruent iconic gestures aided word recognition, especially in the noise-vocoded condition, and the effect was larger (in terms of reaction times) for more informative gestures. Moreover, more informative mouth movements facilitated performance in challenging listening conditions when the speech was accompanied by gestures (either congruent or incongruent), suggesting an enhancement when both cues are present relative to just one. We also observed a trend whereby more informative mouth movements speeded up word recognition across clarity conditions, but only when gestures were absent. We conclude that listeners use and dynamically weight the informativeness of gestures and mouth movements available during face-to-face communication.

Iconic gestures that imagistically evoke features and properties of concepts (e.g., clenching one's fist and moving the arm up and down to express a hammering action) are common in face-to-face communication. For example, 20% of the utterances in dyadic interactions, in which adults spontaneously talk about a set of known and unknown objects (Vigliocco et al., 2021), contain iconic gestures, whereas only 10% of the produced utterances contain beat gestures. Iconic gestures are processed automatically, as demonstrated by the fact that listeners attend to gestures even when they are misleading (Green et al., 2009; Habets et al., 2010; Kelly et al., 2014; Kelly et al., 2010; McNeill et al., 1994; Willems et al., 2009; Wu & Coulson, 2007). McNeill et al. (1994) showed participants video clips of a speaker telling a cartoon story accompanied by either matching or mismatching iconic gestures, finding that participants considered the information from both types of gestures when asked to recall the story.

Kelly et al. (2010) presented participants with action primes followed by either congruent, weakly incongruent, or strongly incongruent speech–gesture video presentations. The participants' task was to decide whether the speech or gesture in the video was related to the action prime seen earlier. The authors found that individuals made fewer errors for presentations including weakly incongruent gestures (e.g., saying 'chop' and gesturing 'cut') than for strongly incongruent gestures (e.g., saying 'chop' and gesturing 'twist'), further suggesting that people make use of all the information available even when the meaning the gestures evoke mismatches the speech. Recent studies have extended these findings by showing that incongruence between speech and a visual cue can be especially detrimental for people with aphasia (Vigliocco et al., 2020) and by demonstrating similar interactions between different channels (hand and mouth) in users of British Sign Language (Perniss et al., 2020).

Integration of auditory and gestural information has been assessed using, for example, gestures containing information not present in speech (Beattie & Shovelton, 1999; Cocks et al., 2009; Cocks et al., 2018; Kelly et al., 1999) or degraded speech that increases listening difficulty (Holle et al., 2010; Obermeier et al., 2012). Holle et al. (2010) tested comprehension of audiovisual sentences (with or without gestures) at different signal-to-noise ratios (SNRs) and asked participants to type out all the information they understood. Participants recalled more information when gestures were present, indicating that gestures can aid speech comprehension, especially in adverse listening conditions. Obermeier et al. (2012) further found that this gestural enhancement occurs under difficult listening conditions regardless of whether the challenge is due to external noise or to hearing impairment.
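The two speech-degradation manipulations discussed above, noise vocoding and mixing speech with noise at a fixed signal-to-noise ratio, can be sketched in Python. This is a minimal illustration under assumed parameters (four logarithmically spaced bands, Butterworth filters, a 30 Hz envelope cutoff), not the stimulus pipeline of any of the cited studies; the function names are our own.

```python
import numpy as np
from scipy.signal import butter, sosfilt, sosfiltfilt

def noise_vocode(signal, fs, n_bands=4, f_lo=100.0, f_hi=7000.0, env_cut=30.0):
    """Replace spectral detail with band-limited noise while keeping
    each band's amplitude envelope (a standard noise-vocoding scheme)."""
    # Logarithmically spaced band edges between f_lo and f_hi.
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(signal))
    env_sos = butter(2, env_cut, btype="low", fs=fs, output="sos")
    out = np.zeros(len(signal), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfilt(band_sos, signal)
        # Amplitude envelope: rectify, then low-pass filter.
        env = np.clip(sosfiltfilt(env_sos, np.abs(band)), 0.0, None)
        # Modulate band-limited noise with that envelope.
        out += env * sosfilt(band_sos, noise)
    # Match the overall RMS of the original signal.
    rms_in = np.sqrt(np.mean(signal ** 2))
    rms_out = np.sqrt(np.mean(out ** 2))
    if rms_out > 0:
        out *= rms_in / rms_out
    return out

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so the speech-to-noise power ratio equals snr_db,
    then add it to the speech (as in SNR manipulations of sentences)."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / (10.0 ** (snr_db / 10.0))
    return speech + noise * np.sqrt(target_p_noise / p_noise)
```

Noise vocoding removes fine spectral structure but preserves the slow amplitude modulations that carry much of the intelligibility, which is why comprehension degrades gracefully as the number of bands decreases; SNR mixing instead masks the signal energetically.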