Published December 5, 2017 | Version v1
Book chapter · Open Access

Audiovisual speech decreases the number of cognate translations in simultaneous interpreting

  • 1. FTSK Germersheim, Johannes Gutenberg-Universität Mainz

Description

A large body of research suggests that audiovisual speech facilitates listening comprehension, especially in adverse conditions such as noisy environments or hearing impairment. Previous studies on simultaneous interpreting that focused on interpreting performance, however, failed to demonstrate a benefit of visual input. One explanation might be that conference interpreters increase their cognitive effort to maintain the quality of their rendering, so that the impact of visual input is not directly visible in the interpretation. To address this question, I concentrated on self-monitoring in simultaneous interpreting and analyzed the number of cognate translations in a $2\times2$ factorial design with presence/absence of lip movements and presence/absence of white noise as factors. The results showed an increase in cognate translations when the interpreters worked without visible lip movements, indicating less effective monitoring in this condition. The findings of this study underline the importance of visual input in simultaneous interpreting and the need to integrate it into models of simultaneous interpreting.

Files

11.pdf (235.9 kB)
