Translation Challenges in Cross-cultural Survey Projects
Presentations
Coding and Translation Issues of Open-Ended Questions in a Cross-Cultural Context
Brita Dorer, GESIS
Evi Scholz, GESIS
Cornelia Züll, formerly GESIS
Abstract: When using open-ended questions in surveys, care must be taken in how the answers are analysed. Best practice is to code these answers into a pre-defined coding scheme. In cross-cultural research projects where the answers are provided in different languages, this process becomes even more complex, because differences between languages add another layer of possible effects on final data quality. A decision must be made on whether the answers should first be translated into a common project language (such as English) and then coded with a single coding scheme developed in that language, or whether the answers should be coded in their original languages and the codings then analysed jointly across languages. Factors such as the translation quality of the answers or coder effects may influence the quality of the final data.
This paper presents a small research project in which both approaches were carried out and compared: answers to open-ended probing questions provided in Spanish and English were first translated into German and coded from the German translations; in a second scenario, the same answers were coded directly from the Spanish and English originals. The same coding scheme was used in both scenarios, and the codings from the two approaches were then compared. The research question was whether one of the two approaches yields better results than the other. We identified three error sources: lack of clarity and context in respondents’ answers, translation errors and issues, and coding errors and issues. According to our findings, both approaches yield good coding results, so we cannot give clear guidance on which of them produces better data. Nevertheless, we developed recommendations for improving coding results in such multilingual settings under both scenarios.
The Wondrous Adaptation Cycle of a PISA 2018 Questionnaire Item (and Subsequent Mutations)
Shinoh Lee, cApStAn
Elica Krajceva, cApStAn
Abstract: The OECD Programme for International Student Assessment (PISA) assesses the knowledge and skills of 15-year-old students across the globe and collects contextual information by means of questionnaires. To collect comparable data across participating countries, these questionnaires go through a complex, well-documented adaptation process. This presentation provides a high-level overview of the life-cycle of a PISA questionnaire item, from translatability assessment to structural adaptation, from ex ante harmonization to double translation, and from content adaptation to translation verification. The added value of each step in this journey will be described.
The journey begins with the source items, which are drafted by the questionnaire authors and submitted to a pool of trained linguists for a translatability assessment. The translatability assessment is designed for early detection and resolution of potential translation and adaptation hurdles.
The next step is adaptation negotiation, in which country- and language-specific adaptations, both structural and content-related, are discussed and agreed between the questionnaire authors and the National Centres.
The item is then translated following the double translation and reconciliation design. A range of quality assurance procedures deployed at this step optimizes overall translation quality and consistency.
Then comes translation verification. The verifier uses various tools and methods to carry out quality assurance, and salient verifier interventions are labelled ‘Requires follow-up’. A linguistic final check follows, in which the verifier ensures that the labelled issues have been addressed satisfactorily.
Throughout the translation and adaptation history of a questionnaire item, metadata is organized in a Questionnaire Adaptation Spreadsheet, making it possible to trace which issues were identified and how they were resolved. This is very useful, for example, in cases of Differential Item Functioning.
Advance Translation at a Large Survey Organization: Navigating the Potential Obstacles
Patricia Goerman, US Census Bureau
Mikelyn Meyers, US Census Bureau
Brita Dorer, GESIS
Elyzabeth Gaumer
Abstract: Advance translation is a method pioneered by the European Social Survey (ESS) (Dorer 2011; Fitzgerald 2015; Dorer 2020). The method typically starts with a draft source version of a questionnaire, which teams of translators in each language translate independently. The translators then review their first draft translations and code each question to indicate whether the source version was difficult to translate for any number of reasons and whether they recommend any changes to the source wording prior to completing a final translation. Subsequently, they use the committee approach to create a single translation within each language by merging the draft translations into a consensus version. U.S. Census Bureau researchers recently worked on the design and pretesting of a household survey in which the original English source wording and the Spanish translation were being revised at the same time that the survey was being translated into four other languages: Russian, Chinese, Bengali, and Haitian Creole. For typical Census Bureau surveys, the methodology implemented is translation followed by review, and sometimes pretesting via cognitive interviews when resources allow. For this project we had the opportunity to implement the advance translation method for the first time at our agency. As with any first use, we encountered unexpected challenges, for example: 1) the need to use a pre-existing translation contract under which translators had varying levels of experience translating surveys specifically, 2) an inability to meet in person with the translators because some of them were located outside the U.S., and 3) a lack of in-house researchers fluent in all of the languages involved. We addressed this last issue by recruiting local bilingual expert language reviewers (LRs). We held both cross-language translator trainings and cross-language LR meetings, which were helpful in orienting participants to the task of identifying problems with the source questionnaire. Shortcomings in the initial translations occasionally placed LRs in the dual role of translator and reviewer. Combining advance translation with cognitive testing in two languages, English and Spanish, was less effective than anticipated due to time constraints. While time- and resource-intensive, we found advance translation to be an effective method for identifying problems in the source questionnaire and improving its linguistic and cultural portability. In this talk, we will present lessons learned and insights gained, and we will seek feedback on how to improve our implementation of the advance translation method in the future.