
Survey Translation in the Digital Age




Presentations


Translators' and Programmers' Tasks in Producing Translated Electronic Instruments: Who Does What?
Alisú Schoua-Glusberg, Research Support Services Inc.


Abstract: Computer-assisted data collection is increasingly the delivery approach of choice for survey instruments, whether self- or interviewer-administered. Setting up these instruments for CAPI, CATI, ACASI, or the web requires close collaboration between the survey design team and the programmers implementing the computer-assisted instruments. Sometimes, depending on the complexity of the software in use, members of the survey design team are able to program the instruments themselves.

Survey studies in the U.S. increasingly require setting up instruments in languages other than English, most frequently Spanish. In the specific case of English and Spanish, the translated instrument poses challenges for computer-assisted setup, given grammar mismatches between the two languages. For instance, English adjectives do not specify gender, whereas Spanish adjectives do and therefore require alternative endings. Fills do not work identically across the two languages and need tweaking. Word order also differs between the two languages, which further affects the setup.
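
To make the fill problem concrete, here is a minimal sketch (my illustration, not from the presentation; the question text and names are hypothetical): a single English template serves every respondent, while the Spanish version needs gender-specific variants of the adjective.

```python
# Hypothetical example: one English fill works for any respondent, but the
# Spanish adjective must agree with the referent's gender.

EN_TEMPLATE = "Is {name} satisfied with the service?"
ES_TEMPLATES = {
    "male":   "¿Está {name} satisfecho con el servicio?",
    "female": "¿Está {name} satisfecha con el servicio?",
}

def render(language: str, name: str, gender: str) -> str:
    """Pick the right template; Spanish needs a gender-specific variant."""
    if language == "en":
        return EN_TEMPLATE.format(name=name)
    return ES_TEMPLATES[gender].format(name=name)

print(render("en", "Ana", "female"))  # Is Ana satisfied with the service?
print(render("es", "Ana", "female"))  # ¿Está Ana satisfecha con el servicio?
```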

Organizations in charge of producing the computer-assisted versions generally do not have Spanish-speaking programmers. Thus, they often rely on translators both to translate the survey questions and to adapt the program code by inserting the Spanish text (replacing the English) in a way that will deliver an administrable Spanish version. This approach puts translators in a quasi-programmer role for which they have no particular skill or training.
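
The kind of material translators are handed might look like the following sketch (hypothetical; no real CAI package is implied): question text embedded in code whose quoting, fill placeholders, and routing syntax must all survive the translation untouched. A simple placeholder check of the sort a programmer could run afterwards is included.

```python
import re

# Hypothetical instrument snippet: the translator edits only the string
# literal in "text"; the {name} fill and the routing line must stay intact.
QUESTION_EN = {
    "name": "Q12",
    "text": "In the past 12 months, did {name} visit a doctor?",
    "route": "if Q12 == 'yes' goto Q13 else goto Q20",
}
QUESTION_ES = {
    "name": "Q12",
    "text": "En los últimos 12 meses, ¿{name} visitó a un médico?",
    "route": "if Q12 == 'yes' goto Q13 else goto Q20",
}

def fills(text: str) -> set:
    """Extract the {placeholder} fills from a question text."""
    return set(re.findall(r"\{(\w+)\}", text))

# One safeguard: the translation must keep exactly the same fills
# as the English source.
assert fills(QUESTION_EN["text"]) == fills(QUESTION_ES["text"])
```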

In this presentation we will provide examples of the difficulties this entails and the issues translators face. We intend to make the case for providing adequate training to translators, so as to end up with a more finished product that requires the least additional processing from both programmers and translators.


Translation Research Meets Survey Methodology: Transferring Principles from Software Localization to Technical Instrument Design and Translation
Dorothée Behr, GESIS – Leibniz Institute for the Social Sciences


Abstract: A great deal of knowledge on questionnaire translation has been accumulated over the past decades. This concerns translation and assessment procedures, translation criteria, and translation-oriented questionnaire development (e.g., through advance translation or translatability assessments). In these developments, the increased interplay between translation and survey technology has not yet featured prominently, at least not in talks or publications. This talk seeks to fill this gap. On the one hand, it presents good practice from software localization, the field within translation studies that focuses on the development of multilingual software. On the other hand, I aim to transfer this knowledge to the survey research field, showing to what extent it can be applied to multilingual computerized surveys. The ultimate goal is to understand translation as a step that is firmly interwoven with all aspects of technical and substantive questionnaire development.
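
One localization practice of the kind such a transfer might draw on, sketched here on my own initiative (the resource keys and strings are invented, not the speaker's examples): display text is externalized from program logic into per-locale resources, so translators never edit code and new languages can be added without touching the instrument.

```python
# Minimal sketch of string externalization, a standard localization practice.
# In a real project the per-locale dictionaries would live in separate
# resource files (e.g., strings.en.json, strings.de.json).

RESOURCES = {
    "en": {"q1.text": "How satisfied are you with your life overall?"},
    "de": {"q1.text": "Wie zufrieden sind Sie insgesamt mit Ihrem Leben?"},
}

def t(locale: str, key: str) -> str:
    """Look up a display string by key, falling back to English."""
    return RESOURCES.get(locale, RESOURCES["en"]).get(key, RESOURCES["en"][key])

print(t("de", "q1.text"))  # Wie zufrieden sind Sie insgesamt mit Ihrem Leben?
```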


Survey Translation 4.0
Veronika Keck, GESIS – Leibniz Institute for the Social Sciences
Dorothée Behr, GESIS – Leibniz Institute for the Social Sciences
Brita Dorer, GESIS – Leibniz Institute for the Social Sciences


Abstract: In recent years, machine translation has evolved from rule-based and statistical engines to neural translation engines based on deep learning. It seems that any translation can now be produced quickly, effortlessly, and in apparently good quality without any human intervention. As part of the EU-funded SSHOC project, machine translation was integrated into the workflow of the Harkness TRAPD model (double translation & team review) to explore its potential and use for survey research: in four team set-ups (2 x English-Russian, 2 x English-German), one of the initial translations was replaced by machine translation and post-editing, i.e., the revision of machine-translated text. The presentation will zoom in on the translation step ('T') of the TRAPD model to measure the usability of machine translation in the context of questionnaire translation, since usability is one of the key factors for increasing the adoption of machine translation. Three dimensions of usability, i.e., effectiveness, efficiency, and satisfaction, will be analysed in this regard, taking up a categorisation from the ISO 9241 usability standards. Effectiveness will be measured by analysing the errors produced by the machine engine. Efficiency will be analysed by comparing the effort needed to produce a text from either a translator's or a post-editor's perspective. Satisfaction will be captured by a post-task questionnaire. The presentation focuses on work in progress on the analysis of the three usability dimensions from a user-centered perspective.
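
As one concrete illustration of the efficiency dimension (my sketch, not the study's actual instrumentation), post-editing effort is often proxied by the edit distance between the raw MT output and the post-edited text:

```python
# Character-level Levenshtein distance as a rough post-editing effort proxy.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

mt_output = "Wie zufrieden sind Sie mit Ihrem Leben insgesamt?"
post_edit = "Wie zufrieden sind Sie insgesamt mit Ihrem Leben?"
print(levenshtein(mt_output, post_edit))  # larger distance = more editing effort
```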


Questionnaire Translation and Review: Life Outside Excel Without Loss of Information. Using LQE in the CAT Tool to Document Linguists’ Interventions
Manuel Souto Pico, cApStAn
Steve Dept, cApStAn
Briac Pilpré, OmegaT core developer


Abstract: To date, the most widespread approach to language tasks (translation, verification, etc.) in International Large-Scale Assessments (ILSA) involves a translation tool, where linguists enter or edit the target-language version, and a monitoring tool, e.g., an Excel file, where they check translation and adaptation (T&A) guidelines or document their work.

This setup presents problems arising from Excel as a tool and from the structure of the Excel files: (i) the monitoring tools tend to be unwieldy, with too many columns, and require a large screen and/or a lot of scrolling; (ii) having to cycle back and forth between windows is distracting; and (iii) there is no connection between the actual translation units and the related documentation.

A technical task force at cApStAn is exploring alternatives that would allow users to receive instructions and document their work directly in the translation project within their translation environment. The approach presented here makes it possible to perform both tasks in OmegaT. This is an open-source/free-software translation tool that we use in a number of ILSA projects—with modestly successful results—and that allows for relatively easy customization.

Our proposal includes:    

  • T&A guidelines included directly in the OmegaT project, so that guidelines are displayed when the user visits each segment.
  • A dialog in OmegaT to perform linguistic quality evaluation (LQE), where linguists can log intervention categories, comments, severity codes, etc. directly in the translation environment and keep them linked to the source text, the original translation, and the edited version. This allows for richer exploitation of the linguists' work, making it possible to produce instant diff reports, calculate edit distances, and so on (see the sketch after this list).
  • Different approaches to store, display and update the monitoring tool, from an Excel file to a database-backed web interface.
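
As a minimal sketch of the LQE point above (my illustration, not cApStAn's or OmegaT's implementation; the field names and sample item are invented), an LQE record that stays linked to its translation unit makes diff reports and edit-distance calculations straightforward to derive:

```python
from dataclasses import dataclass
import difflib

@dataclass
class LQERecord:
    segment_id: str        # links the record to the translation unit
    source: str
    original_translation: str
    edited_translation: str
    category: str          # e.g. "mistranslation", "terminology", "register"
    severity: str          # e.g. "minor", "major"
    comment: str = ""

    def diff(self) -> str:
        """Instant word-level diff between the original and edited translations."""
        tokens = difflib.ndiff(self.original_translation.split(),
                               self.edited_translation.split())
        return " ".join(t for t in tokens if not t.startswith("? "))

rec = LQERecord("ITEM_042_stem",
                "How often do you read for enjoyment?",
                "¿Con qué frecuencia lees por placer?",
                "¿Con qué frecuencia lee usted por placer?",
                category="register", severity="minor",
                comment="Assessment uses formal address.")
print(rec.diff())  # marks "lees" as removed and "lee usted" as added
```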