Innovative Methods and Technology
Interviewers’ Training: Do E-Learning Platforms Make the Grade?
Magali Rheault, Gallup
Zacc Ritter, Gallup
Julie Zeplin, Gallup
Abstract: Standardization and consistency are two fundamental characteristics of field staff training in multinational, multiregional and multicultural surveys. Successful fieldwork depends on mastery of the methodology and the survey instrument. A robust E-learning platform must not only impart this knowledge to interviewers and supervisors, but it must also include systems to evaluate their understanding of technical concepts and enable the team to practice.
With field teams across more than 140 countries, the Gallup World Poll (GWP) presents a unique training challenge. While Gallup provides trainers with a standardized survey operations manual, the GWP’s scale offers an opportunity to explore technology solutions that improve the delivery of standardized content. In 2018, the GWP research team built two online learning platforms (one for CAPI field teams and the other for CATI teams) to test whether E-learning was a viable training tool. Both platforms were translated into a combined total of six languages and pilot-tested successfully in 13 countries in 2019.
With the COVID-19 pandemic, use of the CATI E-learning platform was scaled to 32 countries as in-person interviewers’ training was either prohibited or limited in scope. The rapid scaling of our training tool gave us the opportunity to test how well E-learning could be incorporated into the traditional teaching curriculum. This paper shares results from this large-scale testing and explores solutions to build even more robust E-learning tools. We delve into the challenges of building E-learning platforms for use in different countries, cultures and languages as well as share operational learnings.
Building a New Probabilistic Panel in France, Germany and Greece Using a Mixed Mode Methodology
Jaime Burnett, Kantar
Abstract: In recent years, conducting surveys online has become an increasingly common and credible alternative to classical approaches such as face-to-face or telephone surveys. But despite all the effort put into maximizing representativeness, online panels are generally not probabilistic. Bearing in mind that declining response rates and increasing costs may at times be barriers to employing traditional probabilistic modes, Kantar has been exploring ways to build probabilistic panels in Europe. The purpose of this paper is to describe a small-scale test for building a new probabilistic panel in France, Germany and Greece using a mixed-mode methodology.
In all three countries we make use of widely available sampling frames. In France we test the suitability of a push-to-web recruitment methodology, and alongside it we test other innovative methods for probabilistic recruitment that use the infrastructure of the French postal service, whilst in Germany and Greece we assess a phone-to-web design. For Greece we look at a bespoke recruitment survey design, whilst in Germany we piggyback recruitment to the panel on our omnibus telephone survey. We propose to test a number of sample design criteria that are likely to affect response rates to the initial recruitment and to subsequent online panel surveys. By way of example, we will look at whether different incentive amounts and reminder strategies have a positive impact on joining the panel. This work will take place in early 2020, so we expect to be able to provide the results in time for the conference. This paper will add to the evidence base for what works when building survey panels from a probabilistic sample base; in particular, the use of different recruitment strategies is a novel feature.
2020 Census Language Support Program: Overview and Findings from the Most Robust Program Built by the U.S. Census Bureau
Lily Kapaku, U.S. Census Bureau
Abstract: The goal of the U.S. Census is to count everyone “once, only once, and in the right place”. This includes counting limited-English-speaking (LES) households. To reach this goal and to provide LES households with an opportunity to respond immediately and in more languages than ever before, the U.S. Census Bureau developed the language support program, a significant expansion of the 2010 Census efforts and the most robust language program the agency has ever built. For the 2020 Census, respondents could respond online or by phone in English and 12 non-English languages (Spanish, Chinese, Vietnamese, Korean, Russian, Arabic, Tagalog, Polish, French, Haitian Creole, Portuguese, and Japanese).
The Census Bureau also provided additional language support materials in 59 non-English languages. This includes both video and print language guides. Video language guides, narrated in 59 non-English languages, assist respondents in responding online. Print language guides, written in 59 non-English languages, assist respondents in filling out the paper questionnaire. By providing internet and telephone response options in English and 12 non-English languages and language guides in 59 non-English languages, the Census Bureau provided support to over 99% of all U.S. households.
This presentation will focus on how the Census Bureau developed each of the aforementioned components and will highlight preliminary findings from the 2020 Census language support program.
Small Area Estimation Development and Dissemination for US PIAAC
Thomas Krenzke, Westat
Abstract: The term small area estimation (SAE) refers to a family of statistical techniques for estimating parameters for subpopulations or small geographic areas of interest. SAE combines survey data with correlated auxiliary data at the small-area level from other sources to model the estimates of interest. A model-dependent approach was used to produce model-based estimates for states and counties, for which PIAAC data are insufficient for direct estimation. The models use PIAAC survey data in conjunction with correlated data across geographic areas from the U.S. Census Bureau’s American Community Survey (ACS) to produce reliable “indirect” estimates. These estimates are predictions of how the adults in a state or county would have performed had they been administered the PIAAC assessment. This presentation provides an overview of the methodology used to develop the indirect estimates for states and counties and demonstrates the PIAAC Mapping Tool. The tool allows a user to explore, compare and analyze the indirect estimates; it provides map displays for eight PIAAC outcomes, summary cards for states and counties, and a comparison feature that allows one to conduct statistical tests.
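Indirect estimates of this kind are typically produced with an area-level model. The abstract does not name the specific model, but a standard choice consistent with the description (direct survey estimates combined with ACS covariates) is the Fay-Herriot model, sketched here for orientation:

\[
\hat{\theta}_i = \theta_i + e_i, \qquad e_i \sim N(0, \psi_i)
\]
\[
\theta_i = x_i^{\top}\beta + u_i, \qquad u_i \sim N(0, \sigma_u^2)
\]

where \(\hat{\theta}_i\) is the direct survey estimate for area \(i\) with known sampling variance \(\psi_i\), and \(x_i\) holds area-level covariates (e.g., from the ACS). The empirical best predictor is a weighted blend of the direct and synthetic estimates,

\[
\tilde{\theta}_i = \gamma_i \, \hat{\theta}_i + (1 - \gamma_i)\, x_i^{\top}\hat{\beta}, \qquad \gamma_i = \frac{\sigma_u^2}{\sigma_u^2 + \psi_i},
\]

so areas with precise direct estimates (small \(\psi_i\)) lean on the survey data, while areas with sparse data borrow strength from the covariate model.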
A Sample Management Service for Cross-National Internet Surveys
Geneviève Michaud, Sciences Po
Quentin Agren, Sciences Po
Rory Fitzgerald, ESS
Gianmaria Bottoni, ESS
Agnalys Michaud, Sciences Po
Abstract: In the framework of the work package “Innovations in Data Production”, part of the European Horizon 2020 programme Social Sciences and Humanities Open Cloud (SSHOC, grant 823782) as well as ESS-SUSTAIN-2 (grant 871063), Sciences Po has been collaborating with the European Social Survey (ESS ERIC) since 2019 to provide a tool to manage cross-national samples for longitudinal internet surveys. Based on the knowledge gained during prior experiments, we identified several challenges that such a tool must meet.
The solution should first deal with the complexity implied by a cross-national sample and allow the data collection process to be coordinated and harmonized across participating countries. This involves several aspects: multilingualism, synchronization and collaboration.
The solution should also comply with ethical and legal rules on personal data protection. It should collect data frugally and support data access and deletion requests. It should likewise define several roles, with corresponding permissions to access specific data subsets, while facilitating central management.
Finally, the solution should accommodate limited resources, whether financial or in terms of skill sets, and should be responsive and user-friendly, providing extensive online training material and comprehensive user guides. We have come to the conclusion that no off-the-shelf software solution meets our needs.
Since 2019, we have been designing and developing a software application to meet these challenges. During the session, we will give a live demonstration of the resulting fully functional web panel sample service (WPSS), paired through an Application Programming Interface (API) with the Qualtrics survey platform. Following a realistic scenario, we will play the role of each stakeholder and walk through the corresponding features.
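To make the requirements above concrete, here is a minimal, entirely hypothetical sketch of what such a sample management service could look like: country-scoped access for national coordinators, status synchronization with an external survey platform, and GDPR-style erasure of personal data. All class and method names are illustrative; they are not the actual WPSS or Qualtrics API.

```python
from dataclasses import dataclass

@dataclass
class SampleUnit:
    unit_id: str
    country: str            # ISO 3166-1 alpha-2 code, e.g. "FR"
    email: str              # personal data, subject to deletion requests
    status: str = "issued"  # issued -> invited -> responded / refused

class SampleService:
    """Hypothetical central sample registry for a cross-national panel."""

    def __init__(self) -> None:
        self._units: dict[str, SampleUnit] = {}

    def register(self, unit: SampleUnit) -> None:
        self._units[unit.unit_id] = unit

    def units_for_country(self, country: str) -> list[SampleUnit]:
        """A national coordinator role sees only its own country's sample."""
        return [u for u in self._units.values() if u.country == country]

    def update_status(self, unit_id: str, status: str) -> None:
        """Sync a fieldwork outcome pushed back from the survey platform."""
        self._units[unit_id].status = status

    def erase_personal_data(self, unit_id: str) -> None:
        """Honour a deletion request while keeping the anonymized record."""
        unit = self._units[unit_id]
        unit.email = ""
        unit.status = "erased"

# usage: two countries, one response, one erasure request
svc = SampleService()
svc.register(SampleUnit("FR-001", "FR", "a@example.org"))
svc.register(SampleUnit("DE-001", "DE", "b@example.org"))
svc.update_status("FR-001", "responded")
svc.erase_personal_data("DE-001")
```

The central registry plus per-country views mirrors the abstract's tension between central management and role-scoped access; a real implementation would add authentication and an audit trail.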
Blaise’s Case Management Application (CMA) Demo
Gina Cheung, University of Michigan-SRC
Lon Hofman, CBS
Jennifer Kelley, University of Michigan-SRC
Video 1 – Overview of CMA Project
Video 2 – CMA Demo