Interviewer Effects and Monitoring Interviewers




Presentations


Tackling Undesirable Interviewer Behavior in the ESS
Roberto Briceno-Rosas, GESIS – Leibniz Institute for the Social Sciences
Joost Kappelhof, The Netherlands Institute for Social Research/SCP
Daniela Ackermann-Piek, GESIS – Leibniz Institute for the Social Sciences
May Doušak, University of Ljubljana
Rebekka Kluge, GESIS – Leibniz Institute for the Social Sciences
Jannine van de Maat, The Netherlands Institute for Social Research/SCP
Paulette Flore, The Netherlands Institute for Social Research/SCP


Abstract: Interviewers are an integral part of any face-to-face survey. They can affect both the measurement and the representation dimensions of the Total Survey Error framework (TSE; Groves et al., 2009). While undesirable interviewer behavior (UIB) on either or both of these dimensions can obviously affect the accuracy of estimates, in multinational, multiregional, or multicultural (3MC) surveys UIB also affects the comparability of estimates. Hence, reducing UIB becomes an even more urgent area of attention for a face-to-face 3MC survey.

The European Social Survey (ESS) is a biennial cross-national face-to-face survey that has been conducted across Europe since 2001; round 10 is currently in the field. Its aim is to measure attitudes, beliefs, and behavior patterns in a changing world, and to improve survey methodology in cross-national studies. To keep the potential for UIB and its adverse effects on data quality to a minimum, the ESS has developed a new work package that tackles the issue holistically: it provides quality assurance (QAssu) during the preparatory phase, quality control (QC) during the data collection phase, and quality assessment (QAsess) in the post-data-collection phase of the survey life cycle. This approach should allow the ESS to prevent, detect, and assess interviewer-related issues affecting ESS data quality.

We present the ESS approach to minimizing UIB, describe in detail the challenges of carrying out effective QAssu, QC, and QAsess in a timely and comparable way, and present some results. We also discuss the impact COVID-19 has had on the role of the interviewer in the ESS so far, and the challenges expected in a post-pandemic survey landscape.


Live Monitoring of Progress, Performance and Quality of Multi-Country CAPI Fieldwork
Jamie Burnett, Kantar
Kateryna Stelmakh, Kantar


Abstract: A real driver of quality in face-to-face surveys is the work done by interviewers during the data collection phase. Whilst a large amount of effort goes into training interviewers and back-checking the data collected, there has traditionally been little visibility of the quality of their work. With the increasing use of CAPI devices and electronic contact sheets, we now have a much greater opportunity to quality-assure the data in real time and to address issues that might compromise quality (e.g. selection errors) during, rather than after, fieldwork.

To this end, Kantar have developed an online reporting tool which enables near real-time monitoring of CAPI fieldwork by local field, central coordination, and client management teams. The objective of the tool is to provide detailed information on three key metrics (progress, performance, and quality) whilst fieldwork progresses, enabling Kantar to act in a timely fashion if the contact strategy is not being adhered to or if interviews fall below the required quality standards and need to be replaced. The tool gives visibility of key performance and fieldwork metrics via a summarised web-enabled Power BI dashboard built from a combination of sample data, respondent data, and contact logs, including GPS fixes collected at each contact attempt. Consistent, automated daily extraction of data across multiple markets allows teams to address quality concerns in real time and significantly reduces the effort of reporting fieldwork status to clients.
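
As a rough illustration of the kind of daily extraction such a tool relies on, the Python (pandas) sketch below computes per-interviewer summaries of the three metrics from a contact-log table. The column names, outcome codes, and metric definitions are assumptions made for this example, not Kantar's actual schema or Power BI data model.

    import pandas as pd

    # One row per contact attempt, as recorded on an electronic contact sheet.
    contacts = pd.DataFrame({
        "interviewer_id": ["A1", "A1", "B2", "B2", "B2"],
        "outcome": ["interview", "refusal", "interview", "noncontact", "interview"],
        "gps_captured": [True, True, False, True, True],
    })

    def daily_metrics(df: pd.DataFrame) -> pd.DataFrame:
        """Summarise progress, performance and quality per interviewer."""
        g = df.groupby("interviewer_id")
        return pd.DataFrame({
            # Progress: completed interviews to date.
            "interviews": g["outcome"].apply(lambda s: (s == "interview").sum()),
            # Performance: share of contact attempts yielding an interview.
            "success_rate": g["outcome"].apply(lambda s: (s == "interview").mean()),
            # Quality: share of attempts with a valid GPS fix (a proxy check).
            "gps_rate": g["gps_captured"].mean(),
        })

    print(daily_metrics(contacts))

In practice, a summary like this would be recomputed on each day's extract across all markets and fed into the dashboard, rather than printed.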


PMA Data Quality Dashboard: A Business Analytics Solution for Interviewer Error Monitoring
Shulin Jiang, Johns Hopkins Bloomberg School of Public Health


Abstract: Interviewers play a fundamental role in survey operations and data quality. Within the Total Survey Error (TSE) framework, interviewers can affect the survey process through coverage, nonresponse, measurement, and processing errors. Because these errors are usually specific to individual interviewers, best practices call for rigorous and routine monitoring, along with corrective procedures, all within budget constraints. It is therefore essential to develop a comprehensive, real-time, and low-cost monitoring system at the interviewer level.

Performance Monitoring for Action (PMA, formerly PMA2020) is a multinational, multiregional, and multicultural (3MC) survey platform operating in 11 countries to assess various health and development topics. It has recruited and trained over 1,500 female interviewers who reside near the enumeration areas and use smartphones to conduct computer-assisted personal interviews (CAPI). To deliver high-quality data, PMA has implemented numerous data quality indicators to routinely monitor household and facility surveys in all geographies. These indicators are derived from multiple sources, including survey data, item-level time-stamped paradata, and GIS data.
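
As an illustration of how one such indicator might be derived from item-level time-stamped paradata, the Python (pandas) sketch below flags interviewers whose median item duration is implausibly short, a common proxy for speeding or fabricated interviews. The field names and the five-second threshold are assumptions for this example, not PMA's actual definitions.

    import pandas as pd

    # One row per answered item, with the time spent on it (from paradata).
    paradata = pd.DataFrame({
        "interviewer_id": ["F01", "F01", "F02", "F02", "F02"],
        "question": ["q101", "q102", "q101", "q102", "q103"],
        "duration_sec": [12.0, 9.5, 2.1, 1.8, 2.4],
    })

    # Median item duration per interviewer; unusually fast administration
    # can signal skipped probes or fabricated answers.
    median_item_time = paradata.groupby("interviewer_id")["duration_sec"].median()

    # Flag interviewers below a purely illustrative five-second floor.
    print(median_item_time[median_item_time < 5.0])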

However, statistical tools often fail to offer survey operators a user-friendly, real-time, low-cost, and comprehensive way to process, present, and interpret these quality indicators. Given this gap, we have found that business intelligence tools such as Microsoft Power BI offer the best solution for these data analytics and visualization demands in interviewer-level data quality monitoring.

In this presentation, we will demonstrate dashboard development from theory to implementation, including our quality indicators under the TSE framework, our algorithm for interviewer error detection, and how we made this information actionable in a data quality dashboard that helps resolve urgent data quality issues and improve interviewers' training outcomes.
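
The abstract does not spell out the detection algorithm, so the sketch below stands in with a standard z-score outlier check on interviewer-level item means: an interviewer whose mean on a key item deviates strongly from the rest is flagged for supervisor review. The data, item, and cutoff are illustrative assumptions only, not PMA's actual method.

    import pandas as pd

    # Responses to one binary key item, grouped by interviewer.
    responses = pd.DataFrame({
        "interviewer_id": ["F01"] * 4 + ["F02"] * 4 + ["F03"] * 4,
        "item_value": [1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1],
    })

    # z-score of each interviewer's mean against all interviewers' means.
    means = responses.groupby("interviewer_id")["item_value"].mean()
    z = (means - means.mean()) / means.std(ddof=0)

    # Flag strong deviations for review (the 1.0 cutoff is illustrative).
    print(z[z.abs() > 1.0])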
