Data Collection Challenges and Solutions
Modularising the European Working Conditions Survey to Reduce Interview Duration
Christopher White, Eurofound
Abstract: As a result of the COVID-19 pandemic and the associated lockdowns, the European Working Conditions Survey 2020 was interrupted in the field. A new version has been designed using a telephone-interviewing methodology, with data to be collected in Q2 2021.
The change in data collection mode necessitated shortening the interview by more than half, from 45 minutes to under 20. At the same time, we sought to retain as many questions as possible to preserve the same analytical possibilities, while adding a few new questions relating to COVID-19.
We have developed a system to convert our existing questionnaire by using a split-questionnaire design (SQD) approach in which every respondent skips some questions. We plan to impute the missing data, and to compare the estimates with a two-phase sampling weighting approach.
In this paper we present the reasons for changing the questionnaire and explore the difficult trade-offs we had to consider between different aspects of survey quality. We will discuss the approach taken, our plans to validate the statistical outputs, the lessons learned and their relevance for other surveys.
Categories:
- Questionnaire development & testing
- Sampling Innovations
- Data collection challenges & solutions
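The core idea of a split-questionnaire design is to assign each respondent a subset of question modules at random, so that every pair of modules is observed jointly for some respondents and the skipped items can later be imputed. The sketch below is a minimal illustration of that assignment logic, not Eurofound's actual system; the module names and counts are invented.

```python
import random

# Hypothetical layout: a core block asked of everyone plus rotating
# modules, of which each respondent receives only a random subset.
CORE = ["core"]                     # always administered
MODULES = ["A", "B", "C", "D"]      # rotating modules (invented names)
MODULES_PER_RESPONDENT = 2          # each respondent skips the rest

def assign_modules(respondent_id, seed=42):
    """Deterministically assign a random subset of modules to one respondent."""
    rng = random.Random(f"{seed}-{respondent_id}")
    return CORE + sorted(rng.sample(MODULES, MODULES_PER_RESPONDENT))

# With random subsets, every pair of modules co-occurs for some
# respondents, so cross-module relationships remain estimable, which is
# what makes imputation of the skipped items feasible.
assignments = [assign_modules(i) for i in range(1000)]
```

In practice the module split, the number of modules per respondent, and the imputation model all require careful design; the abstract also mentions comparing imputed estimates against a two-phase sampling weighting approach, which this sketch does not cover.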
When Do More Contact Attempts Become Too Many Contact Attempts? A Review of Recent CATI Outcomes in Europe
Alexandra Castillo, Pew Research Center
Abstract: A rigorous callback strategy is often considered essential for reaching harder-to-reach subgroups in general population survey design. However, without empirical investigation, the optimal number of contact attempts for improving sample representativeness and point-estimate accuracy is not obvious. Moreover, within a longitudinal cross-national project, the callback protocol that works best for one country may not work best for another, and the preferred strategy at one point in time may be less advantageous at another.
Of course, there are potential methodological and practical tradeoffs. For instance, while additional contact attempts may improve overall sample balance, the extra effort may increase project costs, extend timelines and overextend interviewing staff. However, reducing effort could exacerbate sample biases – especially among younger and less educated segments of the population who can be more challenging to reach by phone – requiring remedies such as further weighting, which could reduce analytic power and overall project value.
To explore these questions, we reviewed a subset of European countries included in Pew Research Center projects since 2019, analyzing sample performance and attitudinal point-estimates by number of contact attempts across countries and over time. The presentation will focus on key demographic components of sample performance (e.g., gender, age and educational attainment), substantive outcomes that typically lead our global reporting, and operational costs (e.g., the number of calls needed to achieve an interview and the effect of different callback designs on survey outcome rates). Lastly, we will highlight any associated learnings related to fielding during the ongoing COVID-19 pandemic.
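One simple way to examine the trade-off the abstract describes is to ask how sample composition would change if callbacks had been capped at a given number of attempts. The sketch below is a hypothetical illustration of that tabulation, not Pew's actual analysis pipeline; the records, field names, and age groups are all invented.

```python
# One invented record per completed interview: the attempt on which it
# completed and the respondent's age group.
completed = [
    {"attempts": 1, "age_group": "50+"},
    {"attempts": 1, "age_group": "50+"},
    {"attempts": 2, "age_group": "30-49"},
    {"attempts": 3, "age_group": "18-29"},
    {"attempts": 5, "age_group": "18-29"},
]

def composition_under_cap(records, cap):
    """Share of each age group among interviews achievable with <= cap attempts."""
    kept = [r for r in records if r["attempts"] <= cap]
    shares = {}
    for r in kept:
        shares[r["age_group"]] = shares.get(r["age_group"], 0) + 1
    return {group: n / len(kept) for group, n in shares.items()}
```

Comparing `composition_under_cap` at increasing caps against known population benchmarks shows whether the extra attempts are actually pulling in the younger and less educated respondents that early attempts miss.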
Lessons Learned From Eurofound’s Large-Scale Living, Working and COVID-19 E-Survey
Daphne Ahrendt, Eurofound
Eszter Sandor, Eurofound
Massimiliano Mascherini, Eurofound
Tadas Leoncikas, Eurofound
Abstract: In April 2020, Eurofound launched a large-scale online survey to capture how the COVID-19 pandemic was affecting people’s lives and work. Entitled Living, working and COVID-19, the aim of the survey is to investigate the impact on well-being, work and telework and on the financial situation of people across the European Union. It includes a range of questions relevant to people across various age groups and life situations. Most of the questions are based on Eurofound’s European Quality of Life Survey (EQLS) and European Working Conditions Survey (EWCS), while other questions are new or were adapted from other sources such as the EU Statistics on Income and Living Conditions (EU-SILC).
Respondents were recruited via uncontrolled convenience sampling, specifically by publishing the link to the survey on social media and distributing it among Eurofound’s contacts and stakeholders, complemented by social media advertising, targeting hard-to-reach groups.
To date, two rounds of the survey have been fielded, with a third round in preparation.
In the second round a panel element was introduced. Round 1 collected email addresses from respondents interested in participating in further survey rounds (in compliance with GDPR). As an incentive to participate in Round 2, panel respondents were asked whether they would like to receive a personalised report of the Round 1 results.
The e-survey has been an interesting pilot project for Eurofound, testing a data-collection method that is new and unorthodox for the agency while still covering the entire EU.
In our presentation we will discuss some of the methodological challenges associated with the design of this non-probability survey, explain the weighting applied, and present the lessons learned to date. We will also present an initial assessment of the panel component of the e-survey and of the effect of follow-up reminders and appeals to complete the survey on panel participation rates.
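Convenience samples like this one are commonly weighted to known population margins; one standard technique is raking (iterative proportional fitting), which repeatedly adjusts weights so that each weighted margin matches its target. The abstract does not specify Eurofound's actual weighting scheme, so the sketch below is only a minimal raking illustration with invented variables and targets.

```python
# Minimal raking (iterative proportional fitting) sketch. `sample` is a
# list of respondents as dicts of category values; `targets` maps each
# weighting variable to its population shares. All values are invented.
def rake(sample, targets, iterations=50):
    weights = [1.0] * len(sample)
    for _ in range(iterations):
        for var, levels in targets.items():
            # Current weighted total in each level of this variable.
            totals = {lvl: 0.0 for lvl in levels}
            for w, unit in zip(weights, sample):
                totals[unit[var]] += w
            total_w = sum(weights)
            # Scale each unit's weight so weighted shares hit the targets.
            for i, unit in enumerate(sample):
                lvl = unit[var]
                weights[i] *= (levels[lvl] * total_w) / totals[lvl]
    return weights

sample = [{"sex": "m", "edu": "hi"},
          {"sex": "f", "edu": "hi"},
          {"sex": "f", "edu": "lo"}]
targets = {"sex": {"m": 0.4, "f": 0.6},
           "edu": {"hi": 0.6, "lo": 0.4}}
weights = rake(sample, targets)
```

Raking only corrects for the variables included in the margins; for a self-selected online sample, bias on unmeasured characteristics can remain, which is one of the methodological challenges the presentation addresses.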
Survey Mode and Fieldwork Effort – Their Impact on the Quality of Measurement. A Meta-analysis Based on the Data from ISSP 1996-2015
Adam Rybak, Adam Mickiewicz University in Poznan
Abstract: Constantly rising nonresponse poses a threat to the entire survey industry. However, striving for higher participation may prove counterproductive: additional fieldwork effort could simply raise costs or, in certain circumstances, worsen sample quality.
Therefore, I examined the relationship between survey characteristics on the one hand and response rate and nonresponse bias on the other. Since there are no straightforward methods of estimating unit nonresponse bias, I used the internal criterion proposed by Sodeur (concerning the proportion of genders in two-partner households). My analysis is based on the survey documentation and results from the International Social Survey Programme 1996-2015.
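The intuition behind Sodeur's criterion is that in heterosexual two-partner households where only one partner is interviewed and within-household selection is random, about half of those respondents should be male; the deviation from one half serves as an internal indicator of nonresponse bias. A minimal sketch of that computation, under those assumptions:

```python
# Sketch of Sodeur's internal criterion: respondents drawn from
# two-partner households should be ~50% male under unbiased selection
# and response; the signed deviation from 0.5 indicates bias.
def sodeur_indicator(respondent_sexes):
    """respondent_sexes: 'm'/'f' codes for respondents from two-partner households."""
    share_male = respondent_sexes.count("m") / len(respondent_sexes)
    return share_male - 0.5  # 0 under unbiased within-household selection
```

For example, a survey in which three quarters of such respondents are male would yield an indicator of +0.25, flagging substantial over-representation of men.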
I created random-effects meta-regression models based on data from 20 ISSP waves conducted across every continent, in order to estimate the impact of variables such as the number of contacts with the respondent, advance letters, material incentives, back-checking, interviewer incentives, and the survey mode (or mix of modes) on sample quality and response rate. The primary value of this analysis lies in the amount of standardized data collected and included in the models, which exceeds the usual evidence base of most similar research.
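Random-effects models of this kind rest on a between-study variance component estimated alongside the within-study variances. As a hedged illustration of that machinery (not the author's actual models, which are meta-regressions with survey-design covariates), the sketch below implements the classic DerSimonian-Laird random-effects pooling on invented effect sizes:

```python
import math

# DerSimonian-Laird random-effects pooling sketch. `effects` are
# per-wave estimates and `variances` their sampling variances; both are
# invented here. A meta-regression would additionally model `effects`
# as a function of survey-design covariates.
def dersimonian_laird(effects, variances):
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    k = len(effects)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)               # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]   # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2
```

When the per-wave estimates are homogeneous, the between-study variance collapses to zero and the random-effects estimate coincides with the fixed-effect one; heterogeneity across waves inflates both the weights' denominators and the pooled standard error.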