Upcoming RECSM Webinars
We are pleased to announce the upcoming RECSM webinars.
March 31, 2022, at 2pm (online)
“What do I do with these images?”: A practical guide to the classification of images sent by survey participants
Presenter: Patricia A. Iglesias (RECSM)
Authors: Patricia A. Iglesias, Carlos Ochoa and Melanie Revilla
Requesting images from survey respondents is a practice that has gained popularity in recent years. Although this new data collection strategy may offer a plethora of advantages, it requires researchers to know how to process and analyze this new type of data, an expertise that is not yet widespread among survey practitioners.
This webinar aims to guide researchers inexperienced in image analysis through the main concepts involved in classifying images and the options available for carrying out such analyses. Furthermore, we will present the main factors researchers should take into account when deciding how to classify images, focusing on how these factors should be assessed in order to choose the most suitable classification method. Practical examples will be reviewed to illustrate how to deal with these factors and the decision process.
All these elements should help survey practitioners interested in requesting images to decide whether they are in a position to analyze them and, if so, which option is most suitable for them.
Join via Zoom: ID 838 8790 5779
Concurrent, Sequential or Web-Only? Evidence from a mixed-mode recruitment experiment in FReDA
Presenter: Pablo Christmann (RECSM, GESIS - Leibniz Institute for the Social Sciences)
January 25, 2022 at 13h (CET).
The COVID-19 pandemic has impacted the operations of many survey programs, among them the recruitment for the newly established German panel study FReDA (Family Research and Demographic Analysis). Switching from face-to-face to self-administered mixed modes (web, paper) for the recruitment phase allowed us to experimentally test the effectiveness of different mode choices based on a gross sample of 108,000 register-based addresses. We investigate how different mode-choice strategies affect the response rate, distributions of substantive answers, sample composition, data quality, panel consent, and participation in the subsequent wave.
We implemented three experimental conditions to which individuals were randomly assigned. To contact the target population, FReDA deploys an invitation letter and reminders that either offer an access link/QR code to the web survey (CAWI) or additionally contain the paper-based questionnaire (CAWI/PAPI), in different sequences. Individuals are contacted with either
(1) a concurrent contact strategy in the sequence CAWI/PAPI, CAWI, CAWI/PAPI, or
(2) a sequential contact strategy in the sequence CAWI, CAWI, CAWI/PAPI, or
(3) a sequential contact strategy in the sequence CAWI, CAWI, CAWI, CAWI/PAPI.
By design, the third condition also allows us to simulate and compare how the recruitment would have performed in a web-only mode with one invitation letter and two reminders.
Join this webinar via Zoom:
A New Experiment On The Use Of Images To Answer Web Survey Questions.
Presenter: Oriol Bosch (RECSM-Universitat Pompeu Fabra)
Taking and uploading images may provide richer and more objective information than text-based answers to open-ended survey questions. Thus, recent research has started to explore the use of images to answer web survey questions. However, very little is known yet about the impact of answering with images on four aspects: break-off, item nonresponse, completion time, and question evaluation. Moreover, no research has explored the effect on these four aspects of adding a motivational message encouraging participants to upload images, nor of the device used to participate. This study addresses three research questions: 1. What is the effect of answering web survey questions with images instead of text on these four aspects? 2. What is the effect of including a motivational message on these four aspects? 3. How do PCs and smartphones differ on these four aspects? To answer these questions, we conducted a web survey experiment (N = 3,043) in Germany using an opt-in online access panel. Our target population was the general population aged 18-70 living in Germany. Half of the sample was required to answer with smartphones and the other half with PCs. Within each device group, respondents were randomly assigned to 1) a control group answering open-ended questions with text, 2) a first treatment group answering open-ended questions with images, and 3) a second treatment group answering with images after being prompted with a motivational message. Overall, results show higher break-off and item nonresponse rates, as well as lower question evaluations, for participants answering with images. Motivational messages slightly reduce item nonresponse. Finally, participants completing the survey with a PC present lower break-off rates but higher item nonresponse.
To our knowledge, this is the first study to experimentally investigate the impact of asking respondents to answer open-ended questions with images instead of text on break-off, item nonresponse, completion time, and question evaluation. We also go one step further by exploring 1) how motivational messages may improve respondents' engagement with the survey and 2) the effect of the device used to answer on these four aspects.
October 13, 2020 at 11h via Zoom.
Estimating the size of measurement errors of the “Satisfaction With Democracy” Survey Indicator for different scales, countries and languages.
Presenter: Carlos Poses (RECSM-Universitat Pompeu Fabra)
The Satisfaction With Democracy (SWD) indicator is often used in social research. However, while there is some debate about which concept it measures, discussion of the size of its measurement errors (how well it measures the underlying concept) is scarce. Nonetheless, measurement errors can affect the results and threaten comparisons across studies, scales, countries, and languages. Thus, we estimated the "measurement quality" of the SWD indicator for seven response scales across 38 country-language groups, using three multitrait-multimethod (MTMM) experiments from the European Social Survey. Measurement quality is a statistical measure of how well a question measures the underlying concept of interest; it is the complement of measurement error. Results show that measurement errors explain, on average across countries, from 16% (11-point scale) to 54% (4-point scale) of the variance in the observed responses. We also provide insights to improve questionnaire design, and we evaluate whether standardized relationships of this indicator with other variables can be compared across scales, countries, and languages.
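The definition above (quality as the complement of measurement error) can be written down in a few lines. This is only an illustrative sketch of that arithmetic; the actual estimates in the study come from MTMM models, and the figures below simply mirror the averages quoted in the abstract:

```python
# Illustrative only: measurement quality as the share of observed
# variance NOT due to measurement error, i.e. quality = 1 - error_share.
# This mirrors the definition in the abstract; the real estimates are
# produced by multitrait-multimethod (MTMM) models, not this formula alone.

def measurement_quality(error_variance_share: float) -> float:
    """Return measurement quality given the error-variance share in [0, 1]."""
    if not 0.0 <= error_variance_share <= 1.0:
        raise ValueError("variance share must lie in [0, 1]")
    return 1.0 - error_variance_share

# Average error shares reported in the abstract for the SWD indicator:
quality_11pt = measurement_quality(0.16)  # 11-point scale -> 0.84
quality_4pt = measurement_quality(0.54)   # 4-point scale  -> 0.46
```

Under this definition, the 11-point scale retains the largest share of substantive variance among the scales studied.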
November 17, 2020 at 11h via Zoom.
Title: Linguistic complexity of survey questions
Presenter: Diana Zavala-Rojas (RECSM-Universitat Pompeu Fabra)
A common issue when drafting a survey questionnaire is how to assess whether a survey question is linguistically complex. Almost every manual on questionnaire design emphasizes the need to avoid complex wording. Yet, despite recommending different methodologies for validating questionnaire design, the academic literature provides little practical guidance on how to decide whether a survey item is complex. In this research, we use frameworks that model readability in linguistics and the survey response process to select indicators calculated directly on the survey texts. We use those indicators to estimate how complex a question is and assign a score in the range [-1, 1], where higher values indicate greater complexity, to a corpus of English questions from the European Social Survey.
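The idea of computing indicators directly on the question text and mapping them to a [-1, 1] score can be illustrated with a small sketch. The indicators, weights, and tanh squashing below are all hypothetical choices for illustration, not the authors' actual readability-based model:

```python
import math

# Hypothetical sketch: surface-level complexity indicators computed
# directly on a question's text, combined into a score in [-1, 1].
# Indicator choices and weights are illustrative assumptions only.

def complexity_indicators(question: str) -> dict:
    words = question.split()
    n_words = len(words)
    avg_word_len = sum(len(w.strip(".,?;:")) for w in words) / max(n_words, 1)
    # Crude proxy for syntactic complexity: count common subordinators.
    subordination = sum(question.lower().count(s) for s in (" which ", " that ", " if "))
    return {"n_words": n_words, "avg_word_len": avg_word_len, "subordination": subordination}

def complexity_score(question: str, weights=(0.02, 0.15, 0.25)) -> float:
    ind = complexity_indicators(question)
    raw = (weights[0] * ind["n_words"]
           + weights[1] * ind["avg_word_len"]
           + weights[2] * ind["subordination"])
    # Squash the raw score into [-1, 1]; tanh is an illustrative choice.
    return math.tanh(raw - 1.0)
```

With this sketch, a short direct question scores lower than a long one containing several subordinate clauses, matching the intuition that higher values indicate greater complexity.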
December 15, 2020 at 11h via Zoom.
Title: Open question formats: Comparing the suitability of requests for text and voice answers in smartphone surveys.
Presenter: Jan Karem Höhne (University of Mannheim, RECSM-Universitat Pompeu Fabra).
Article co-authored with:
Annelies Blom (University of Mannheim)
Konstantin Gavras (University of Mannheim)
Melanie Revilla (RECSM-Universitat Pompeu Fabra)
Leonie Rettig (University of Mannheim)
Title: Are you paying for or with quality? Survey participation due to monetary incentives and measurement quality – Evidence from the GESIS Panel.
Presenter: Hannah Schwarz (RECSM-Universitat Pompeu Fabra)
In times of decreasing response rates, monetary incentives are increasingly used to motivate individuals to participate in surveys. Receiving an incentive can affect respondents' motivation to take a survey and, consequently, their survey-taking behaviour. On the one hand, the resulting extrinsic motivation might undermine intrinsic motivation, leading respondents to invest less effort in answering a survey. On the other hand, monetary incentives could make respondents more eager to invest effort, as they feel they are being compensated for doing so. This study assesses whether there are differences in measurement quality between respondents who are motivated to take surveys by the incentive received and respondents for whom this is not a reason for participation. We implemented two multitrait-multimethod (MTMM) experiments in the probability-based GESIS Panel in Germany (2019) to estimate the measurement quality of 18 questions asked to panelists. By coding panelists' open answers to a question about their reasons for participation, we distinguish panelists who state that they are motivated by the incentive from those who do not. We analyse the MTMM experiments for these two groups separately and compare the resulting measurement quality estimates.
February 23, 2021 at 11h via Zoom.
Title: Affective polarization: its measurement in multi-party contexts and its relationship with ideology.
Affective polarization broadly refers to the extent to which individuals feel sympathy towards in-groups and antagonism towards out-groups. While this topic has been extensively studied in the United States, affective polarization has increasingly received comparative attention in an attempt to study the phenomenon in multi-party settings. In the first part of the presentation, we review some of the main indices proposed in the literature to measure affective polarization, and we explain the ones we have implemented using different datasets (CNEP, CSES, E-DEM). In the second part, we present a paper focused on the relationship between ideology and affective polarization. Concretely, we test the predominance of identity over issues in explaining affective polarization in a multi-party system, taking advantage of an original panel dataset (E-DEM, 2018-2019) collected in Spain. The main results show that ideological identity and affective polarization strongly reinforce each other over time, polarizing society in identity terms but not so much through conflicts emerging from issue positioning and sorting. Issue-based ideology exerts more modest affectively polarizing effects, and only among individuals whose positions on concrete issues are closely aligned with their ideological identity.
March 16, 2021 at 13h (CET) via Zoom.
Title: [MCSQ]: The Multilingual Corpus of Survey Questionnaires.
Presenter: Danielly Sorato (RECSM-Universitat Pompeu Fabra)
The MCSQ was developed in the SSHOC (Social Sciences & Humanities Open Cloud) Project. It forms part of the EU Horizon 2020 Research and Innovation Programme (2014-2020) and is conducted under Grant Agreement No. 823782.
The digitized survey items are a valuable resource for survey research, translation studies, and lexicology, among other fields. In this seminar, we present the corpus characteristics and showcase applications of the MCSQ.
April 6, 2021 at 15h (CET) via Zoom.
Title: Mobile optimization strategies: effects on web survey participation.
Presenter: Marc Asensio (RECSM-Universitat Pompeu Fabra)
May 18, 2021 at 11h via Zoom.
Title: Willingness to participate in in-the-moment surveys triggered by online behaviours
Presenter: Carlos Ochoa (RECSM-Universitat Pompeu Fabra)
Surveys are a fundamental tool of empirical research, but they suffer from errors: in particular, respondents can have difficulties recalling information of interest to researchers. Recent technological developments offer new opportunities to collect data passively (i.e., without participants' intervention), avoiding recall errors. Registering online behaviours (e.g., visited URLs) by means of 'meter' software voluntarily installed by a sample of individuals on their browsing devices is one of these opportunities. However, metered data is also affected by errors and cannot cover all the information of interest. Asking participants about such missing information through web surveys conducted at the precise moment an event of interest is detected by the meter has the potential to fill this gap. However, this method requires participants to be willing to take part.
In this webinar, the results of recent research on willingness to participate in in-the-moment web surveys triggered by metered data will be presented. A conjoint experiment implemented in an opt-in metered panel in Spain (N = 804) revealed overall high levels of willingness to participate, ranging from 69% to 95% depending on the conditions offered to participants. The main aspects affecting this willingness relate to the incentives offered. Differences across participants were observed by household size, education, and personality traits. Answers to open questions also confirmed that the incentive is the key driver of the decision to participate, while other potentially problematic aspects, such as the limited time to participate, privacy concerns, and discomfort caused by being interrupted, play a limited role.
Finally, participants were also asked about their preferred method of being invited to in-the-moment surveys. The results showed that panelists are willing to accept several invitation methods, with smartphone-based methods obtaining the highest levels of acceptance and coverage, as well as being the ones most panelists say they would notice first.
November 23, 2021 at 12h via Zoom.
Title: Adjusting to the survey: How interviewer experience relates to interview duration
Presenter: André Pirralha (RECSM-Universitat Pompeu Fabra)
Interviewers are important actors in telephone surveys. Several studies have shown interviewers' importance in determining the interview pace and managing the effort respondents dedicate to answering. On the other hand, we also know that interviewers are very heterogeneous regarding the duration of the interviews and that the time dedicated to each interview tends to shorten over the course of fieldwork. While several hypotheses have been discussed in the literature, it is often argued that interviewers show a learning effect and optimize survey administration as they gain within-survey experience.
This paper examines the relationship between general survey experience, within-survey experience, and interview duration using data from wave 1 of the parents' Computer-Assisted Telephone Interviews from the National Educational Panel Study (NEPS), Starting Cohort Grade 9. We employ multilevel models that show considerable influence of the interviewers on the interview duration and find that interview duration decreases as within-survey experience increases. This effect is robust even after controlling for various interviewer, respondent, and interview characteristics.
December 14, 2021 at 12h (CET) via Zoom.