
Upcoming RECSM Webinars

03.12.2020

 

We are pleased to announce the upcoming RECSM webinars.

 

Upcoming webinars 

March 31, 2022, at 2pm (online)

“What do I do with these images?”: A practical guide to the classification of images sent by survey participants

Presenter: Patricia A. Iglesias (RECSM)

Authors: Patricia A. Iglesias, Carlos Ochoa and Melanie Revilla

Abstract:

Requesting images from survey respondents is a practice that has gained traction in recent years. Although this new data collection strategy may offer many advantages, it requires researchers to know how to process and analyze this new type of data, an expertise that is not yet widespread among survey practitioners.

 

This webinar aims to provide guidance for researchers inexperienced in image analysis on the main concepts involved in classifying images and on the options available for carrying out such an analysis. Furthermore, we will present the main factors that researchers should take into account when deciding how to classify images, focusing on how these factors should be assessed in order to choose the most suitable classification method. Practical examples will be reviewed to illustrate how to deal with these factors and the decision process.

All these elements should help survey practitioners interested in requesting images to decide whether they are in a position to analyze them and, if so, which option is most suitable for them.
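Purely as a flavour of what one automated option can look like, here is a minimal sketch in Python, assuming torchvision is installed, that labels a submitted image with an off-the-shelf ImageNet classifier. The model choice, file name, and label set are illustrative assumptions, not the methods discussed in the webinar.

```python
# Minimal sketch: automatically labelling a respondent-submitted image
# with an off-the-shelf ImageNet classifier. Illustrative only: the
# model, file name, and label set are assumptions, and this is just one
# of the classification options the webinar weighs against manual coding.
import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT            # pretrained ImageNet weights
model = resnet18(weights=weights).eval()      # inference mode
preprocess = weights.transforms()             # matching input pipeline

img = Image.open("respondent_photo.jpg").convert("RGB")  # hypothetical upload
batch = preprocess(img).unsqueeze(0)          # shape: (1, 3, H, W)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top_prob, top_idx = probs.topk(1)
label = weights.meta["categories"][top_idx.item()]
print(f"{label}: {top_prob.item():.2%}")
```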

Join via Zoom (Meeting ID: 838 8790 5779):

https://upf-edu.zoom.us/j/83887905779

 

Concurrent, Sequential or Web-Only? Evidence from a mixed-mode recruitment experiment in FReDA

Presenter: Pablo Christmann (RECSM, GESIS - Leibniz Institute for the Social Sciences)

January 25, 2022 at 13h (CET).

Abstract: 

The COVID-19 pandemic has affected the operations of many survey programs, among them the recruitment for the newly established German panel study FReDA (Family Research and Demographic Analysis). Switching from face-to-face to self-administered mixed modes (web, paper) for the recruitment phase allowed us to experimentally test the effectiveness of different mode choices based on a gross sample of 108,000 register-based addresses. We investigate how different mode choice strategies affect the response rate, the distributions of substantive answers, sample composition, data quality, panel consent, and participation in the subsequent wave.

We implemented three experimental conditions to which individuals were assigned randomly. FReDA contacts the target population with an invitation letter and reminders that either offer an access link/QR code to the web survey (CAWI) or additionally contain the paper questionnaire alongside the access link/QR code (CAWI/PAPI), in different sequences. Individuals are contacted with one of:

(1) a concurrent contact strategy in the sequence CAWI/PAPI, CAWI, CAWI/PAPI;

(2) a sequential contact strategy in the sequence CAWI, CAWI, CAWI/PAPI; or

(3) a sequential contact strategy in the sequence CAWI, CAWI, CAWI, CAWI/PAPI.

By design, the third condition also allows us to simulate and compare how the recruitment would have performed in a web-only mode with one invitation letter and two reminders.


Join this webinar via Zoom:

https://upf-edu.zoom.us/j/81189909943

 

Past webinars

A New Experiment on the Use of Images to Answer Web Survey Questions

Presenter: Oriol Bosch (RECSM-Universitat Pompeu Fabra)

Abstract:

Taking and uploading images may provide richer and more objective information than text-based answers to open-ended survey questions. Thus, recent research has started to explore the use of images to answer web survey questions. However, very little is known yet about the use of images to answer web survey questions and its impact on four aspects: break-off, item nonresponse, completion time, and question evaluation. Moreover, no research has explored the effect on these four aspects of adding a motivational message encouraging participants to upload images, or of the device used to participate.

This study addresses three research questions: (1) What is the effect of answering web survey questions with images instead of text on these four aspects? (2) What is the effect of including a motivational message on these four aspects? (3) How do PCs and smartphones differ on these four aspects?

To answer these questions, we conducted a web survey experiment (N = 3,043) in Germany using an opt-in access online panel. Our target population was the general population aged 18-70 living in Germany. Half of the sample was required to answer with smartphones and the other half with PCs. Within each device group, respondents were randomly assigned to (1) a control group answering open-ended questions with text, (2) a first treatment group answering open-ended questions with images, and (3) a second treatment group answering with images but prompted with a motivational message.

Overall, results show higher break-off and item nonresponse rates, as well as lower question evaluations, for participants answering with images. Motivational messages slightly reduce item nonresponse. Finally, participants completing the survey with a PC present lower break-off rates but higher item nonresponse. To our knowledge, this is the first study that experimentally investigates the impact of asking respondents to answer open-ended questions with images instead of text on break-off, item nonresponse, completion time, and question evaluation. We also go one step further by exploring (1) how motivational messages may improve respondents' engagement with the survey and (2) the effect of the device used to answer on these four aspects.

October 13, 2020 at 11h via Zoom.

 

Estimating the size of measurement errors of the “Satisfaction With Democracy” Survey Indicator for different scales, countries and languages.

Presenter: Carlos Poses (RECSM-Universitat Pompeu Fabra)

Abstract:

The Satisfaction With Democracy (SWD) indicator is often used in social research. However, while there is some debate about which concept it measures, discussion about the size of its measurement errors (i.e., how well it measures the underlying concept) is scarce. Nonetheless, measurement errors can affect results and threaten comparisons across studies, scales, countries, and languages. Thus, we estimated the measurement quality of the SWD indicator for seven response scales across 38 country-language groups, using three multitrait-multimethod (MTMM) experiments from the European Social Survey. Measurement quality is a statistical measure of how well a question measures the underlying concept of interest, and it is the complement of measurement error. Results show that measurement errors explain, on average across countries, from 16% (11-point scale) to 54% (4-point scale) of the variance in the observed responses. We also provide insights to improve questionnaire design and evaluate whether standardized relationships of this indicator with other variables can be compared across scales, countries, and languages.
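In these terms, measurement quality is simply the share of observed variance not attributable to measurement error; a small worked illustration with the averages quoted above (notation assumed for illustration):

```latex
% Measurement quality as the complement of the error share of the
% observed variance (notation assumed for illustration):
q^2 \,=\, 1 - \frac{\operatorname{Var}(\text{error})}{\operatorname{Var}(\text{observed})}

% Worked with the averages quoted in the abstract:
% 11-point scale: q^2 = 1 - 0.16 = 0.84
% 4-point scale:  q^2 = 1 - 0.54 = 0.46
```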

November 17, 2020 at 11h via Zoom.

 

Title: Linguistic complexity of survey questions

Presenter: Diana Zavala-Rojas (RECSM-Universitat Pompeu Fabra)

Article co-authored with Toni Badia and Carme Colominas (GliCom - UPF) 
 

Abstract:

A common issue when drafting a survey questionnaire is how to assess whether a survey question is linguistically complex. Almost every manual on questionnaire design emphasizes the need to avoid complex wording. Yet, despite recommending different methodologies to validate the design of survey questionnaires, the academic literature provides little practical information on how to decide whether a survey item is complex. In this research, we use frameworks that model readability in linguistics and the response process to surveys to select indicators calculated directly on the survey texts. We use those indicators to estimate how complex a question is and assign a score in the range [-1, 1], where higher values indicate greater complexity, to a corpus of questions in English from the European Social Survey.
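As a toy illustration of computing indicators directly on question texts, the sketch below derives two classic readability indicators (mean sentence length and mean word length) and min-max rescales a combined score to [-1, 1]. The indicator set, weights, and calibration bounds are assumptions for illustration, not the ones used in the article.

```python
# Toy sketch: text-based complexity indicators rescaled to [-1, 1].
# The two indicators, their equal weights, and the min-max calibration
# bounds are illustrative assumptions, not the article's indicator set.
import re

def complexity_score(question: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", question) if s.strip()]
    words = re.findall(r"[A-Za-z']+", question)
    mean_sent_len = len(words) / max(len(sentences), 1)         # words/sentence
    mean_word_len = sum(map(len, words)) / max(len(words), 1)   # chars/word

    # Combine indicators into a raw score, then rescale to [-1, 1]
    # using assumed bounds from a hypothetical calibration corpus.
    raw = 0.5 * mean_sent_len + 0.5 * mean_word_len
    lo, hi = 3.0, 20.0
    scaled = 2 * (raw - lo) / (hi - lo) - 1
    return max(-1.0, min(1.0, scaled))

print(complexity_score("All things considered, how satisfied are you "
                       "with the way democracy works in your country?"))
```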

December 15, 2020 at 11h via Zoom.

 

Title: Open question formats: Comparing the suitability of requests for text and voice answers in smartphone surveys.

Presenter: Jan Karem Höhne (University of Mannheim, RECSM-Universitat Pompeu Fabra)

Article co-authored with:

Annelies Blom (University of Mannheim)

Konstantin Gavras (University of Mannheim)

Melanie Revilla (RECSM-Universitat Pompeu Fabra)

Leonie Rettig (University of Mannheim)

Abstract:

While surveys provide important standardized data about the population with large samples, they are limited regarding the depth of the data provided. Although surveys can offer open answer formats, the completeness of and detail provided in these formats is often limited, particularly in self-administered web surveys, for several reasons. On the one hand, respondents find it difficult to express their attitudes in open answer formats by keying in their answers; they also keep their answers short or skip such questions altogether. On the other hand, survey designers seldom encourage respondents to elaborate on their open answers, because the ensuing coding and analysis have long been conducted manually, making the process time-consuming and expensive and reducing the attractiveness of such formats.

However, technological developments for surveys on mobile devices, particularly smartphones, make it possible to collect voice instead of text answers, which may facilitate answering questions with open answer formats and provide richer data. Additionally, new developments in automated speech-to-text transcription and in text coding and analysis allow the proper handling of open answers from large-scale surveys. Given these new research opportunities, we address the following research question: How do requests for voice answers, compared to requests for text answers, affect response behavior and survey evaluations in smartphone surveys?

We conducted an experiment in a smartphone survey (N = 2,400) using the opt-in Omninet Panel (Forsa) in Germany in December 2019 and January 2020. From its panel, Forsa drew a quota sample based on age, education, gender, and region (East and West Germany) to match the German population on these demographic characteristics. To collect respondents' voice answers, we developed the JavaScript- and PHP-based "SurveyVoice (SVoice)" tool, which records voice answers via the microphone of smartphones. We randomly assigned respondents to answer format conditions (i.e., text or voice) and asked them six questions dealing with the perception of the most important problem in Germany as well as attitudes towards the current German Chancellor and several German political parties.

In this study, we compare requests for text and voice answers in smartphone surveys with respect to several aspects. First, we investigate item nonresponse (i.e., item missing data) as an indicator of low data quality. Second, we investigate response times (i.e., the time elapsing between the presentation of the question on the screen and the submission of the survey page) as an indicator of respondent burden. Finally, we investigate respondents' survey evaluations (i.e., the levels of interest and difficulty stated by respondents) as an indicator of survey satisfaction. This experiment aims to test the feasibility of collecting voice answers to open-ended questions as an alternative data source in contemporary smartphone surveys. In addition, it explores whether and to what extent voice answers collected through built-in microphones, compared to open answers entered via the smartphone keyboard, represent a sound methodological substitute.
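As a flavour of the speech-to-text step mentioned above, here is a minimal sketch assuming the third-party Python package SpeechRecognition and an already recorded WAV file; the file name and language setting are illustrative assumptions, and this is not the SVoice tool itself.

```python
# Minimal sketch: transcribing a recorded voice answer to text with the
# third-party SpeechRecognition package (pip install SpeechRecognition).
# The file name and language are illustrative assumptions; this is not
# the SurveyVoice (SVoice) tool described in the abstract.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("voice_answer.wav") as source:   # hypothetical recording
    audio = recognizer.record(source)              # read the whole file

try:
    # Free Google Web Speech endpoint; German, matching the study setting.
    text = recognizer.recognize_google(audio, language="de-DE")
    print(text)
except sr.UnknownValueError:
    print("Speech was unintelligible.")
except sr.RequestError as err:
    print(f"Transcription service unavailable: {err}")
```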

January 19, 2021 at 11h via Zoom.

 

Title: Are you paying for or with quality? Survey participation due to monetary incentives and measurement quality – Evidence from the GESIS Panel.

Presenter: Hannah Schwarz (RECSM-Universitat Pompeu Fabra)

Abstract:

In times of decreasing response rates, monetary incentives are increasingly used to motivate individuals to participate in surveys. Receiving an incentive can affect respondents' motivation to take a survey and, consequently, their survey-taking behaviour. On the one hand, the resulting extrinsic motivation might undermine intrinsic motivation, leading respondents to invest less effort in answering a survey. On the other hand, monetary incentives could make respondents more eager to invest effort in answering a survey, as they feel they are being compensated for doing so. This study aims to assess whether there are differences in measurement quality between respondents who are motivated to take surveys by the incentive received and respondents for whom this is not a reason for participation. We implemented two multitrait-multimethod (MTMM) experiments in the probability-based GESIS Panel in Germany (2019) to estimate the measurement quality of 18 questions asked of panelists. By coding panelists' open answers to a question about their reasons for participation, we distinguish panelists who state that they are motivated by the incentive from those who do not. We analyse the MTMM experiments for these two groups separately and compare the resulting measurement quality estimates.

February 23, 2021 at 11h via Zoom.

 

Title: Affective polarization: its measurement in multi-party contexts and its relationship with ideology.

Presenters: Josep Maria Comellas (RECSM - Universitat Pompeu Fabra) and Mariano Torcal (RECSM Director - Universitat Pompeu Fabra)

Abstract:

Affective polarization broadly refers to the extent to which individuals feel sympathy towards in-groups and antagonism towards out-groups. While this topic has been extensively studied in the United States, affective polarization has increasingly received comparative attention in an attempt to study the phenomenon in multi-party settings. In the first part of the presentation, we review some of the main indices proposed in the literature to measure affective polarization and explain the ones we have implemented using different datasets (CNEP, CSES, E-DEM). In the second part, we present a paper focused on the relationship between ideology and affective polarization. Concretely, we test the predominance of identity over issues in explaining affective polarization in a multi-party system, taking advantage of an original panel dataset (E-DEM, 2018-2019) collected in Spain. The main results show that ideological identity and affective polarization strongly reinforce each other over time, polarizing society in identity terms but not so much through conflicts emerging from issue positioning and sorting. Issue-based ideology exerts more modest affective polarizing effects, and only among those individuals whose positions on concrete issues are closely in line with their ideological identity.

March 16, 2021 at 13h (CET) via Zoom.

 

Title: [MCSQ]: The Multilingual Corpus of Survey Questionnaires.

Presenter: Danielly Sorato (RECSM-Universitat Pompeu Fabra)

Abstract: 
The Multilingual Corpus of Survey Questionnaires (MCSQ) is the first publicly available corpus of international survey questionnaires, comprising survey items from the European Social Survey (ESS), the European Values Study (EVS), and the Survey of Health, Ageing and Retirement in Europe (SHARE). The recently released Version 2.0 (entitled Mileva Marić-Einstein) comprises questionnaires from these studies in the (British) English source language and their translations into eight languages, namely Catalan, Czech, French, German, Norwegian, Portuguese, Spanish, and Russian, as well as 29 language varieties (e.g., Swiss French). The MCSQ is a relevant digital artefact that allows researchers in the social sciences and linguistics to quickly search and compare survey items in a myriad of languages.

The MCSQ was developed in the SSHOC (Social Sciences & Humanities Open Cloud) project, which forms part of the EU Horizon 2020 Research and Innovation Programme (2014-2020) and is conducted under Grant Agreement No. 823782.

The digitalized survey items are a valuable resource for survey research, translation studies, and lexicology, among other fields. In this seminar, we present the characteristics of the corpus and showcase applications of the MCSQ.
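As a flavour of the search-and-compare use the corpus enables, the sketch below assumes a hypothetical tabular export of the corpus; the schema, file name, and item identifier are assumptions, not the actual MCSQ interface.

```python
# Toy sketch: comparing translations of one survey item across languages,
# assuming a hypothetical CSV export of the corpus with columns
# study, language, item_name, and item_text (schema and item identifier
# are assumptions, not the actual MCSQ interface).
import pandas as pd

corpus = pd.read_csv("mcsq_export.csv")  # hypothetical export file

# All language versions of one (illustrative) ESS item, side by side.
item = corpus[(corpus["study"] == "ESS") & (corpus["item_name"] == "B27")]
for _, row in item.iterrows():
    print(f'{row["language"]:>12}: {row["item_text"]}')
```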
 

April 6, 2021 at 15h (CET) via Zoom.

 

Title: Mobile optimization strategies: effects on web survey participation.

Presenter: Marc Asensio (RECSM-Universitat Pompeu Fabra)

Abstract: 
In recent years, smartphone penetration has continuously increased around the world, and a growing part of the population relies exclusively on smartphones to access the internet. Coverage rates for web surveys are better than ever, but to maximize response rates, efforts must be made to adapt survey designs to accommodate 'smartphone-dependent' participants. Furthermore, to capitalize on the new data collection opportunities offered by mobile devices, there is an interest in actively encouraging and normalizing responding to surveys on mobile devices in the general population. A range of mobile optimization strategies is available for this purpose, but not much is known about their relative effectiveness and their impact on survey costs and errors. We address this question in the present study through a comparison of three strategies to optimize the mobile device experience in web surveys, implemented in a probability-based, three-wave election study conducted in Switzerland in 2019: (1) the standard approach of providing a URL to a browser-based survey and optimizing the display of the questionnaire on smartphones (N = 8,000); (2) adapting the invitation to promote mobile response and providing a QR code to access the survey (N = 1,088); and (3) providing a QR code to download a smartphone application and participate through it (N = 1,087). We compare the three mobile optimization strategies to assess their relative impact on (a) response rates and sample composition, overall and on mobile devices; (b) estimates for target survey variables; and (c) the progression of fieldwork, in order to draw conclusions about which strategy provides the best balance between cost efficiency and representation of the target population.
 

May 18, 2021 at 11h via Zoom.

 

Title: Willingness to participate in in-the-moment surveys triggered by online behaviours

Presenter: Carlos Ochoa (RECSM-Universitat Pompeu Fabra)

Abstract: 

Surveys are a fundamental tool of empirical research, but they suffer from errors: in particular, respondents can have difficulties recalling information of interest to researchers. Recent technological developments offer new opportunities to collect data passively (i.e., without the participant's intervention), avoiding recall errors. Registering online behaviours (e.g., visited URLs) by means of 'meter' software voluntarily installed by a sample of individuals on their browsing devices is one of these opportunities. However, metered data are also affected by errors and cannot cover all the information of interest. Asking participants about the missing information by means of web surveys conducted at the precise moment an event of interest is detected by the meter has the potential to fill this gap. However, this method requires participants to be willing to participate.

In this webinar, the results of recent research on the willingness to participate in in-the-moment web surveys triggered by metered data will be presented. A conjoint experiment implemented in an opt-in metered panel in Spain (N=804) revealed overall high levels of willingness to participate, ranging from 69% to 95%, depending on the conditions offered to participants. The main aspects affecting this willingness are related to the incentives offered. Differences across participants were observed for household size, education, and personality traits. Answers to open questions also confirmed that the incentive is the key driver to decide to participate, while other potential problematic aspects such as the limited time to participate, privacy concerns, and discomfort caused by being interrupted play a limited role.

Finally, participants were also asked about their preferences regarding the method used to invite them to in-the-moment surveys. The results showed that panelists are willing to accept several invitation methods; those using smartphones obtained the highest levels of acceptance and coverage, and were also the methods most panelists said they would notice first.

November 23, 2021 at 12h via Zoom.

 

 

Title: Adjusting to the survey: How interviewer experience relates to interview duration

Presenter: André Pirralha (RECSM-Universitat Pompeu Fabra)

Abstract: 

Interviewers are important actors in telephone surveys. Several studies have shown the importance of interviewers in determining the pace of the interview and in managing the effort respondents dedicate to answering. At the same time, we know that interviewers are very heterogeneous regarding interview duration and that the time dedicated to each interview tends to shorten over the course of fieldwork. While several hypotheses have been discussed in the literature, it is often argued that interviewers show a learning effect and optimize survey administration as they gain within-survey experience.

This paper examines the relationship between general survey experience, within-survey experience, and interview duration, using data from wave 1 of the parents' computer-assisted telephone interviews of the National Educational Panel Study (NEPS), Starting Cohort Grade 9. We employ multilevel models that show a considerable influence of interviewers on interview duration and find that interview duration decreases as within-survey experience increases. This effect is robust even after controlling for various interviewer, respondent, and interview characteristics.
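For readers curious what such a specification can look like in practice, here is a minimal sketch of a two-level model with a random intercept per interviewer, using statsmodels; the variable names and data file are assumptions for illustration, not the NEPS data or the authors' exact models.

```python
# Minimal sketch: a two-level model of interview duration with a random
# intercept per interviewer, in the spirit of the models described above.
# Column names and the data file are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("interviews.csv")  # hypothetical interview-level data

# Fixed effects for experience; observations grouped by interviewer.
model = smf.mixedlm(
    "duration_minutes ~ within_survey_experience + general_experience",
    data=df,
    groups=df["interviewer_id"],
)
result = model.fit()
print(result.summary())
```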

December 14, 2021 at 12h (CET) via Zoom.

 
 
