New research report examines predictors of trust in governmental health communication on Covid-19

David Schieferdecker, Natalie Berger, and I published a new research report in the European Journal of Health Communication (available in open access here). Using representative survey data from Germany, our study reveals significant shifts in public trust in the Covid-19 information provided by the government. Initially, trust levels were high, but they declined sharply as the pandemic progressed and never fully returned to their early heights. This drop may reflect the fading of a “rally around the flag” effect, where initial crises spur trust in governmental action. As people grew accustomed to the pandemic, trust waned and frustration grew with changing policies such as lockdowns and slow vaccine rollouts (see Figure 1). Alarmingly, by the second year of the pandemic, less than 50% of respondents trusted the government’s health information. This has serious implications for the effectiveness of public health campaigns, as trust is critical for public engagement with preventive measures.

Figure 1. Average levels of trust in Covid-19-related information by the federal government in Germany, December 2020 to September 2021. Note: The figure shows the mean level of trust by wave with 95% confidence intervals in brackets. License: CC BY 4.0.

We found that older individuals and those more afraid of contracting Covid-19 tended to trust government information more. However, merely being part of a vulnerable group due to health conditions did not always translate into higher trust levels, pointing to the complexity of risk perception (see Figure 2). The study also highlights how political ideology influenced trust levels. Supporters of the right-wing populist AfD party exhibited the lowest levels of trust, reflecting a broader trend where populist voters, often skeptical of political elites, extend this distrust to health information.

Figure 2. Models of trust in Covid-19-related information by the government. Note: The figure shows coefficient estimates with 95% confidence intervals for two multilevel linear models with random intercepts for participants. The moderation model includes an interaction term between vote choice and fear of infection. Reference categories: Gender = Male, Age = 35-64 years, Education = Low (below pre-university education), Risk group member = No, Vote choice = Established parties. Wave is coded from (1) December 2020 to (4) September 2021; household income is coded from (1) under EUR 500 to (12) EUR 10,000 or more; fear of infection is coded from (1) not afraid at all to (4) very afraid. License: CC BY 4.0.

Finally, fear of infection had a stronger impact on trust in governmental communication among AfD supporters than among voters of established parties (see Figure 3). While voters of established parties generally trusted government information regardless of personal fear, AfD supporters who feared the virus were more likely to rely on it. We suggest that fearful AfD supporters may have turned to government information because their typical media sources might not have offered adequate guidance on managing risks.

Figure 3. Adjusted predictions for voters of different parties with different levels of fear of infection. Note: The figure shows predicted values for trust in information from the government by different levels of fear of infection for voters of different parties, with 95% confidence intervals. License: CC BY 4.0.

We further discuss the implications of these findings and make recommendations at the end of the paper.

How to cite

Joly, P., Schieferdecker, D., & Berger, N. (2024). Trust in Governmental Health Communication on Covid-19: Does Vulnerability Moderate the Effect of Partisanship? European Journal of Health Communication, 5(3), 19-32. https://doi.org/10.47368/ejhc.2024.302

Major changes in btmembers, an R package to import data on all members of the Bundestag since 1949

With the upcoming German federal elections, I decided to make important changes to btmembers, my R package to import data on all members of the Bundestag since 1949.

Current composition of the German Bundestag

You can find more information about btmembers here. The CSV data is available here and the codebook here.

Version 0.1.0 changes the default behavior of the function import_members().

  • By default, import_members() now returns a list containing four data frames (namen, bio, wp, and inst), which together preserve all the information contained in the XML file provided by the Bundestag.
  • If import_members() is called with the argument condensed_df = TRUE, the function returns a condensed data frame in which each row corresponds to a member-term. Most of the information in the original data is preserved, except that only the most recent name of each member is retained and institutions are dropped. A new column named fraktion is added to the data: a recoded variable indicating the faction in which the member spent the most time during a given parliamentary term.
  • The performance of import_members() has been improved through the integration of tidyr’s unnest functions.
  • The package no longer comes preloaded with the data; instead, the pre-processed data is stored on GitHub. This facilitates updates and will make the integration of GitHub Actions possible in the future.
  • update_available() has been deprecated.

These changes let users reorganize the data as they wish and make the package faster and more robust.
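As a quick sketch of the new interface (the GitHub repository path in the comment is an assumption, not confirmed by the post):

```r
# Install from GitHub if needed (repository path is an assumption):
# remotes::install_github("jolyphil/btmembers")
library(btmembers)

# Default: a list of four data frames (namen, bio, wp, inst) that together
# preserve all the information in the XML file provided by the Bundestag
members <- import_members()
names(members)

# Alternative: a single condensed data frame, one row per member-term,
# including the recoded fraktion variable
members_df <- import_members(condensed_df = TRUE)
```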

Plot the mean and confidence interval of a variable across multiple groups using Stata

Stata offers many options for graphing certain statistics (e.g. dot charts). These options, however, do not always work well for comparing statistics across groups. To address this, I am sharing a program called plotmean, which graphs the mean and confidence interval of a variable across multiple groups.

Running this do-file will generate the following graph:

The program relies on the statsby function and can be easily modified to plot all sorts of statistics.
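For readers working in R rather than Stata, the same idea (group means with 95% confidence intervals) can be sketched in base R. This is an illustration on the built-in iris data, not a port of plotmean:

```r
# Mean and 95% CI of a variable across groups, base R only
ci_by_group <- function(x, g) {
  stats <- t(sapply(split(x, g), function(v) {
    m  <- mean(v)
    se <- sd(v) / sqrt(length(v))
    c(mean = m, lower = m - 1.96 * se, upper = m + 1.96 * se)
  }))
  as.data.frame(stats)
}

res <- ci_by_group(iris$Sepal.Length, iris$Species)

# Points for the means, vertical segments for the intervals
plot(seq_len(nrow(res)), res$mean,
     ylim = range(res$lower, res$upper),
     xaxt = "n", xlab = "", ylab = "Mean and 95% CI", pch = 19)
axis(1, at = seq_len(nrow(res)), labels = rownames(res))
segments(seq_len(nrow(res)), res$lower, seq_len(nrow(res)), res$upper)
```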

New preprint: “Transition Spillovers? The Protest Behaviour of the 1989 Generation in Europe”

My latest paper, entitled “Transition Spillovers? The Protest Behaviour of the 1989 Generation in Europe”, is now available on SocArXiv.

The paper is available here and the replication materials are available here.

This paper re-examines the well-documented gap in political participation between citizens of Western Europe and citizens of Central and Eastern Europe. Building on political socialization theory, it explores whether the participation deficit in post-communist countries is moderated by previous experiences of mobilization. The study focuses on the protest behaviour of the 1989 generation, composed of citizens who reached political maturity during the collapse of communism.

RQDA: How an open source alternative to ATLAS.ti, MAXQDA, and NVivo opens new possibilities for qualitative data analysis

Coding CVs of members of the Bundestag using RQDA

At the WZB, as part of a project on the AfD (the new radical right party in Germany), I recently had to analyze CVs of Members of the Bundestag. The idea was to automatically download the MPs’ profiles from the Bundestag website using web scraping techniques and then to describe the social structure of the AfD faction using quantitative and qualitative methods. I was particularly interested in the prior political experience of the AfD representatives. Extracting this type of information automatically is difficult, so I opted to code some of the material manually.

I was looking for a tool to work with the data collected. In social sciences, ATLAS.ti, MAXQDA, and NVivo are the most commonly used programs to analyze qualitative data. Yet, these programs are expensive and not everyone is able to afford a license. Also, I simply did not need all the bells and whistles offered by these tools (I suspect I am not the only one in this situation).

The essence of qualitative data analysis (QDA) is to annotate text using relevant codes. Think of computer-assisted qualitative data analysis software (CAQDAS) as a super highlighter (your brain still does the hard work). The rest – extracting content from PDF files, combining codes, visualizing them, etc. – can be performed by other programs. These functions do not necessarily have to be bundled with QDA programs.

I took a look at RQDA, a free, light, and open-source CAQDAS built on top of R. RQDA was designed by Ronggui Huang from Fudan University, Shanghai, and has been used in a number of publications. The package is still in development (current version 0.3-1) and some bugs are apparent. Yet, RQDA gets the essentials right. It allows you to import text files (in many languages), code them using a graphical interface, and store your codes (and their meta-information) in a usable format.

What I find particularly exciting about RQDA is that it lets you use the powerful machinery of R. Compared to other programs that work with closed software environments, RQDA is highly expandable. Think of all the R packages available to manipulate, analyze, and visualize textual data. Combining qualitative and quantitative data is also really easy, which makes RQDA a very good tool for mixed methods.
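A minimal sketch of this workflow, assuming a coded RQDA project already exists (the project file name is hypothetical; openProject() and getCodingTable() are part of RQDA's interface):

```r
library(RQDA)

# Open an existing RQDA project (the file name is hypothetical)
openProject("bundestag_cvs.rqda")

# Export all codings to a regular data frame for further analysis
codings <- getCodingTable()

# From here, the usual R machinery applies,
# e.g. counting codings per code
table(codings$codename)
```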

Most importantly, since RQDA is free and open-source, anyone with Internet access can download R and RQDA and reanalyze coded texts. Sometimes qualitative data contains sensitive information and it is not advisable to share it. Often, however, scholars analyze data that is already public (as I do). In this case, it might be interesting to put your coding schemes online.

Researchers usually agree that quantitative methods should be reproducible. This means that it should be possible to reproduce the findings of a publication (tables and figures) by re-running code on the raw data. I argue that qualitative research, when it does not use sensitive data, should be traceable, in the sense that others should have the possibility to go back to the source texts, examine the context, and reinterpret the coding. Simply by being free and open source, RQDA facilitates the diffusion and reuse of qualitative material and makes qualitative research more traceable.

There are good RQDA tutorials online, especially the YouTube series prepared by Metin Caliskan, in French and English (see also Chandra and Shang 2017; Estrada 2017). I learned a lot from these demonstrations and made good progress with the coding of the CVs of the members of the Bundestag. I am really satisfied with RQDA and, for the moment, do not feel the need to move to proprietary software.

New vignette: How to apply Oesch’s class schema on data from the European Social Survey (ESS) using R

Daniel Oesch (University of Lausanne) has developed a schema of social classes, which he discusses and applies in different publications. On his personal website, he offers Stata, SPSS, and SAS scripts to generate the class schema with data from different surveys.

Scholars working with other programs (especially R) might be interested in using Oesch’s class schema as well. In this vignette, I show how to apply Oesch’s class schema to data from the European Social Survey (ESS) using R. See:

How to apply Oesch’s class schema on data from the European Social Survey (ESS) using R

Workshop on open access

As part of the Berlin Summer School in Social Sciences, I organized a workshop entitled “Open Access: Background and Tools for Early Career Researchers in Social Sciences.” The goal of the workshop was to introduce participants to open access publishing and present useful tools to make their publications available to a wider audience.

We addressed questions such as:

  • What are the limitations of the closed publication system?
  • What is OA publishing?
  • What are the different types of OA publications?
  • What are the available licenses for OA publications?
  • What is the share of OA publications in the scientific literature and how is this changing over time?
  • What sort of funding is available for OA publishing?

The workshop was structured around a 45-minute presentation punctuated by group discussions and exercises. The whole workshop was planned for 90 minutes.

This workshop was prepared as part of the Freies Wissen Fellowship sponsored by Wikimedia Deutschland, the Stifterverband, and the VolkswagenStiftung.

The contents of the workshop are under a CC BY 4.0 license. All the material of this workshop (the outline, the slides, and the bibliography) can be cloned or downloaded from GitHub.

Feel free to share and remix the material to create your own workshop.

Vignette: Identifying East and West Germans in the European Social Survey using R

As a complement to my recent paper on “Generations and Protest in Eastern Germany,” I have prepared a vignette explaining how to identify East and West Germans in the European Social Survey (ESS), while accounting for east-west migration in Germany. The categorization follows a political socialization approach. You can find the vignette here:

Identifying East and West Germans in the European Social Survey: A demonstration in R

The vignette was written in R Markdown and the original script is available on my GitHub page.

New preprint on “Generations and Protest in Eastern Germany”

My WZB Discussion Paper entitled “Generations and Protest in Eastern Germany: Between Revolution and Apathy” is now available on SocArXiv.

The paper is available here and the replication material here.

This paper compares the protest behavior of East and West Germans across generations and over time. It concludes that East Germans, especially those who grew up during the Cold War, participate less in protest activities than West Germans from the same generation after controlling for other individual characteristics.

Dear editor, what is your preprint policy?

Going through a peer-review process usually takes months if not years. In the end, if a paper makes it to publication, access will often be limited by publishers who impose a paywall on peer-reviewed articles.

Preprints allow authors to publish early research findings and to make them available to the entire world for free. The concept is simple: 1) you upload a paper to a public repository; 2) the paper goes through a moderation process that assesses the scientific character of the work; 3) the paper is made available online. These three steps are usually completed in a few hours. With preprints, authors can rapidly communicate valuable results and engage with a broader community of scholars.

Researchers are sometimes reluctant to publish their work as preprints for two reasons. First, they fear that their papers won’t be accepted by scholarly journals because preprints would violate the so-called Ingelfinger rule, i.e. their work would have been “published” before submission. Most journals, however, will agree to review and publish papers that are available as preprints. The SHERPA/RoMEO database catalogues journal policies regarding pre- and postprints (accepted papers that incorporate reviewers’ comments). The vast majority of journals are listed as “yellow” or “green” in the database: they tolerate preprints (yellow) or pre- and postprints (green).

Second, authors worry that they might get scooped: that their work might be stolen by someone else who would get credit for it. Yet the experience of arXiv, the oldest preprint repository, which publishes papers in mathematics, physics, and other fields, shows the exact opposite. Since its creation in 1991, the repository has helped prevent scooping by offering scholars the chance to put a publicly available timestamp on their work.[1]

My first preprint

I have decided to make a paper available as a preprint in the coming days. My idea is to simultaneously submit the paper to a peer-reviewed journal and upload it to a public repository. At first, I shared the concerns of many of my colleagues regarding scooping and the possible rejection of my work by editors. However, the more I learned about preprints, the more confident I felt that this was the right way to proceed.

Here is what I did. I first selected the journal to which I would like to submit my paper. I then checked how the journal was rated in SHERPA/RoMEO. It turned out to be a “yellow” journal: so far so good. Finally, to be absolutely sure that the preprint would not be a problem I contacted the editor of the journal and asked:

Dear Professor XX,

I’m interested in publishing in journal YY. I would like to ask: what is your preprint policy? Would you review a paper that has been uploaded to a public repository like SocArXiv?

Best wishes,

Philippe Joly

And the response came a few minutes after:

Dear Mr Joly,

Yes, we would have no problem with that.

Best,

Professor XX

I now feel very comfortable uploading the preprint to a repository. I will try to store my paper on SocArXiv, which is one of the first online preprint servers for the social sciences. While economists have long experience with publicly available working papers, sociologists and political scientists have been more reluctant to join the movement. SocArXiv has been active since 2017 and is modelled on arXiv. Interestingly, the team at SocArXiv has partnered with the Center for Open Science, and their preprint service is hosted by the Open Science Framework, which I have covered in another post.

  1. Bourne, P. E., Polka, J. K., Vale, R. D., & Kiley, R. (2017). Ten simple rules to consider regarding preprint submission. PLoS Computational Biology, 13(5), e1005473.