Vignette: Improving the harmonization of repeated cross-national surveys with the R mice package

As part of the Freies Wissen Fellowship, I have prepared a vignette on:

Using multiple imputation to improve the harmonization of repeated cross-national surveys

The vignette introduces a technique for coping with the problem of systematically missing data in longitudinal social and political surveys.

The project combines many aspects of open science:

  • The multiple imputation procedure was implemented in R, a programming language and free software environment for statistical computing (a short sketch follows after this list).
  • The text and the code of the vignette were prepared in R Markdown, a form of literate programming. Literate programming makes research more transparent and reproducible. It is also a nice tool for the development of open educational resources (OER).
  • Finally, all the material of the vignette is on my GitHub page, where users can clone the repository, comment, and suggest modifications.
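
To give a flavour of this workflow, here is a minimal sketch of multiple imputation with mice on made-up data, where one binary item is missing for an entire survey wave and is filled in using the other waves. The variable names and the model are invented for illustration and are not those used in the vignette.

```r
# Minimal sketch (toy data): the item "petition" is systematically missing
# in one survey wave and is imputed using information from the other waves.
library(mice)

set.seed(42)
toy <- data.frame(
  wave          = rep(1:3, each = 200),
  age           = sample(18:85, 600, replace = TRUE),
  education     = sample(1:8, 600, replace = TRUE),
  demonstration = factor(rbinom(600, 1, 0.25)),
  petition      = factor(rbinom(600, 1, 0.45))
)
toy$petition[toy$wave == 2] <- NA  # item not fielded in wave 2

# Generate m = 5 completed datasets; binary factors default to logistic imputation
imp <- mice(toy, m = 5, seed = 42, printFlag = FALSE)

# Fit the substantive model on each completed dataset and pool (Rubin's rules)
fit <- with(imp, glm(petition ~ age + education + demonstration, family = binomial))
summary(pool(fit))
```

The key idea is that the relationship between the item and the covariates, estimated in the waves where the item was fielded, is borrowed to generate plausible values for the wave where it was not.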

Some reflections on the availability, reliability, and replicability of protest data

While theories and methods to study protest are becoming ever more sophisticated, protest data still suffers from many of the same limitations as it did twenty years ago. Since the 1990s, rich new datasets on extra-electoral political participation have become available. Yet, despite the great contribution of the research teams behind this empirical material, concerns remain with regard to the availability, reliability, and replicability of the data.

Scholars interested in comparing protest participation quantitatively across countries essentially depend on two types of data: international social surveys and protest event data from newspapers or websites.[1] Both are suboptimal, each with its own advantages and disadvantages.

International social surveys

Most major international social surveys, such as the European Values Study, the World Values Survey, the International Social Survey Programme, and the European Social Survey, now include questions on protest participation.

These social surveys have three great advantages: they are usually nationally representative, their questionnaires are standardized, and, since their units of analysis are individuals, they include a range of variables (e.g. age, gender, and education) that can be incorporated as covariates in statistical analyses.

But this type of survey has one downside: it does not allow us to link respondents to specific protest events. We can never say that individual X took part in demonstration Y. Typically, respondents answer questions such as “during the last 12 months, have you… taken part in a lawful demonstration?”[2] The target, timing, and location of the political action remain unknown. We can identify who is active and who is not in a given country at a certain point in time, but we cannot, for example, compare mobilization for the environmental movement with that for the feminist movement.

These surveys suffer from another problem: their inconsistent coverage across countries and time. Typically, as with the EVS and WVS, survey rounds are conducted approximately every five years with different samples of countries, and OECD countries are systematically over-represented. This makes it difficult to carry out any form of time-series cross-sectional analysis.

Finally, survey data is not fully open. Survey organizations typically reserve the right to distribute their data. Although this is often done for very legitimate reasons (updates, correction of mistakes), it can complicate the diffusion of secondary data such as protest indexes.

Protest event data

The second main type of protest data is based on secondary sources (newspapers, websites, police records) which report events such as demonstrations, strikes, or rallies. Here, the units of analysis are protest events rather than individuals. Until recently, this data was coded entirely by hand – a very tedious process. More and more automated coding tools are now becoming available, but automation has often come at the cost of reliability.

In many respects, the balance sheet of protest event data, in terms of advantages and disadvantages, mirrors that of survey data. Protest event data can be very useful for a few reasons. First, this type of data typically includes information on an event’s form, size (number of participants), location, duration, target, and other characteristics such as the level of confrontation with the police. Second, since protest event data is usually based on daily reports, it can easily be reassembled into monthly or annual measures, making it straightforward to perform longitudinal analyses. Third, with access to archives, protest event data can in principle go back in time, even as far as the beginnings of the written press (as Charles Tilly and his collaborators did).[3]
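
To illustrate how easily daily records can be reaggregated, here is a small sketch with a made-up event table; the column names are hypothetical, and real datasets follow their own codebooks.

```r
# Toy example: collapsing daily protest event records into country-month counts
events <- data.frame(
  date         = as.Date(c("2016-01-12", "2016-01-30", "2016-02-05",
                           "2016-02-05", "2016-03-21")),
  country      = c("DE", "DE", "DE", "FR", "FR"),
  participants = c(500, 1200, 300, 4000, 250)
)
events$month <- format(events$date, "%Y-%m")

# Number of events and total participants per country and month
monthly <- aggregate(
  data.frame(n_events = 1, participants = events$participants),
  by  = list(country = events$country, month = events$month),
  FUN = sum
)
monthly
```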

Nevertheless, collecting protest data from secondary sources comes with caveats. Social movement scholars are well aware that protest event data is subject to two forms of bias.[4] Selection bias is present when media sources report on certain types of protest but not others; this can reflect the ideological orientation or geographical focus of the source. In fact, what protest event data really measures is media attention, not absolute levels of protest. Description bias appears when protest events are reported incorrectly. In the age of “alternative facts,” the number of participants in a demonstration is notoriously difficult to estimate, and figures can be quite inconsistent across sources.

One final problem with protest event data is that the original sources behind the datasets are usually under copyright. If other scholars wish to revise some codings, they have to obtain authorization to retrieve the original sources, usually newspaper articles (and that is only possible when the dataset includes clear references, which is not always the case).

Machine-coded protest event data

Machine-coded protest event data has all the limitations of human-coded data but, of course, is generated much more efficiently. Reliability is potentially an issue, however, as automated coding mechanisms are prone to false negatives and false positives. And machines lack the fine-grained judgment needed to identify the characteristics of a protest event. For example, the reliability and validity of the Global Database of Events, Language, and Tone (GDELT) project, one of the most ambitious attempts at automatically cataloging societal events, has been seriously questioned.[5]

Opening, benchmarking, and triangulating

Where should we go from here? There are no perfect solutions, but a few incremental changes would certainly improve the quality and transparency of protest data. The strategy I propose follows three lines: opening, benchmarking, and triangulating.

Opening

We need to make sure that all the data, and the sources on which the data is based, are open. For social surveys, this would mean facilitating access to the data through APIs. For protest event data, this would imply that the original sources are open and accessible. Some newspapers, like the New York Times and the Guardian, have already taken an important step by implementing APIs that make it easy to retrieve their articles. We could expect public broadcasters to go a step further and place their articles (at least the older ones) under a Creative Commons license. Governments also hold protest data, for example in the form of police records, which could be made public and machine-readable.
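
As an illustration, articles can be retrieved from the Guardian’s content API in a few lines of R. Treat the snippet below as a sketch: the endpoint and parameter names follow my reading of the Guardian Open Platform documentation, and the API key is a placeholder you would obtain by registering.

```r
# Rough sketch: querying the Guardian content API from R
library(httr)
library(jsonlite)

resp <- GET("https://content.guardianapis.com/search",
            query = list(q           = "demonstration",
                         "from-date" = "2016-01-01",
                         "to-date"   = "2016-12-31",
                         "api-key"   = "YOUR_API_KEY"))  # placeholder key

found <- fromJSON(content(resp, as = "text", encoding = "UTF-8"))
head(found$response$results)
```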

Benchmarking

By benchmarking, I mean systematically comparing the same type of data from different sources (e.g. one survey dataset against another) and, ideally, developing measurement models to assess the uncertainty of the data. A good example of how this could work is Alex Hanna’s Machine-learning Protest Event Data System (MPEDS), where human-coded datasets are used to “train” machine-learning algorithms which then classify and code protest events from large databases of newspaper articles.
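
A stripped-down illustration of the benchmarking idea, with made-up monthly counts from two hypothetical sources (a serious exercise would model measurement error rather than simply correlate the series):

```r
# Toy illustration: comparing monthly protest counts from two hypothetical sources
source_a <- data.frame(month    = sprintf("2016-%02d", 1:12),
                       n_events = c(10, 8, 12, 15, 9, 11, 14, 13, 7, 10, 12, 16))
source_b <- data.frame(month    = sprintf("2016-%02d", 1:12),
                       n_events = c(7, 9, 10, 14, 8, 12, 12, 15, 6, 9, 13, 14))

merged <- merge(source_a, source_b, by = "month", suffixes = c("_a", "_b"))

# Simple agreement diagnostics; a fuller exercise would fit a measurement model
cor(merged$n_events_a, merged$n_events_b)
mean(abs(merged$n_events_a - merged$n_events_b))
```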

Triangulating

Finally, triangulating would mean combining different types of protest data (from surveys, newspapers, and web sources) and cross-validating the measures. For example, we could think of a research design where protest event data is collected first and a nationally representative survey then complements the analysis. We could use the protest event data to identify the most prominent episodes of mobilization in a country and the survey data to get a clearer profile of the protesters (age, gender, education) and to reassess the number of participants.

The three strategies of opening, benchmarking, and triangulating would certainly improve the transparency and robustness of research on extra-electoral political participation and social movements.

Notes and references

  1. A third option would be expert-surveys such as the V-Dem civil society index. Yet, these measures are usually better at capturing the conditions under which protest takes place (the opportunity structure), rather than the actual level or orientation of the mobilization. See: Michael Bernhard et al., “Making Embedded Knowledge Transparent: How the V-Dem Dataset Opens New Vistas in Civil Society Research,” Perspectives on Politics 15, no. 2 (June 2017).
  2. European Social Survey, ESS Round 8 Source Questionnaire (London: ESS ERIC Headquarters c/o City University London, 2016).
  3. Charles Tilly, Louise Tilly and Richard Tilly, The Rebellious Century, 1830-1930 (Cambridge: Harvard University Press, 1975).
  4. Jennifer Earl et al., “The Use of Newspaper Data in the Study of Collective Action,” Annual Review of Sociology 30 (2004).
  5. Wei Wang et al., “Growing pains for global monitoring of societal events,” Science 353, no. 6307 (September 2016); Alex Hanna, “Assessing GDELT with handcoded protest data,” Bad Hessian, http://badhessian.org/2014/02/assessing-gdelt-with-handcoded-protest-data/

A look at the Open Science Framework (OSF)

In recent years, a number of tools have been proposed to help researchers make their research more accessible, from the first stages of project design to publication. While this development is welcome, the abundance of software solutions can sometimes feel overwhelming.

OSF: one tool across the entire research lifecycle

This week I started experimenting with the Open Science Framework (OSF), a free and open-source platform which aims to streamline the research workflow in a single software environment. The OSF was developed by the Center for Open Science and provides solutions for collaborative, open research throughout the research cycle.

After testing the platform for a few days, I am impressed by its flexibility and reliability together with its successful integration with other tools.

One of the great strengths of the OSF is that it gives users full control over what content is public and what content is private. By default, all projects are created as private (only the administrators and collaborators can see them), but as a project evolves, users can decide to make certain elements public. This transition from private to public is facilitated by the way the OSF organizes research material. Users develop “projects,” which are divided into “components” (e.g. literature, data, analysis, communication), which can themselves be further subdivided into components, and so on. This structure is highly flexible and allows researchers to use the OSF for almost any type of project (an article, an experiment, a class). Each project or component receives a globally unique identifier (GUID), and public projects can be attributed a DOI. Users define which license applies to which content.
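
For readers who prefer to script this structure, the sketch below shows how a project with nested components could, as far as I can tell, be created from R with the rOpenSci osfr package. I have not tested this; the function names follow the package documentation as I understand it, and the token and file name are placeholders.

```r
# Untested sketch based on the rOpenSci "osfr" package; token and file are placeholders
library(osfr)

osf_auth(token = "MY_PERSONAL_ACCESS_TOKEN")

# A project with two nested components, mirroring the structure described above
project <- osf_create_project(title = "Freies Wissen Fellowship")
lit     <- osf_create_component(project, title = "Literature")
dmp     <- osf_create_component(project, title = "Data management plan")

# Upload a file into one of the components
osf_upload(lit, path = "references.bib")
```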

This “OSF 101” video gives an overview of the many functions of the platform.

Video: “OSF 101”

My first OSF project

I decided to test the OSF by creating a new page which will group projects I am developing as part of the Freies Wissen Fellowship Program.

Figure: Project in public mode

In the public part of the project (see picture above), two components are currently available: “Literature” and “Data management plan.” The Literature component is a first attempt at making my bibliography open. There you will find a first BibTeX file with a list of references on “Political protest and political opportunity structures.” The Data management plan is a copy of the plan I posted on Wikiversity. This version is written in Markdown and can be modified and commented on directly on the OSF platform.

Figure: Project in private mode

Of course, these two components are just a start. I plan to gradually add more public content to my OSF project. In the private view of my project (picture above), you can see that I have linked three other projects: the two papers I am working on and the seminar on open social sciences.

Figure: Paper project

The paper projects are still in private mode, but, as shown in the picture above, they already contain a title, an abstract, tags, a manuscript, and a replication set. This already offers a good backup solution. There is currently no limit on overall OSF storage; only individual files cannot exceed 5 GB. Files can be viewed, downloaded, deleted, and renamed. Plain text files can be edited collaboratively online. The OSF also has a multitude of well-integrated add-ons, such as Dropbox and GitHub.

Overall, I am really satisfied with the OSF and will continue to explore it as I develop my projects further. If you want to learn more about the possibilities offered by the OSF, I recommend the Johns Hopkins Electronic Lab Notebook Template.

Creating a data management plan

As part of the Freies Wissen Fellowship Program, I was recently encouraged to prepare a data management plan (DMP) for my project. This was something new to me. In my study program (but also, I suspect, in most social science programs), issues related to data collection or generation, data analysis, data storage, and data sharing had usually been discussed on an ad hoc basis with supervisors or in small research teams.

Preparing an exhaustive DMP can be a tedious process at an early stage of a project, but it has obvious benefits down the road.

What is a data management plan?

A data management plan is a “written document that describes the data you expect to acquire or generate during the course of a research project, how you will manage, describe, analyze, and store those data, and what mechanisms you will use at the end of your project to share and preserve your data.” [1]

DMPs can vary in terms of format, but they typically aim at answering questions such as:

  • What type of data will be analyzed or created?
  • What file format will be used?
  • How will the data be stored and backed up during the project?
  • How will sensitive data be handled?
  • How will the data be shared and archived after the project?
  • Who will be responsible for managing the data?

A good DMP should be a “living document” in the sense that it needs to evolve with the research project.

Why do you need a data management plan?

A DMP forces you to anticipate and address issues of data management that will arise in the course of the research project. There are a few reasons why such an approach can be beneficial, as highlighted by the University of Lausanne:

  • Taking a systematic and rigorous approach to data management saves time and money.
  • Developing a good backup strategy can avoid tragedies such as the loss of irrecoverable data.
  • More and more funders are asking for DMPs.
  • A DMP is a first logical step when planning to open your data to the public.
  • Generally, DMPs are an integral part of honest, responsible, and transparent research. [2]

A good example of the new direction public funders are taking with regards to DMPs is the EU Horizon 2020 Program. As part of the Open Research Data Pilot, the EU is now encouraging applicants to present a DMP where they detail how their research data will be findable, accessible, interoperable, and reusable (FAIR). [3]

Creating my own data management plan

For my project, I followed the DMP model proposed by Bielefeld University. This concise model summarizes the essential elements of a DMP in eight sections. You can find my plan here.

Challenges

I thought the exercise would be rather straightforward, as my dissertation relies mostly on existing data. My contribution lies chiefly in merging and harmonizing repeated cross-national surveys together with national-level indicators.

I nonetheless faced some challenges. The first was that not all data providers state precisely what their terms of use are. Very few use an explicit license. Most of them have conditions similar to those of the World Values Survey: “The data are available without restrictions, provided that 1) they are used for non-profit purposes, 2) data files are not redistributed, 3) correct citations are provided wherever results based on the data are published.”[4]

The second challenge I faced was elaborating a backup strategy. One advantage of DMPs is that they quickly highlight your weaknesses. In my case, it was clear that I had not reflected enough on how to safely save and archive the thousands of lines of code I had produced over the years. (Code is also data in a broad sense!) That is why I am exploring new avenues to protect my files and to distribute my replication material soon.

I am currently exploring solutions such as the Open Science Framework platform. OSF seems a good option, as it is free, open source, and well integrated with other platforms such as GitHub and Dropbox. I will report on my progress in upcoming posts.

References

  1. Stanford Libraries. “Data management plans.” https://library.stanford.edu/research/data-management-services/data-management-plans (accessed November 13, 2017).
  2. Université de Lausanne. “Réaliser un Data Management Plan.” https://uniris.unil.ch/researchdata/sujet/realiser-un-data-management-plan/ (accessed November 13, 2017).
  3. European Commission. Directorate-General for Research & Innovation. “H2020 Programme: Guidelines on FAIR Data Management in Horizon 2020.” http://ec.europa.eu/research/participants/data/ref/h2020/grants_manual/hi/oa_pilot/h2020-hi-oa-data-mgt_en.pdf (accessed November 13, 2017).
  4. World Values Survey. “Integrated EVS/WVS 1981-2008 Conditions of Use.” http://www.worldvaluessurvey.org/WVSContents.jsp (accessed November 13, 2017).

New blog

Several factors are pushing for the implementation of open science ideas and principles in the social sciences. First, social science tends to rely more and more on large datasets assembling hundreds of thousands, if not millions, of observations. This has raised questions about the transparency and reproducibility of data collection processes. Second, social scientists are using ever more sophisticated statistical techniques to analyze their data. As these methods require both specialized knowledge and large computing infrastructure, clear communication of empirical strategies, using open-source software, becomes increasingly important. Finally, social science has also had infamous cases of scientific fraud that could have been avoided if the authors had made their data and methodology public (see LaCour and Green (2014), “When contact changes minds,” Science).

For all these reasons, I have felt the necessity, as a political scientist, to engage more with notions of open access, open data, open source, and open methodology.

Following my nomination for the “Freies Wissen” Fellowship, I have decided to use this website as a platform for discussing open science ideas in sociology and political science. In the coming months, I will also present very concretely the challenges and opportunities I encounter when communicating my own work in an open science framework.