This is a guest post by Simon Munzert, PhD student at the University of Konstanz, who is currently on a visit at the Lab.
It’s not as if the people here at Duke’s Department of Political Science, and the WardLab members in particular, are at any risk of running out of fresh data. As somebody who works primarily on public opinion and election forecasting, I was stunned by the sheer volume of high-quality event data and its potential for so many applications. Still, during my short stay at the Lab as a visiting scholar I had the opportunity to give a short introduction to various web scraping techniques using R.
Why web scraping? The rapid growth of the World Wide Web over the past two decades has profoundly changed the way we share, collect, and publish data. Firms, public institutions, and private users provide every imaginable type of information, and new channels of communication generate vast amounts of data on human behavior. Because much of the data on the Web is a product of social interaction, it is of immediate interest to us as social scientists. In recent years, research on computer-based methods for classifying and analyzing large amounts of existing data has boomed across all disciplines, and political scientists are contributing heavily to this process.
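The workshop itself used R, but the core idea of web scraping is language-agnostic: fetch a page's HTML and pull the structured parts out of the markup. Here is a minimal sketch in Python using only the standard library; the HTML string is a made-up stand-in for a page you would normally download with `urllib.request` or a similar tool, so the example runs offline.

```python
from html.parser import HTMLParser

# Stand-in for a page you would actually fetch over HTTP;
# a literal string keeps the sketch self-contained and offline.
HTML = """
<table>
  <tr><th>country</th><th>gdp_pc</th></tr>
  <tr><td>Denmark</td><td>38000</td></tr>
  <tr><td>Mauritania</td><td>2000</td></tr>
</table>
"""

class TableScraper(HTMLParser):
    """Collect the text of every table cell, row by row."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
        elif tag in ("td", "th"):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell and data.strip():
            self._row.append(data.strip())

scraper = TableScraper()
scraper.feed(HTML)
header, *records = scraper.rows
data = [dict(zip(header, rec)) for rec in records]
print(data)  # list of dicts, one per table row
```

In practice you would add polite request headers, rate limiting, and error handling, but the parse-and-extract loop above is the heart of any scraper.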
or, How I learned to stop worrying and love event data.
Nobody in their right mind would think that the chances of civil war in Denmark and Mauritania are the same. One is a well-established democracy with a per-capita GDP of $38,000 that ranks in the top 10 on the Human Development Index (HDI); the other is a fledgling republic whose current president gained power through a military coup, with a per-capita GDP of $2,000 and a rank near the bottom of the HDI. Many existing models of civil war do a good job of separating such countries on the basis of structural factors like those in this example: regime type, wealth, ethnic diversity, military spending. The same goes for similar structural models of other expressions of political conflict, such as coups and insurgencies. What they fail to do well is predict the timing of civil wars, insurgencies, and the like in places like Mauritania that we know are at risk because of their structural characteristics. And this gets worse as you leave the conventional country-year paradigm and try to predict over shorter time periods.
The reason for this is obvious when you consider the underlying variance structure. To predict something that changes over time, such as the level of dissident-government conflict or the state of relations between political parties, you need predictors that change over time as well.
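The variance argument can be made concrete with a toy calculation. Using made-up logistic-regression coefficients (none of these numbers come from an actual model), a near-constant structural predictor like GDP per capita produces an essentially flat risk series within a country, while a predictor built from the event stream moves month to month:

```python
import math

def logit_prob(x, beta0, beta1):
    """Probability from a one-predictor logistic model (illustrative only)."""
    return 1 / (1 + math.exp(-(beta0 + beta1 * x)))

# GDP per capita barely changes within a country over a year...
gdp_pc = [2000, 2000, 2000, 2000]            # Mauritania-like series
# ...while monthly government-dissident event counts vary a lot.
monthly_events = [5, 40, 3, 60]

static_risk = [logit_prob(g, beta0=-3.0, beta1=-0.0002) for g in gdp_pc]
dynamic_risk = [logit_prob(e, beta0=-3.0, beta1=0.05) for e in monthly_events]

print(static_risk)   # flat: cannot tell risky months from calm ones
print(dynamic_risk)  # rises and falls with the event stream
```

However good the structural model is at ranking countries, its within-country predictions are a constant, so it carries no information about timing.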
Predictions for regime change in Thailand from a model based on reports of government-dissident interactions (top). White noise, with intrinsically high variance, is not helpful (middle), but neither is GDP per capita (bottom).
During a compelling class on criminal organizations taught by Guillermo Trejo at Duke University, I was struck by the complex consequences of criminal and political violence for civilian life. At the same time, I was enrolled in a course on social networks with Jim Moody, a wonderfully talented sociologist who convincingly situates network dynamics at the center of the human experience. By the end of the semester I was left with a question: how do networks moderate the effects of violence on civilian life? This question eventually led me to co-organize a national survey in Mexico in July 2012 with my colleague Sandra Ley Gutierrez, focusing on the consequences of criminal victimization. In that survey, I collected original data on 1,000 kinship networks as a way to capture social networks at the individual level.
Studies of victimization have repeatedly reported that victimization is associated with an increase in political participation, but we don’t really understand why. I find that for self-identified victims, kinship connectedness increases the probability of participating in political party meetings by 5 percentage points, all else constant (with the other covariates in my model set at their means or medians). The size of this result is consistent with other studies of political participation, which typically find effects below 10 percentage points. These predicted probabilities, of course, are contingent on the selected covariate values, so let’s also review some specific “real world” scenarios.
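A predicted-probability comparison of this kind is computed by plugging two covariate profiles into the fitted model while holding everything else at its mean. The sketch below uses entirely hypothetical coefficients, chosen only so the first difference lands near the 5-point effect described above; they are not the survey's actual estimates.

```python
import math

# Hypothetical logistic-model coefficients (illustrative, not estimated).
coefs = {"intercept": -2.0, "victim": 0.3, "kin_ties": 0.03,
         "victim_x_kin": 0.06, "age": 0.01}
means = {"age": 40}  # other covariates held at their sample mean

def prob_attend(victim, kin_ties):
    """P(attend a party meeting) for a given victim status and kin-tie count."""
    z = (coefs["intercept"]
         + coefs["victim"] * victim
         + coefs["kin_ties"] * kin_ties
         + coefs["victim_x_kin"] * victim * kin_ties
         + coefs["age"] * means["age"])
    return 1 / (1 + math.exp(-z))

# First difference for victims: well-connected vs poorly connected kin network.
effect = prob_attend(victim=1, kin_ties=4) - prob_attend(victim=1, kin_ties=1)
print(round(effect, 3))
```

Repeating the same calculation at other covariate profiles is exactly how the "real world" scenarios mentioned above would be built.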
Mining Texts to Generate Fuzzy Measures of Political Regime Type at Low Cost. Reposted from Dart Throwing Chimp, by Jay Ulfelder.
Political scientists use the term “regime type” to refer to the formal and informal structure of a country’s government. Of course, “government” entails a lot of things, so discussions of regime type focus more specifically on how rulers are selected and how their authority is organized and exercised. The chief distinction in contemporary work on regime type is between democracies and non-democracies, but there’s some really good work on variations of non-democracy as well (see here and here, for example).
Unfortunately, measuring regime type is hard, and conventional measures of regime type suffer from one or two crucial drawbacks.
This post was written by Jay Ulfelder and originally appeared on Dart-Throwing Chimp. The work it describes is part of the NSF-funded MADCOW project to automate the coding of common political science datasets.
Guess what? Text mining isn’t push-button, data-making magic, either. As Phil Schrodt likes to say, there is no Data Fairy.
I’m quickly learning this point from my first real foray into text mining. Under a grant from the National Science Foundation, I’m working with Phil Schrodt and Mike Ward to use these techniques to develop new measures of several things, including national political regime type.
I wish I could say that I’m doing the programming for this task, but I’m not there yet. For the regime-data project, the heavy lifting is being done by Shahryar Minhas, a sharp and able Ph.D. student in political science at Duke University, where Mike leads the WardLab. Shahryar and I are scheduled to present preliminary results from this project at the upcoming Annual Meeting of the American Political Science Association in Washington, DC (see here for details).
When we started work on the project, I imagined a relatively simple and mostly automatic process running from locating and ingesting the relevant texts to data extraction, model training, and, finally, data production. Now that we’re actually doing it, though, I’m finding that, as always, the devil is in the details. Here are just a few of the difficulties and decision points we’ve had to confront so far.
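To make the "data extraction" step less abstract: the usual first move in a text-mining pipeline is to tokenize each document and turn it into term counts, which a model can then weight. The toy sketch below is not the project's actual pipeline; the two-sentence "reports," the word list, and the scoring rule are all stand-ins for a real corpus and a trained classifier.

```python
import re
from collections import Counter

# Toy stand-ins for country background reports (the real corpus is
# far larger and comes from the project's sources, not this sketch).
docs = {
    "A": "free and fair elections were held; opposition parties campaigned openly",
    "B": "the ruling junta suspended the constitution and banned opposition parties",
}

def tokens(text):
    """Lowercase word tokenizer; real pipelines also stem and filter stopwords."""
    return re.findall(r"[a-z']+", text.lower())

# Document-term matrix as one Counter per document.
dtm = {name: Counter(tokens(text)) for name, text in docs.items()}

# A trained model would weight thousands of terms; this hand-picked
# word list stands in for those learned weights.
democracy_terms = {"free", "fair", "elections", "campaigned"}
score = {name: sum(counts[t] for t in democracy_terms) / sum(counts.values())
         for name, counts in dtm.items()}
print(score)  # share of tokens that are "democracy" terms, per document
```

Even at this scale the decision points show up immediately: which texts count as the corpus, how to tokenize, and which terms (or learned weights) the measure should rest on.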
Improvised explosive devices, or IEDs, were extensively used during the US wars in Iraq and Afghanistan, causing half of all US and coalition casualties despite increasingly sophisticated countermeasures. Although both of those wars have come to a close, it is unlikely that the threat of IEDs will disappear. If anything, their success implies that US and European forces are more likely to face them in similar future conflicts. As a result, there is value in understanding the process by which they are employed, and in being able to predict where and when they will be used. This is a goal we have been working on for some time now as part of a project funded by the Office of Naval Research, using SIGACT event data on IEDs and other forms of violence in Afghanistan.
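Forecasting "where and when" typically starts by aggregating raw event records into counts for a spatial unit and time window, which then serve as lagged predictors. The records below are invented stand-ins for SIGACT reports (the real data are not public), and district-week is only one plausible choice of unit:

```python
from collections import defaultdict
from datetime import date

# Hypothetical event records: (district, date, event type).
events = [
    ("Kandahar", date(2010, 6, 1), "IED"),
    ("Kandahar", date(2010, 6, 3), "IED"),
    ("Kandahar", date(2010, 6, 9), "direct_fire"),
    ("Helmand",  date(2010, 6, 2), "IED"),
]

# Aggregate IED events to district-week counts, the kind of unit
# at which where-and-when forecasts are usually made.
counts = defaultdict(int)
for district, d, etype in events:
    if etype == "IED":
        week = tuple(d.isocalendar()[:2])   # (ISO year, ISO week number)
        counts[(district, week)] += 1

for key in sorted(counts):
    print(key, counts[key])
```

Last week's counts (and counts of other violence types) in a district then become the time-varying predictors for next week's IED risk there.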
Explosive hazards, a category that includes IEDs, in our SIGACT data.