Abstract

The importance of big data is a contested topic among social scientists. Proponents claim it will fuel a research revolution, but skeptics challenge it as unreliably measured and decontextualized, with limited utility for accurately answering social science research questions. We argue that social scientists need effective tools to quantify big data's measurement error and to expand the contextual information associated with it. Standard research efforts in many fields already pursue these goals through data augmentation: the systematic assessment of measurement against known quantities and the expansion of extant data with new information. Traditionally, these tasks are accomplished by trained research assistants or specialized algorithms, but such approaches may not scale to big data or appease its skeptics. We consider a third alternative that may increase the validity and value of big data: data augmentation with online crowdsourcing. We present three empirical cases that illustrate the strengths and limits of crowdsourcing for academic research, with a particular eye to how it can be applied to data augmentation tasks that will accelerate acceptance of big data among social scientists. The cases use Amazon Mechanical Turk to (1) verify automated coding of the academic discipline of dissertation committee members, (2) link online product pages to a book database, and (3) gather data on mental health resources at colleges. In light of these cases, we consider the costs and benefits of augmenting big data with crowdsourcing marketplaces and provide guidelines on best practices. We also offer a standardized reporting template to enhance reproducibility.
