Crowdsourcing helped defuse the 2007 election violence in Kenya, and it helped emergency responders target their relief efforts more efficiently in the aftermath of the 2010 Haiti earthquake. Practitioners are also staking a great deal on crowdsourcing as a means to close the development feedback loop. However, while crowdsourcing has proven effective in emergency settings, little is known about whether, and to what extent, citizens will be motivated to provide feedback under normal conditions.
In 2011, we worked with UNICEF Uganda to design and implement two randomized controlled trials (RCTs) to systematically test the motivational factors behind two distinct aspects of crowdsourcing: 1) recruiting citizen monitors (Ureporters) to provide feedback; and 2) procuring responses from these monitors. UNICEF Uganda's innovative Ureport system aggregates RapidSMS text messages from citizen monitors in Uganda and has the potential to dramatically increase the amount of information available on development projects and outcomes throughout the country. Ureport sends subscribed users a weekly text-message poll on local development projects. In the first experiment, we ran a series of radio ads in one set of Ugandan districts promoting the use of Ureport as a way to improve the community (an intrinsic reward), and another set of ads in different districts that made the same community-betterment appeal but also explained that Ureport users would be entered in a lottery to win a solar charger (an extrinsic reward). A third set of control districts received no ads. We found that while the radio ads urging participation to better the community increased Ureport usage, the districts that received the lottery ad saw significantly more participation.
Contrary to important experimental results from psychology, we found little evidence that material incentives dampened Ureporters' intrinsic motivation to make a difference in their communities. Instead, in both experiments, the lottery for solar chargers caused a significant increase in citizens' likelihood of replying to any query from UNICEF and in the total number of responses received. Direct human feedback on texts from researchers, interaction in thematic networks with other participants, and appeals to "speak out on what's happening in your community" also significantly increased participation over the control condition, in which no intervention took place. These findings shed new light on a potential solution to one of the most fundamental problems in development: the broken feedback loop between donors and their intended beneficiaries. Over the next 12 months, we hope to build on these findings with an additional study designed to further assess the feasibility of connecting donors and government officials with the intended beneficiaries of public expenditure. In particular, we hope to gain additional insight into the factors that motivate citizen monitors to give reliable and actionable feedback, and into how to collect, aggregate, curate, and transmit this feedback to decision-makers in ways that trigger corrective action.