Do subnational data improve resource allocation and service delivery?

We synthesized insights from 31 studies that measure the effects of giving location-specific information to public officials and citizens.

April 9, 2019
Bradley C. Parks, Ariel BenYishay

Cover image from the synthesis report, Can Providing Local Data on Aid and Population Needs Improve Development Decision-Making? Image by AidData.

When public officials are presented with a map that shows the amount of development expenditure received by specific towns and villages, does it change how they allocate new resources? What if they also learn which communities have higher and lower levels of need? If citizens can compare the performance of their local health clinic to that of clinics in neighboring jurisdictions, are they more likely to put pressure on their local service providers? These are all questions about whether and when the use of location-specific information influences the decisions and actions of public officials and citizens.

Some cases provide grounds for optimism. In Kenya, after a randomized controlled trial (RCT) demonstrated that a particular reading and math initiative was a cost-effective way to improve student learning outcomes, the Ministry of Education scaled up the program to schools in 47 counties, and it used an online dashboard with subnational data to target scarce resources.

However, there are also reasons to question whether subnational data will improve resource allocation efficiency and service delivery outcomes. Citizens might not know how to interpret or act upon such data, or they might be reluctant to advocate for change in the absence of strong accountability mechanisms. Likewise, public officials might have difficulty finding the “signal in the noise” or weak incentives to carefully consider new sources of evidence.

Subnational data are rapidly increasing in scale, scope, precision, periodicity, and accessibility. But are these data consequential? If so, how exactly do they improve resource allocation and service delivery outcomes?

To answer these questions, we reviewed 31 RCTs that took place in a wide variety of countries and sectors and tested the efficacy of different types of informational interventions. The common denominator across all of the field experiments is that they measured the resource allocation and service delivery impacts of granting citizens and public servants access to location-specific data. We summarize the key insights and lessons from this body of work in a new synthesis report entitled Can Providing Local Data on Aid and Population Needs Improve Development Decision-Making? A Review of Recent Experimental Evidence.

In a nutshell, here’s what we found: the availability of location-specific data can improve resource allocation and service delivery outcomes when those data are relevant, understandable, actionable, and incentive-compatible. Such data are also more likely to be put to effective use when citizens have access to accountability mechanisms.

The importance of helping decision-makers make “aid-to-need” comparisons

More specifically, we found that information about the distribution of development expenditure across localities can help public officials make more efficient resource allocation decisions, but they also need data about the socio-economic needs of populations in those same localities. With access to data on both resources and needs, public officials can more accurately estimate the “aid-to-need” ratio in specific communities, which is crucial for identifying communities that are overfunded or underfunded relative to their level of need.
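To make the idea concrete, here is a minimal sketch of the kind of comparison an aid-to-need ratio enables. The community names, figures, and the 1.5x/0.5x flagging thresholds are illustrative assumptions, not values drawn from any of the studies we reviewed.

```python
# Illustrative sketch: flag communities whose aid-to-need ratio is far
# above or below the average. All names and figures are hypothetical.

communities = [
    # (name, development expenditure received, population in need)
    ("Community A", 120_000, 4_000),
    ("Community B", 30_000, 6_000),
    ("Community C", 75_000, 5_000),
]

# Aid-to-need ratio: expenditure per person in need.
ratios = {name: aid / need for name, aid, need in communities}
average = sum(ratios.values()) / len(ratios)

for name, ratio in ratios.items():
    if ratio > 1.5 * average:
        status = "potentially overfunded relative to need"
    elif ratio < 0.5 * average:
        status = "potentially underfunded relative to need"
    else:
        status = "roughly in line with need"
    print(f"{name}: {ratio:.1f} per person in need ({status})")
```

The point of the sketch is simply that neither column alone is enough: without the needs figures, the expenditure map cannot tell an official which communities are being shortchanged.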

In Malawi, a group of researchers tested the effect of providing local politicians with maps of schools that had already received international aid. They found that politicians who received the maps were 25% less likely than those who did not to provide additional aid—in the form of iron roofing sheets, teacher supply kits, or solar lamps—to the same schools, which suggests that public officials do respond to new data about the geographic distribution of aid. However, in the absence of information about the needs of these schools, it is hard to know whether local officials efficiently allocated the scarce resources at their disposal.

One way to address this challenge is through interventions that reduce the marginal cost of acquiring reliable data on local population needs. In Ethiopia, a group of researchers studied whether receiving a government circular with official administrative data on service delivery outcomes (like school enrollments and antenatal care) from the Ministry of Public Service and Human Development would improve the accuracy of district officials’ knowledge about conditions within their own localities.  At baseline (i.e. prior to the provision of the government circular), the researchers found that district officials had highly inaccurate assumptions about the populations they served—for example, nearly half of the study participants reported a district population size that diverged from the latest census figure by more than 50%. However, they also found substantially lower error rates among those district officials who received the government circular.
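As a rough illustration of the error rate described above, the snippet below computes the share of officials whose reported population figure diverges from the census by more than 50 percent. The reported and census figures are invented for illustration; only the 50% threshold comes from the study.

```python
# Hypothetical (reported population, census population) pairs -- illustrative only.
reports = [
    (180_000, 95_000),
    (60_000, 58_000),
    (250_000, 140_000),
    (40_000, 42_000),
]

def relative_divergence(reported, census):
    """Absolute divergence of the reported figure from the census, as a share of the census."""
    return abs(reported - census) / census

share_off = sum(relative_divergence(r, c) > 0.5 for r, c in reports) / len(reports)
print(f"Share of officials off by more than 50%: {share_off:.0%}")
```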

Robust accountability mechanisms make informational interventions more effective

Another key finding from our evidence review is that location-specific data are more likely to “move the needle” when accountability institutions are in place that allow the intended beneficiaries of a public sector organization’s services or activities to provide feedback about what they are receiving or not receiving. Politicians, bureaucrats, and citizens have particularly strong incentives to take action when data on resource allocation and service delivery are tied to political jurisdictions.

Elections provide one accountability mechanism. In the Philippines, a pair of studies (here and here) chronicles how the provision of information about local development spending plans and performance increased the electoral salience of these issues during the 2013 and 2016 mayoral elections. In 2013, voters who were provided with candidates' development spending plans were more likely to report that the issue was an important factor in their vote. In 2016, voters who were informed of a candidate's 2013 spending plan were more likely to reward incumbents who kept their spending promises. Similarly, in Sierra Leone, exposure to political debates increased voter knowledge about candidates' positions. MPs in districts exposed to the debates also behaved quite differently: they held twice as many meetings with their constituents and allocated a substantially larger share of their discretionary public funding to local development projects.

But non-electoral accountability mechanisms also matter. One study from Uganda compared the effects of a community participation program (consisting of meetings between health facility staff and citizens within a 5 km radius) with the effects of that same program combined with easily accessible “report card” data on the performance of the health facility (including quantitative data that benchmarked the facility vis-à-vis other health facilities and a national standard of performance). Across a battery of outcomes including infant mortality, under-5 child mortality, and healthcare facility utilization, the second treatment was more effective than the first.

Without inter-jurisdictional benchmarking, the community participation program did not do much to change the behavior of frontline service delivery officials, increase local standards of care, or improve health outcomes. However, with inter-jurisdictional benchmarking, the community participation program led to the development of local action plans by citizens and service providers, which clarified what citizens could reasonably expect of service providers and provided a mechanism for bottom-up monitoring of the specific commitments made by specific health facilities.  

These are just a few of the RCTs that we reviewed. You can read our entire synthesis report here. The studies that are summarized in this report represent important contributions to our understanding of how aid can be made more accountable, efficient, and effective.

Brad Parks is the Executive Director of AidData at William & Mary. He leads a team of over 30 program evaluators, policy analysts, and media and communication professionals who work with governments and international organizations to improve the ways in which overseas investments are targeted, monitored, and evaluated. He is also a Research Professor at William & Mary’s Global Research Institute.


Ariel BenYishay is AidData’s Chief Economist and Director of Research and Evaluation, and Associate Professor of Economics at the College of William & Mary.