A Rejoinder to Rubbery Numbers on Chinese Aid

May 1, 2013
AidData

In this post, we respond to Deborah Brautigam’s review of our Chinese development finance data collection project. Our initiative is premised on the idea that we should open our data and methods to criticism in order to improve them. To this end, we have published a methodology document that describes our procedures in full. In combination with the publicly released dataset, the methodology allows any user to identify the source of any error and offer a specific, actionable correction directly to AidData staff via the public portal. We thank Brautigam for being one of the first to provide feedback, and we hope that in the coming days we will receive suggestions about specific records in the database that need to be re-examined. We will certainly integrate corrected records and credit the users who provide such valuable feedback. Project #28153 is a case in point: Brautigam identified an error, and we corrected the record.

At the same time, several of Brautigam’s critiques of our database merit closer scrutiny. We offer some initial responses to her comments below that we hope will become part of a broader multi-stakeholder dialogue.

First, Brautigam has several concerns about our research methods. She states that “reliance on media reports for data collection on Chinese finance is a very dicey methodology.” But in the same blog post she marshals evidence from media reports to challenge individual records in our database, which suggests that media reports, though imperfect, are often the best means available for tracking Chinese development finance. We are not wedded to the exclusive use of media reports, and those who carefully read our methodology can see for themselves how we vet and refine the data by triangulating multiple types and sources of information.

Second, Brautigam asserts that “[t]he main problem is that the teams that have been collecting the data and their supervisors simply don't know enough about China in Africa, or how to check media reports, track down the realities of a project, and dig into the story to find out what really happened. You can start with media reports, but it is highly problematic to stop there.” We agree that one needs to triangulate many sources of information for each project over the course of its lifecycle. That is in fact what our methodology requires. It augments data drawn from journalistic accounts with case study scholarship, information and insights from experts (including the project-specific information made available through Brautigam’s blog), project-level data collected by other researchers, and official government data. The “digging” that Brautigam calls for is in fact a central part of our approach.

Third, Brautigam seems to think that it is a bad idea to publish this dataset and expose our methods and sources to public scrutiny. She says “[d]ata-driven researchers won't wait around to have someone clean the data. They'll start using it and publishing with it and setting these numbers into stone.” People may abuse the data, but we disagree that this is a good reason not to publish it. We have been systematically collecting project-level development finance data for the better part of the last ten years, and we find errors in official data all the time. You cannot fix errors until you know that they exist, and we believe that more sunlight and scrutiny is the best way to spot and fix them. Brautigam’s arguments seem to suggest that only a small group of people whom she considers experts should be allowed to collect and analyze data on the nature, distribution, and effects of Chinese development finance. We disagree with this “gatekeeper” approach to social science and expect that it will slow progress in this narrow sub-field of the academy.

Fourth, media-based data collection offers some advantages not mentioned in Brautigam’s blog post. During the course of our work, we learned that media-based data collection can help uncover projects that never enter official reporting systems. In Malawi, for example, the Ministry of Finance maintains a database of all incoming aid flows. This database lists only two Chinese-funded projects, worth $133 million, from 2000 to 2011. AidData’s media-based methodology captured these two projects, but it also uncovered an additional 14 Chinese projects during the same period. These 14 projects represent an additional $164 million, bringing the total to roughly $297 million, more than double the officially recorded figure. One cannot help but wonder whether Malawi’s Minister of Finance would consider these 14 additional projects, uncovered through media-based methods, “rubbish.”

Fifth, it appears that we have a difference of opinion with Brautigam regarding how knowledge is generated, refined, and improved. Any student in an introductory research methods course learns that exposing one’s methods and data to public scrutiny is a defining feature of the scientific process. If your methods, procedures, and inferences are not transparent and replicable, then you are not making knowledge claims based on scientific principles. Consider the process leading up to the publication of our CGD working paper. Brautigam reviewed an early version of our paper and provided constructive suggestions, which we in turn used to improve the methodology and the dataset. One of her most valuable suggestions was to systematically collect data on the status of each project in the database based on available sources. Users of the database should be able to determine whether a project has been “promised” but no formal legal commitment has been made; whether a commitment has been made but the project has not yet been implemented or completed; or whether a previously pledged or committed project has been cancelled or suspended. Users can now do this because we exposed our data to scrutiny and made improvements to our methodology based on external feedback. Going forward, if we have erred in any individual case, or if the media report on which a coding decision was based is in error, we have designed our online database platform at china.aiddata.org so that it is easy for users to bring errors to our attention and request corrections. We acknowledge that our methodology is imperfect and that our database of nearly 1,700 projects may contain some errors, but we also reject the idea that keeping the database from the light of day will somehow improve its accuracy.

We also disagree with Brautigam’s comparison of our methodology to the Wagner School project. Previous media-based data collection efforts have run into major challenges: heavy reliance on individual sources (particularly English-language news sources), insufficient attention to duplicate projects, over-counting as a result of not following projects from announcement to implementation, and opaque methods and sources. We took great care to learn from these past mistakes. While initial skepticism of a new media-based data collection initiative is understandable, our project should be judged on its own merits: if and when specific errors are identified, critics ought to make those specific mistakes public. This is how most other big data collection projects in the social sciences have improved and refined their methods and data over time. We encourage all readers to take a closer look at our published methodology, dataset, and working paper before judging their quality in relation to other studies.

Sixth, regarding the “megadeals” that Brautigam dismisses as invalid: we too recognize that many of these deals haven’t actually happened, which we have indicated in the “Status” column of the very table that she cites. This is why we have placed so much emphasis on the “Project Status” variable, and why we warn users against assuming that all records in our database are created equal. If a particular user only wants to analyze projects that have been implemented or completed, we have included a set of variables that allow the dataset to be filtered in exactly this way.
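As a rough illustration of this kind of filtering, a user working with the released dataset in Python could do something like the following. The file name, column name, and status labels here are assumptions made for the sketch, not the exact field names in our release:

```python
import pandas as pd

# Load the publicly released project-level dataset (file name is illustrative).
projects = pd.read_csv("china_aiddata_projects.csv")

# Keep only records whose status indicates the project moved beyond a pledge.
# The "status" column name and its labels are assumed for this example.
implemented = projects[projects["status"].isin(["Implementation", "Completion"])]

print(f"Retained {len(implemented)} of {len(projects)} projects")
```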

Seventh, Brautigam takes issue with our findings on the top recipients of Chinese official financing, but it seems possible that she is referring to an earlier, unpublished version of our work. Both Angola and Ethiopia are on our current list of top recipients. The working paper lists the top recipients of official finance commitments in our database, in order, as Ghana, Nigeria, Sudan, Ethiopia, Mauritania, and Angola.

Eighth, Brautigam suggests that our team may not be suited to perform this type of research, stating, “[t]his is not research that can be done by just anyone, and especially not by only looking at media reports.” We agree that experts who dedicate their careers to studying China and Africa should help generate, vet, publish, and analyze data. This could be a great example of what Gary King of Harvard calls for: a collaboration between social scientists and area experts. Our main priority moving forward is to solicit as much specific input as possible from area experts, practitioners, and journalists in order to add sources and correct errors resulting from imperfect media reports. We take exception to the notion that media-based research and expert qualitative fieldwork are incompatible methods for studying development finance. Ultimately, it should not matter who is performing the research so long as they can produce useful results. And, in our judgment, the easiest and most effective way to do this is by consolidating data in one place using a common set of rigorous and transparent methods. People who study economic growth rely on the Penn World Table. People who study inter-state war rely on the Correlates of War (COW) dataset. People who study democracy rely on the Polity database. Yet researchers who study the nature, distribution, and impact of Chinese development finance activities have not previously been able to benefit from an analogous database. This is a major impediment not only for researchers seeking to make descriptive or causal inferences, but also for policymakers who want to know where resources are flowing and for what purpose. While we recognize that our database is not yet on par with these authoritative data sources, we hope that with specific and constructive feedback from experts such as Brautigam, we will be able to create a public good that enables researchers and policymakers to advance our collective understanding of Chinese development finance.

Finally, Brautigam claims that AidData’s attempt to crowdsource Chinese development finance information will not work. This assertion is a testable hypothesis: with time, others will be able to judge whether we succeed in generating higher-quality and higher-resolution data. It is worth noting that within 24 hours of the china.aiddata.org site being launched, users began to provide new information about specific projects in the database. The Guardian produced field reports on four of the projects in our database (here, here, here, and here).

Going forward, we invite researchers, policymakers, development practitioners, journalists, and civil society organizations to join us in an effort to expand and enrich the online database at china.aiddata.org. We also hope that users will engage us and help us refine the methodology, as it may very well have applications that extend far beyond Chinese development finance to Africa. We think there are some potentially big payoffs here for those who study the activities of DAC and non-DAC donors. We understand that our methodological approach will have its critics, but we hope that people will work with us to improve it. We urge readers to review the actual methodology, rather than caricatures of it.
