How Do You Measure a Process Like Localization?
November 25, 2024
When USAID issued its fiscal year 2023 progress report on localization, it reported that the percentage of direct local funding had declined from the previous year, from 10.2% to 9.6%. This drew a lot of commentary, with most people giving USAID credit for its transparency in reporting the data while lamenting the lack of progress. I believe the problem is not so much a lack of progress in localization as a poor choice of indicator for measuring it.
USAID uses direct funding to local organizations as its primary indicator of progress on localization. The indicator excludes indirect funding, in which USAID funds international or American organizations that in turn award a significant share of those funds to local organizations. This is a critical mistake. It matters more that an increasing share of USAID funds ends up being managed by local organizations than that local organizations receive those funds directly from USAID. The metric also disincentivizes USAID staff from using the capacity of international organizations to fund and support local organizations. Most international organizations like my own are ready to support the localization agenda but fear that this metric will result in our being sidelined. As is often the case, cutting out the “middleman” does not automatically produce efficiencies, because someone must then take over all the tasks the middleman was performing.
NPI EXPAND was a project heavily focused on localization, and one of our primary indicators was the percentage of funds awarded to and spent by local organizations. Even though Palladium had to cover management costs, support staff, indirect rates, thought leadership activities, and capacity-strengthening activities, we still ensured that 54% of project funds were spent by local organizations. Unfortunately, none of that counts toward USAID’s most prominent progress indicator because the local organizations did not receive the funds directly from USAID. And NPI EXPAND is not unique in this regard: USAID has required international organizations to make increasing use of local organizations in project implementation for many years. If USAID measured direct or indirect funding of local organizations, I am confident the trend line would be much more positive.
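To make the contrast concrete, here is a minimal sketch of how the two ways of counting diverge. The portfolio amounts, subaward shares, and the `localization_share` helper are all hypothetical, chosen only to illustrate that the same set of awards can look like 10% localization under a direct-only indicator and far higher once pass-through funding to local subawardees is counted.

```python
# Hypothetical sketch: how the choice of indicator changes the localization picture.
# Award amounts and subaward shares below are illustrative, not actual USAID figures.

def localization_share(awards, include_indirect=False):
    """Share of total funding managed by local organizations.

    Each award is a dict with:
      amount         -- total award value
      direct_local   -- True if the donor awarded it directly to a local organization
      indirect_local -- portion an international prime subawarded to local organizations
    """
    total = sum(a["amount"] for a in awards)
    local = sum(a["amount"] for a in awards if a["direct_local"])
    if include_indirect:
        local += sum(a.get("indirect_local", 0) for a in awards if not a["direct_local"])
    return local / total

# Illustrative portfolio: one direct local award plus two awards to international
# primes that pass a share of funds through to local subawardees.
portfolio = [
    {"amount": 10_000_000, "direct_local": True},
    {"amount": 50_000_000, "direct_local": False, "indirect_local": 27_000_000},
    {"amount": 40_000_000, "direct_local": False, "indirect_local": 8_000_000},
]

print(f"Direct-only indicator: {localization_share(portfolio):.1%}")
print(f"Direct plus indirect:  {localization_share(portfolio, include_indirect=True):.1%}")
```

Under these illustrative numbers, the direct-only indicator reports 10%, while counting subawards to local organizations raises the figure to 45% for the very same portfolio.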
Of course, this isn’t the only way USAID tracks localization progress. Another important measure of localization is the number of locally led programs, and USAID has adopted a “menu” approach to count them. USAID looks for evidence of different good practices, such as cocreation, listening tours, enabling local partnerships, or use of household awards. If an activity follows a given number of these recommended practices, it is considered locally led.
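In effect, this works like a checklist with a cutoff. The sketch below is a simplification: the practice list mirrors the examples above, but the threshold of three is an assumption for illustration, not USAID’s official scoring rule.

```python
# Simplified sketch of a menu-style classification. The practice list and the
# threshold of three are placeholders, not USAID's official criteria.

RECOMMENDED_PRACTICES = {
    "cocreation",
    "listening tours",
    "enabling local partnerships",
    "household awards",
}

def is_locally_led(practices_followed, threshold=3):
    """Classify an activity as locally led if it follows at least
    `threshold` practices from the recommended menu."""
    followed = set(practices_followed) & RECOMMENDED_PRACTICES
    return len(followed) >= threshold

print(is_locally_led(["cocreation", "listening tours", "enabling local partnerships"]))  # True
print(is_locally_led(["cocreation", "household awards"]))                                # False
```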
All the recommended practices are indeed useful for getting input and ensuring that projects address the needs of communities. But it is hard to call programs truly locally led if the parameters of the activities are predetermined by USAID. This is an area where USAID and most of the development industry still need to improve. If USAID has already defined the problem or identified the range of potential solutions, then these practices are only ways of getting input from local groups: a step in the right direction, but one that doesn’t go far enough. To be fair, the same is true of the Global Fund, the UN organizations, the World Bank, PEPFAR, the President’s Malaria Initiative, and most other major funding sources, which determine the problem to be addressed before getting local input.
Funding is tied to addressing a set of well-defined priorities, and often comes with a set of preferred interventions. Allowing local entities to provide input on which interventions get selected from the approved menu, and how they might be adapted, is helpful, but it certainly can’t be called locally led. If USAID and other foreign aid institutions want to be truly localized, they should invite local entities to propose their own interventions, without restrictions or earmarks, for problems that those entities have identified and prioritized. Local leadership should include both design and implementation.
There are other dimensions of localization that are missing from USAID’s tracking, such as the extent to which USAID programs create opportunities for local governments to fund civil society organizations or establish public-private partnerships to advance development. Another is obvious to veteran development workers like me. When I began my career in the late 1980s and early 1990s, it was rare to see senior staff in development organizations who were country nationals. This was true of USAID missions as well as most of their implementing partners.
When I worked in Mali in the late 1980s, not only were the Country Representative and the Deputy Representative of my NGO Americans, but the staff managing projects in rural towns were also expatriates. That would be unthinkable now, but at the time it was the norm. Over the past 40 years there has been a steady shift toward localization of development organizations at all levels through the hiring and promotion of development professionals from developing countries. Although my current organization, Palladium, is a global company with corporate offices in Washington, London, and Brisbane, 91% of its field staff are country nationals. There has been a similar evolution within the World Bank, the UN organizations, and other USAID contractors. The days of sending expatriate staff to lead projects in foreign countries are largely over. Even if international organizations still control most development resources, decisions about how those resources are used are increasingly made by country nationals, as they should be.
There is still room to improve how localization is achieved and how it is measured. We should at least acknowledge the progress that has been made, including the broad consensus that localization is a worthy goal for improving aid equity, effectiveness, and sustainability.