Development Evaluation at a Key Inflection Point

Author: Denise L. Baer, Ph.D.

Is development evaluation at an inflection point in 2017? Jane Reisman and Veronica Olazabal, authors of the newly released Rockefeller Foundation report Situating the Next Generation of Impact Measurement and Evaluation for Impact Investing, say it is. Reisman and Olazabal make a persuasive case that development practitioners have reached a turning point at which development approaches must evolve by joining program evaluation with social impact measurement. This proposed convergence could be catalytic, they argue, if the development community can mobilize private-sector investment to address “the estimated $2.5 trillion shortfall required to move the needle on social and environmental challenges, inequity and threats to sustainability.”

What makes this inflection point unique is how this coming revolution in methods will elevate the evaluator’s role in development projects. Capturing social value requires evaluators at the table at the program design stage, focused on impact, rather than judging from the outside, at a later date, the degree to which objectives were accomplished. This increase in evaluability could help achieve the development outcomes that have remained disappointing since program evaluation was invented some 50 years ago. The pivot places the evaluator in new roles alongside managers and investors in program design.

Can development evaluators indeed help reinvent development work? In addition to the methodological challenges that Reisman and Olazabal identify, the development evaluation community must acknowledge two further challenges: (1) continuing stovepiped disagreements about how to measure “impact,” and (2) persistent gaps in evaluation capacity both within the development community and among beneficiaries.

A decade after World Bank economists François Bourguignon and Mark Sundberg recommended stepping away from purely econometric approaches to measuring impact as the best way to answer the question of aid effectiveness, the question of “what works” remains elusive. Bourguignon and Sundberg stressed the need to focus on “the [causal] links from aid to final outcomes” – a central feature of traditional program evaluation at both the project level and the strategy or policy level. Yet a series of Obama administration reviews of evidence-based policy resources prepared for the U.S. Commission on Evidence-Based Policymaking finds that evidence building among U.S. federal agencies remains highly decentralized, data gaps persist, divergent agency goals and methods complicate systematic evidence collection, and agencies vary considerably in their evaluation capabilities. These challenges persist even though the U.S. boasts one of the oldest and largest professional evaluation associations in the world, the American Evaluation Association,1 with over 7,500 members, and a national commitment to performance measurement dating at least from the early 1990s.

More than a decade after the Organisation for Economic Co-operation and Development (OECD) issued the 2005 Paris Declaration, with its focus on “managing for results,” these same gaps also exist in international development. This is true despite recent efforts by major donor agencies to reorient and coordinate evaluation approaches around common goals. The United Nations (UN) has secured agreement on 17 Sustainable Development Goals (SDGs) – which came into force on January 1, 2016 – to end poverty, fight inequality and address climate change. Yet, as a review by the Center for Global Development (CGD) shows, not all of the SDG indicators are ready for “primetime.”

In 2015, the National Endowment for Democracy, which has supported democracy building since its founding in 1983, released the book Democracy in Decline? In it, dueling scholars offered opposing assessments of whether the world has entered a “democratic recession,” whether the apparent decline reflects measurement problems, or whether democracy has not declined at all. This is an older evaluation paradox, recalling CGD scholar David Roodman’s “Guide for the Perplexed,” in which, assessing aid-effectiveness impact studies, he asks, “How can smart people draw such contradictory conclusions from the same data?”

Of course, evaluation is more than indicators, as Robert Picciotto, the former Director-General of the World Bank’s Independent Evaluation Group, stressed when he decried “indicator fetishism” at the joint 2016 AEA and IMPCON conference, addressing “the design challenge for evaluators in a brave new world of social impact.” Global education evaluation expert Steve Klees made the same point in his 2015 critique of “measurement fetishism.” The Donor Committee for Enterprise Development (DCED) Standard for Measuring Results in Private Sector Development offers a midway point, where monitoring of indicators is carried “up to the impact level where possible.” Joining management and evaluation functions does not eliminate the need for an external evaluator, but it does mean that managers and investors must either work with evaluators using the internal independent model pioneered by the multilateral development banks (the 2012 “Big Book”) or gain evaluation expertise themselves in ways that reflect on-the-ground realities.

In growing recognition of this need, the UN declared 2015 the International Year of Evaluation. This initiative seeks to create a global enabling environment for evaluation that goes beyond donors, to incorporate the SDGs in country-level evaluation policies, and to build evaluation capacity across the globe. Other initiatives abound. In 2014, the United States Agency for International Development (USAID) established its Global Development Lab to catalyze innovation in evidence-based approaches beyond traditional monitoring and evaluation (M&E). The International Initiative for Impact Evaluation (3ie), created in 2008 with funding from the Bill & Melinda Gates Foundation, the UK Department for International Development, and the William and Flora Hewlett Foundation, has launched a series of systematic reviews and impact evaluations that go beyond “what works” to examine “why” and “under what circumstances.” 3ie joins the Campbell Collaboration, an international organization named after Donald Campbell, the American “father” of experimental design and author of “The Experimenting Society,” in which he argues that evaluation and evidence are linked to key features of democratic societies.

As a result, there are some encouraging improvements in evaluation capacity in developing countries. While a 2013 survey of 115 countries found that only 20 had a formal evaluation policy, the EvalPartners group behind the 2015 International Year of Evaluation found that the number of Voluntary Organizations for Professional Evaluation (VOPEs) worldwide had doubled in a decade, growing from about 45 in 2006 to 91 verified associations in 2016. There is now also the first long-term global vision for evaluation, the Global Evaluation Agenda (GEA) 2016-2020.

If an inflection point is a turning point, then capitalizing on the opportunity that Reisman and Olazabal identify requires more collaborative work. This includes work among evaluators, who need to integrate methods beyond the current stovepipes that promote idiosyncratic approaches, and work with managers, donors and investors, who must begin to develop projects with outcomes and evaluation in mind from the outset.

The Center for International Private Enterprise (CIPE) is one development organization that faces evaluation challenges every day. CIPE has been a leader since the 1980s in using market systems approaches to impact assessment that focus on systemic change and reform, and in pairing an internal independent model of evaluation with external evaluators – two practices now recognized as hallmarks of high-performing evaluation. Its next challenge is to increase collaboration with other development organizations through shared standards of measurement. CIPE continues to link program development with assessment in its core areas of organizational and think tank capacity-building, to innovate new approaches to entrepreneurial ecosystems in developing economies, and to develop new tools – including digital tools – to build the evaluation capacity of its partners. Evaluation capacity is not just a technical exercise. It is about problem-solving, local agency, and ownership of locally driven change that builds democratic leadership. CIPE looks forward to working with evaluation and development peers to break down the stovepipes that limit innovation.

Denise L. Baer, Ph.D., is the Senior Evaluation Officer for CIPE, where she serves as CIPE’s principal internal expert on evaluation and acts as a resource to build evaluation capacity among CIPE staff and partners. Prior to joining CIPE, Dr. Baer was a consultant providing research, governance and social science consulting for a variety of federal agencies and nonprofit organizations, including international work for USAID, the National Democratic Institute, IFES and the Institute for Women’s Policy Research. She has over 25 years of experience teaching at major universities, including Georgetown University’s McCourt School of Public Policy, Boston University, and Johns Hopkins SAIS, and she has worked for the U.S. Congress and the Congressional Research Service. In addition to her methodological expertise, she is an expert in democratization, legislative strengthening and gender programs. Dr. Baer is the author or co-author of three books and numerous scholarly articles using experimental, survey and qualitative data, and she is completing a book, Delivering Measurable Performance: Performance Evaluation Methods, Strategies and Tools for Policymaking and Public Management, under contract with SAGE. Baer holds a B.A. from the University of Illinois and earned her Ph.D. in political science from Southern Illinois University-Carbondale. She is a member of the American and International Political Science Associations, the American Evaluation Association, Washington Evaluators, the Society for International Development, and the National Press Club, and serves on the board of the National Women’s History Project.

End Notes:

1 The AEA was formed in 1986 out of a merger of the Evaluation Research Society (1979) and the Evaluation Network (1982). The Canadian Evaluation Society was organized in 1981, the Australasian Evaluation Society in 1987, and the European Evaluation Society in 1994.

Center for International Private Enterprise
1211 Connecticut Avenue, NW, Suite 700
Washington, DC 20036