Originally published on the CEP blog. The Center for Effective Philanthropy (CEP) is a nonprofit organization focused on the development of data to aid philanthropic funders in defining, assessing, and improving their effectiveness – and, as a result, their intended impact.
How can funders better inform their own assessment of progress and earn their – and others’ – confidence in any impact achieved?
That’s the question that jumped out at us here at the Center for High Impact Philanthropy after reviewing CEP’s recent report, How Far Have We Come? Foundation CEOs on Progress and Impact. In it, CEP shares findings from its survey of 211 foundation CEOs. Respondents acknowledged that their assessments of progress might not be as well informed as they could be, and that they saw opportunities to change their practices.
Our suggestion: Broaden your definition of evidence.
For some funders, evidence of impact means the results of a rigorously designed evaluation, ideally a randomized controlled trial. For other donors, their own observation – I know it when I see it – is all the evidence they need. Others rely solely on grantees’ reports. For all of these funders, broadening their definition of evidence will not only better inform their assessments of their current work, but also justify confidence in their strategies and results.
What does a broader definition of evidence look like in practice? For us, ‘evidence-based’ means accessing the best available information from three different sources, or circles of evidence:
Research or scientific evidence, such as the results of randomized controlled trials (RCTs) and statistical models designed to prove cause and effect.
Field experience such as the practical knowledge of beneficiaries and program providers. These insights help explain how programs work in real-world conditions, when human behavior and implementation challenges come into play.
Informed opinion such as the views of policymakers or other stakeholders whose perspectives provide context for evaluation results and field experience.
Each source has its strengths and limitations, but the more all three point to the same conclusions, the stronger the assessment and the more justified a funder’s confidence in that assessment.
For example, suppose a foundation wants to help families avoid home foreclosures in tough economic times. If RCT results, government data, interviews with nonprofit housing counselors, and conversations with beneficiary families all point to progress from a particular program, a funder can and should feel confident in the impact of that approach. The evidence from research, the field, and informed opinion all point to success.
Conversely, it’s hard to feel confident about a grant without seeking input from all three circles. Case in point: Only 60% of the CEOs surveyed in CEP’s study said they collected input from beneficiaries… but those who did were more likely to believe that their strategies were effective and that they contributed to more progress. They had greater confidence, with good reason.
Depending on how many of these sources you already tap, putting these principles into practice can be as simple as asking your beneficiaries a question, or as complex as funding advanced data collection infrastructure and independent evaluations of all your grantees.
The good news is that we live in an information-rich age; it has never been easier or cheaper to access our society’s collective knowledge. Organizations like the Coalition for Evidence-Based Policy are piloting very low-cost RCTs that leverage existing administrative data; Frontline SMS is showing how mobile technology can bring real-time beneficiary input from far-away places; and, as CEP’s own report notes, more and more players are collecting and synthesizing information for all funders to use.
If you’re seeking more confidence in your impact, arm yourself with a broader definition of evidence: seek and ye shall find.