For most advancement and development programs, defining success is rather straightforward. At the end of the year, we might point to the total amount of gift income received. Or the number of donors who gave. Or the number of people who attended events. Or the number of new planned giving commitments that were formalized. The list could go on and on.
These standard evaluation measures (and most other metrics a program might choose) are alike in at least two ways: they are quantitative, and they capture the transactions of our work. Numbers are easy to count and activity is easy to report, but these choices do not provide a full or accurate assessment of the most important outcomes we should be seeking to achieve.
What if, instead of simply focusing on the raw number of dollars, or donors, or attendees, or other standard measures, we also regularly assessed and tracked the outcomes that really matter to help gauge our success?
For instance, outcomes such as:
- Are recent donors becoming more or less educated about the mission they are supporting?
- Are recent donors more or less likely to give again in the future? And, if they are likely to give again, would they consider giving more or less?
- Are governing Board members reporting that their involvement with and support of your mission are growing more or less meaningful to them?
- Are event attendees feeling more or less inspired about the impact of your mission after attending the event?
- Are event attendees more or less likely to return to another event and, therefore, become more engaged?
There are, of course, other helpful evaluation questions we could regularly ask. The common theme, though, is that these questions would assess how educated, how engaged, and how inspired people are becoming as a result of our work and efforts.
The difficulty of generating this data on an annual basis would not be high. Even for a modestly resourced advancement team, a well-crafted online self-report survey with only a few key questions could be randomly distributed to various constituents at little cost and with nominal effort. Once the wording of the survey questions was agreed upon, it would remain consistent year after year so that trends could be compared and tracked.
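To make the mechanics concrete, here is a minimal sketch of the two steps described above: randomly selecting constituents to receive the survey, and averaging self-report scores by year to track a trend. All names and data here are hypothetical illustrations, not real constituent records or survey results.

```python
import random
from statistics import mean

# Hypothetical constituent list; in practice this would come from an
# advancement CRM or mailing-list export.
constituents = [f"donor_{i}" for i in range(500)]

def draw_sample(population, size, seed=None):
    """Randomly select constituents to receive the survey."""
    rng = random.Random(seed)
    return rng.sample(population, size)

def track_trend(results_by_year, question):
    """Average the 1-5 self-report scores for one question, year over year."""
    return {year: round(mean(r[question] for r in responses), 2)
            for year, responses in sorted(results_by_year.items())}

# Illustrative (made-up) responses on a 1-5 scale to the question
# "How inspired do you feel about the mission after this year's events?"
results = {
    2021: [{"inspired": s} for s in (3, 4, 3, 5, 4)],
    2022: [{"inspired": s} for s in (4, 4, 5, 5, 4)],
}

sample = draw_sample(constituents, size=50, seed=1)
trend = track_trend(results, "inspired")
print(len(sample), trend)  # 50 recipients; average "inspired" score per year
```

Because the question wording stays fixed, the per-year averages in `trend` are directly comparable, which is the whole point of keeping the survey consistent from one year to the next.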
Our standard approaches to advancement and development evaluation are not necessarily wrong-headed. They are just incomplete. If we want to transform how we work and deepen the impacts we are making, identifying and assessing the outcomes that truly matter is a good first step.
This article was originally published in our September 2022 Gonser Gerber E-Bulletin. To learn more about our Bulletin, or to subscribe to our mailing list, visit our website at https://www.gonsergerber.com/services/institute/bulletin/.