In The Death of Evaluation, Andrew Means writes an obituary for “traditional, social science driven program evaluation.” His second post, The Role of Data, more finely articulates his argument. This post is my reaction to both, as well as my reflections on the appropriate role of evaluation and data in applied nonprofit settings.
The human services nonprofit sector is in the midst of a management revolution, a revolution built on measurement.
Words like evaluation, performance management, outcomes measurement, and performance-based contracts are now joining the ranks of quality assurance, compliance reviews, and performance audits in the minds of nonprofit leaders. With all of these concepts flying around, many nonprofit leaders don’t know the difference between them; they just want to run effective programs! So let’s say you want to get in on all of this ‘outcomes’ stuff – where do you begin? What does it all even mean?
This post is my attempt to cut through the confusion and define the main differences among these ways of measuring nonprofit work.
This post originally appeared on 2/19/14 on Ann Emery’s Evaluation Blog
This is the third of a three-part series on how internal evaluators can think about building their organization’s evaluation capacity, and is based on a talk at Eval13 by the same name. In the last post, I wrote about engaging people within the organization to support evaluation sustainability.
Nonprofits can be crazy places to work, and it’s a rare day when I don’t feel pulled in five directions at once. Among the myriad competing priorities that nonprofit leaders have to deal with, evaluation capacity is generally low on the list. And this is unlikely to change. So if we want evaluation capacity to ‘stick’, we should not try to compete with those other priorities, but instead integrate evaluation into the very systems that otherwise compete for leaders’ attention. What do I mean by that?
This post was originally published on 1/29/14 on Ann Emery’s Evaluation Blog
This is the second of a three-part series on how internal evaluators can build their organization’s evaluation capacity, and is based on a talk at Eval13 by the same name. In the last post, I wrote about starting from scratch when you first begin evaluation capacity building efforts.
Jim Collins, in his seminal work of business management Good to Great, talks about the ‘flywheel effect’. If you aren’t familiar with it, take a few minutes to read this or, better yet, buy his book. Sometimes in the early days of building evaluation capacity, it can feel like you are trying to push a building up the block, and it isn’t until a year or two in that you look back and realize you have actually gotten somewhere! But how do we create self-sustaining momentum around evaluation capacity? I break it down into two buckets: engaging people and engaging systems. This post is about engaging people; the next will be about engaging systems.
This post was originally posted on 12/18/13 on Ann Emery’s Evaluation Blog
This is the first of a three-part series on how internal evaluators can think about building their organization’s evaluation capacity and sustainability, and is based on a talk at Eval13 by the same name.
Any evaluator, internal or external, working to incorporate evaluative practices into nonprofit organizations must engage a wide variety of staff and systems in the design, implementation, and management of those practices. The success of those efforts will be determined in large part by how non-evaluators are brought into the evaluation tent, and how evaluation is integrated into administrative and service-delivery systems. But how do we even begin?