I was asked recently for advice on how nonprofits can successfully respond when a foundation expects more evaluation results than it is providing funding for. My first reaction, and the subject of this post, is "We shouldn't be there in the first place!" A later post will explore what nonprofits can do if they find themselves in that situation.
In my previous post, I summarized a panel discussion I hosted on information and technology in the human services sector. While the discussion focused primarily on challenges, we also explored how the sector can better create, share, and use information to achieve greater impact in the communities we serve. This post covers six solutions we touched on during the panel.
In his Harvard Business Review blog post "Why Your Analytics are Failing You," Michael Schrage argues that no matter how much your company invests in analytic capability, you won't reap the full benefits of that investment unless it is aligned with your existing culture and decision-making processes. His intended audience is for-profit companies, but I can't help thinking that his thesis is even MORE critical for nonprofit organizations.
This post originally appeared on 2/19/14 on Ann Emery’s Evaluation Blog
This is the third of a three-part series on how internal evaluators can think about building their organization's evaluation capacity, and is based on a talk at Eval13 by the same name. In my last post, I wrote about engaging people within the organization to support evaluation sustainability.
Nonprofits can be crazy places to work, and it's a rare day when I don't feel pulled in five directions at once. Among the myriad competing priorities that nonprofit leaders have to deal with, evaluation capacity is generally low on the list. And this is unlikely to change. So if we want evaluation capacity to 'stick', we should not try to compete with those other priorities, but instead integrate evaluation into the very systems that otherwise compete for leaders' attention. What do I mean by that?
This post was originally published on 1/29/14 on Ann Emery’s Evaluation Blog
This is the second of a three-part series on how internal evaluators can build their organization's evaluation capacity, and is based on a talk at Eval13 by the same name. In my last post, I wrote about starting from scratch when you first begin evaluation capacity building efforts.
Jim Collins, in his seminal work of business management, Good to Great, talks about the 'flywheel effect'. If you aren't familiar with it, take a few minutes to read this, or better yet, buy his book. In the early days of building evaluation capacity, it can feel like you are trying to push a building up the block, and it isn't until a year or two in that you look back and realize you have actually gotten somewhere! But how do we create self-sustaining momentum around evaluation capacity? I break it down into two buckets: engaging people and engaging systems. This post is about engaging people; the next will be about engaging systems.
This post was originally published on 12/18/13 on Ann Emery's Evaluation Blog
This is the first of a three-part series on how internal evaluators can think about building their organization's evaluation capacity and sustainability, and is based on a talk at Eval13 by the same name.
Any evaluator, internal or external, working to incorporate evaluative practices into a nonprofit organization must engage a wide variety of staff and systems in the design, implementation, and management of those practices. The success of those efforts will be determined in large part by how non-evaluators are brought into the evaluation tent, and how evaluation is integrated into administrative and service-delivery systems. But how do we even begin?