To my readers,
You will have noticed that it has been some time since there was last an update to this page. This past year has been one of trying new things and availing myself of exciting opportunities. Unfortunately for this blog, my career has taken a bit of a turn, and I am unable to continue updating the content on the Measured Nonprofit for the time being. I hope one day to pick this back up again, because I believe deeply in helping the non-profit community develop strong performance measurement and management capacity. Until then, I hope all your measurement dreams come true!
This post was originally published June 16th, 2015 on Inside Management, a blog maintained by the New York Community Trust Nonprofit Excellence Awards, where I serve as a selection committee member.
One of the primary reasons nonprofits collect data is to report back to their funders. In fact, it is probably the only reason that many nonprofits collect data at all. But this is equivalent to stashing your money under your mattress – sure, you are saving money, but you are missing out on much better ways of accomplishing your financial goals. One axiom of personal finance is to not merely work for your money, but to make your money work for you. The same goes for your data.
If you have ever asked evaluators or performance measurement professionals about logic models, you might have gotten the sense that they believe logic models to be the best thing to happen to the world since sliced bread. In fact, I wouldn’t be surprised if you came away from that conversation thinking that logic models are the reason sliced bread was invented in the first place!
I was asked recently for advice on how nonprofits can successfully respond when a foundation expects more evaluation results than it is providing funding for. My first reaction, and the subject of this post, is “We shouldn’t be there in the first place!” A later post will explore what nonprofits can do if they find themselves in that situation.
In The Death of Evaluation, Andrew Means writes an obituary for “traditional, social science driven program evaluation.” His second post, The Role of Data, more finely articulates his argument. This post is my reaction to both, as well as my reflections on the appropriate role of evaluation and data in applied nonprofit settings.
In my previous post, I summarized a panel discussion I hosted on information and technology in the human services sector. While the discussion focused primarily on challenges, we did discuss how the sector can better create, share, and use information to achieve greater impact in the communities we serve. This post discusses six solutions that we touched upon in the panel.
On Wednesday, April 2nd, 2014, I moderated a panel co-sponsored by NYU Wagner Graduate School of Public Service and the New York Consortium of Evaluators titled “Information and Technology in Human Services: Who’s at the Table, and How Do We Work Better Together?” The panelists were:
- Ivy Pool – Executive Director, HHS Connect at the NYC Mayor’s Office of Operations
- Marlowe Greenberg – Founder and Chief Executive Officer, Foothold Technology
- Brad Dudding – Chief Operating Officer, Center for Employment Opportunity
- Derek Coursen – Director of Planning & Informatics, Public Health Solutions
This post summarizes the first half of the conversation in which we framed the issue and discussed contributing factors. A later post will review the possible solutions that the panelists discussed. The full recording of the event can be found at the bottom of this post. The numbers in parentheses are time markers in the recording where you can locate the discussion on that topic.
In his blog post “Why Your Analytics are Failing You” on Harvard Business Review’s blog, Michael Schrage argues that no matter how much your company invests in analytic capability, you won’t reap the full benefits of that investment if it’s not aligned with the existing culture and decision-making processes. His intended audience is for-profit companies, but I can’t help thinking that his thesis is even MORE critical for non-profit organizations.
The human services non-profit sector is in the midst of a management revolution, a revolution built on measurement.
Words like evaluation, performance management, outcomes measurement, and performance-based contracts are now joining the ranks of quality assurance, compliance reviews, and performance audits in the minds of nonprofit leaders. With all of these concepts flying around, many nonprofit leaders don’t know the difference between them; they just want to be running effective programs! So let’s say you want to get in on all of this ‘outcomes’ stuff – where do you begin? What does this all even mean?
This post is my attempt to cut through the confusion and define the main differences among the various ways of measuring nonprofits.