GiveWell fiasco: what’s different?

Well, this is not the time to start an organization related to “charity ratings” in any way. GiveWell, mentioned in an earlier post, has admitted to charges of astroturfing. (Astroturfing is pretending that there is grassroots support for your business or cause. Basically, one of the founders asked questions under a fake name and then answered them under a different name.) What a mess, and what a shame.

Perhaps this episode highlights the key distinction between GiveWell’s approach and ours: rather than assume we know better than the nonprofits themselves how to solve the problems they work on every day, we believe that unlocking and sharing their knowledge, and revealing it to donors in a coherent manner, will reveal the better ways to solve those problems. What does that actually mean? It means that while we think we know better than most how to present and compare information, we’re not likely to try to substitute our judgment for someone else’s, which makes a GiveWell-style misstep much less likely for us simply in terms of underlying incentives. In other words, when you think you’re smarter than other people, you start wanting to make decisions for them. Our approach is founded on one of the oldest American notions of freedom: the freedom of everyone to make up her own mind. We have also specifically decided to preserve the strategic element in our comparisons by discussing and comparing organizations only within each strategy within a sector.

By presenting the information we receive from organizations in a standard format, we hope to work from the other side of the river from WhatWorks.org, building a bridge that should eventually join the two projects. While they are creating standards and forms, an idea we support, we recognize that harnessing the wisdom of the organizations, by the organizations, has immediate potential for impact. In other words, isn’t a nonprofit more likely to adopt a strategy because it has worked elsewhere than because a standard or form describes or implements that same strategy? It shouldn’t be that way, but it is.

Operational excellence is evolutionary: it is refined by recurring, methodical review of what went well, what went wrong, and what should change as a result. Our goal is to capture information from organizations at different stages of those reviews, collate it into a coherent body of work, and share it openly, allowing it to be reviewed and commented on by the community. Why should there be a community approach rather than a top-downish (and we don’t mean that in a pejorative way) approach like WhatWorks.org’s? The right answer is that the system needs both. We’ve always assumed that after collecting the “practices” in a sector, we would draw out the “best practices” identified by the community, create a workable plan primarily by ourselves, and then expose that plan to the community for further comment.

There is a lot of discussion about evaluations and their cost. That problem is one triggered by the top-down approach. We’re saying something similar and something different. The first point is that we absolutely agree that evaluations have to take place, or we all risk wasting an awful lot of time. The second is that it makes sense, from an efficiency perspective, to see what organizations are already doing themselves to evaluate their efficiency and effectiveness. That much the GiveWell folks have right: the proxies often touted are primarily useful for identifying waste, not for identifying true program impact. If that weren’t true, why would the Robin Hood Foundation expend the time and money to create an evaluation system? Wouldn’t it just use the financial data from the umbrella sites? The fact is that those smart people with money to spend, like the people running similar programs at other large foundations, want to spend it wisely. Several of them have decided to evaluate impact in some way. We want to support that by making it easier, rather than harder, for nonprofits to engage donors in those evaluation discussions.