One senses some pent-up frustration early in Catholic University of America (CUA) history professor Jerry Z. Muller’s new book, The Tyranny of Metrics. As chair of CUA’s history department, he had to satisfy a regional accreditor’s demands for more data as part of the university’s crucial re-accrediting process.
“Soon, I found my time increasingly devoted to answering queries for more and more statistical information about the activities of the department,” Muller writes, “which diverted my time from tasks such as research, teaching, and mentoring faculty.” There were new scales for evaluation, adding no useful information to the old measuring instruments. Then there were data specialists and reports with spreadsheets, pursuing the most-recent managerial “best practices.” “[D]epartment chairs found themselves in a sort of data arms race,” he laments.
Unless you’re a systems analyst or an accountant, perhaps you’ve felt a similar frustration in your own field or workplace.
Muller was able to research and write a very engaging and accessible book about it. The Tyranny of Metrics lets those who’ve felt the same frustration know they’re not alone. He describes “metric fixation” in several contexts, with short exemplary case studies—including from colleges and universities, schools, medicine, policing, the military, business and finance, and philanthropy and foreign aid. Depending on what you do, maybe you could add a case-study area or two.
In most fields, there are things that can be measured and are worth measuring, Muller readily, and repeatedly, concedes in the book.
But what can be measured is not always worth measuring; what gets measured may have no relationship to what we really want to know. The costs of measuring may be greater than the benefits. The things that get measured may draw effort away from the things we really care about. And measurement may provide us with distorted knowledge—knowledge that seems solid but is actually deceptive.
From the study of all his areas, Muller lists specific recurring flaws of metric dysfunction:
1. measuring only that which is the most easily measurable;
2. “measuring the simple when the desired outcome is complex;”
3. “[m]easuring inputs rather than outcomes;”
4. “[d]egrading information quality through standardization” (“nothing does more to create the appearance of certain knowledge than expressing it in numerical form”);
5. gaming through “creaming,” to make it easier to reach the numeric goal;
6. “[i]mproving numbers by lowering standards;”
7. “[i]mproving numbers through omission or distortion of data;” and,
8. plain old cheating. (pp. 23-25)
These all generally sound like quite-familiar challenges. From my professional experience in philanthropy, there sure seem to be “unintended negative consequences of trying to substitute standardized measures of performance for personal judgment based on experience.”
Muller examined the trend of charitable foundations measuring and publicizing the percentage of recipient charities’ budgets that is devoted to administrative and fundraising costs—“overhead” or “indirect” expenses—as opposed to their activities or programs. The online GuideStar service helps givers do this, for example, and individual givers sometimes demand this data on their own, as well.
“What gets measured is what is most easily measured, and since the outcomes of charitable organizations are more difficult to measure than their inputs, it is the inputs that get the attention,” he concludes. “For most charities, equating low overhead with higher productivity is not only deceptive but downright counterproductive. … [T]he assumption that the effectiveness of charities is inversely proportional to their overhead expenses leads to underspending on overhead and the degradation of organizational capacities ….”
Someone, preferably someone with or informed by Muller’s common-sense outlook, should conduct similar studies of the use of metrics in philanthropy, overall or programmatically. All eight of his specific recurring flaws occur there, and the nonprofit sector would benefit overall from such new knowledge.
A stronger initial insistence on metrics might have prevented the kind of patient conservative grantmaking that gave rise to the successful school-choice and charter-school reforms at the K-12 level, for instance. Data “proving” progress didn’t come quickly; luckily, patience in the face of demands for data provided an opportunity for the policy to advance, albeit incrementally.
From the same standpoint of a conservative policy-oriented giver, have metrics yielded success in furthering higher-education reform, work-based welfare reform, individualized Social Security accounts, free-market environmentalism? How about limited government and the rule of law? Have they helped change the culture, which some givers see as their aim?
If so, how? If not, why not?
Questions and reminders
Muller provides a checklist of questions and reminders, either for any such ex post study or for any prospective use of metrics by and for givers:
1. “[w]hat kind of information are you thinking of measuring?;”
2. “[h]ow useful is the information?;”
3. “[h]ow useful are more metrics?;”
4. what are the encumbering “costs of not relying upon standardized measurement?;”
5. “[t]o what purposes will the measurement be put, or to put it another way, to whom will the information be made transparent?;”
6. “[w]hat are the costs of acquiring the metrics?;”
7. “[a]sk why the people at the top of the organization are demanding performance metrics;”
8. “[h]ow and by whom are the measures of performance developed?;”
9. “[r]emember that even the best measures are subject to corruption or goal diversion;” and,
10. “[r]emember that sometimes, recognizing the limits of the possible is the beginning of wisdom.”
(All emphases in original.)
Givers should check this list before they prepare and then dutifully fill out and submit—or have others fill out and submit—any more required forms and attached spreadsheets.
A non-numericizable best practice
Finally, and most valuably, Muller’s frustration-borne Tyranny of Metrics reminds us of the importance of a decidedly non-numericizable best practice.
It is not uncommon for those who give their own money away, or who work for those who do, to quite confidently make assertions of truth or announce judgments on the basis of what they know, or think they know. Their very position allows for this. For any human, it is a strong temptation. It encourages arrogance.
It is not unfair to ask them, or for them to ask themselves, for actual evidence supporting their assertions—including of the numerical sort.
In what is too often their pretense to truth, however, metrics can be arrogant, as well. As Muller shows, the numbers, and those who push the numbering, sometimes need the same thing all humans do, too.