In 12 earlier parts of this series, Claire Sommer and I developed 22 pitfalls in the sustainable business metrics field, drawing on the experiences of many, mostly non-business, fields. (Find them here.)
In the most recent article in this series, I looked at five sustainability metrics anecdotes from my experiences at the New Jersey Department of Environmental Protection (NJDEP). My hazy intuition at the time set the initial path for this series’ “Pitfalls” perspective about how we sometimes create new problems in our use of metrics (even as I still value them as tools).
To revisit, back in the mid-1990s a constituent praised sustainability indicators developed by New Jersey state government as adding order and a focal point for action to the sustainability challenge: (Now we just) “have to meet the numbers.”
At a more recent NJDEP presentation about its Sustainable Business Initiative, which encourages companies to take sustainability actions, a list of potential actions eliminated more ambitious ones if they could not (at least at that time) be measured.
This seemed like an example of the cart (measuring performance) preceding the horse (sustainability actions). But it was unclear whether that was an actual problem, or whether the sequence even matters. It took years to figure out that it does.
Pitfall 23: Don’t forget that metrics are the map, not the territory. These are not the same thing.
Regarding the first, and analogizing from K-12 education testing (which we discussed in Part 1), Arianna Huffington described what she called our obsession with testing: “It’s as if the powers that be all decided a check-up is as good as a cure.” In other words, Huffington is critiquing looking good on the test (or check-up), while placing less focus on the cure, actually making real educational progress (or, in our sustainable business world, actually getting somewhere in pursuing sustainability).
Kevin Moss asked a similar question about making the test primary: “…by focusing on the score, are we losing underlying value and meaning?”
Now in practice, there may be little operational difference between efforts to improve performance that are heavily influenced by the numbers you know you will be held accountable to, and efforts focused on achieving sustainability that just happens to be measured by numbers. They may take you to the same place.
However, it is worth recognizing that numbers are only a “best case in time” approximation to understanding the pursuit of sustainability. Over time, an increasing sensitivity (if such a culture is encouraged) should lead to the identification of gaps between how sustainability pursuits are seen today and where the company has to go, perhaps leading to better metrics that more fully express the challenge. There is a risk of a subtle loss of attention and delay in climbing the sustainability learning curve. Plus, like a hiker mistaking a cheesesteak smudge for a ridge, you can overlook the map’s imperfections in representing the territory. This is especially true if you don’t know you’re looking at a map.
In the second case, what was lost was the possible consideration by companies of more sweeping actions that, had they been listed in a “Guide,” would more likely have been undertaken, with or without an eventual way to measure them.
There seems to have been a wholly unfortunate devolution of metrics from a means of measuring performance into a brake on possible activities. It’s a particular shame when this organization’s Vision Statement — “protection of the air, waters, land, and natural and historic resources of the State” — holds the promise for much more.
Pitfall 24: When communicating the results of sustainable business metrics, don’t just cherry-pick the good ones.
The NJDEP promised that indicators would tell the public how the state was doing — the good as well as the bad — and that “there would be one less thing to argue about.” Things did not work out that way, as each side cherry-picked the indicator results favorable to its (surprise!) pre-existing viewpoint.
Similarly, most metric-laden speeches by CSOs I’ve heard are one-sided and come off as spin. Concern with credibility doesn’t seem to be a priority.
Patagonia, by contrast, is known as a sustainability leader in part because it airs its dirty laundry. This has proven useful: credibility-short companies seek it as a partner, and entrepreneurs with a solution to a problem Patagonia has admitted having know to approach it.
In this series, with the exception of the LIBOR scandal, we’ve never charged that creators and users of metrics are actually lying, explicitly or otherwise. At the risk of a bit of overlap with the book many read in their basic statistics course, Darrell Huff’s How to Lie with Statistics, we recall its opening Disraeli quote: “There are three kinds of lies: lies, damned lies, and statistics.”
However, it’s worth pointing out that lying may not be necessary to do damage. When metrics — like the statistics they often embody — are used carelessly, you can wind up with accepted results that are almost the opposite of, or at least very different from, the actual case.
So this section is not an indictment of metrics, but only of their superficial interpretation, communication and acceptance. Here are three examples:
1. Kevin Carey questioned the common view about the quality of U.S. colleges in “Americans Think We Have the World’s Best Colleges. We Don’t”: “Conventional wisdom has long held that … our colleges and universities are world class. But this view is wrong. When President Obama said, ‘we have the best universities,’ he doesn’t mean: ‘Our universities are, on average, the best’ — even though that’s what many people hear. He means, ‘Of the best universities, most are ours.’ The distinction is important.” An actual test of “problem-solving in technology-rich environments,” comparing U.S. and international colleges in OECD countries by the Program for the International Assessment of Adult Competencies, found “the U.S. battles it out for last place…”
2. Samuel Culbert writes that “Performance reviews are held up as objective assessments by the boss, with the assumption that the boss has all the answers … In a self-interested world, where imperfect people are judging other imperfect people, anybody reviewing somebody else’s performance … is subjective.” Showing again the behavior-influencing power of metrics can lead to perverse outcomes, “performance reviews corrupt the system by getting employees to focus on pleasing the boss, rather than on achieving desired results. And they make it difficult, if not impossible, for workers to speak truth to power.” Not so good when you need productive dissent to pursue sustainability.
3. Turning to the music scene, Freakonomics authors Levitt and Dubner describe a metrics misperception about why Van Halen singer David Lee Roth was noted for his prima-donna excess. It involved his demand that rock promoters provide a specific presentation of M&M’s in their dressing room. This was a small part of “a 53-page rider that laid out the technical and security specs for Van Halen concerts. Included in ‘the Munchies’ section was a requirement for ‘M&M’s (WARNING: ABSOLUTELY NO BROWN ONES).’” While the press interpreted this as “a typical case of rock-star excess,” as Roth explained it, the reality was quite different: “A great deal of structural support … was necessary to support … safety to ensure that no one got killed by a collapsing stage or a short-circuiting light tower. As a short-hand test about whether the promoters had … followed the instructions, Roth would immediately go backstage to check out the bowl of M&M’s. If he saw brown ones, he knew the promoter hadn’t read the rider carefully…” and there would be security and safety concerns to be addressed. Not so prima-donna-ish after all.
Pitfall 25: Avoid superficial interpretation of metrics that might be telling you the opposite of what is actually true.
Now, certainly some of the above examples could be debated. But metrics may not always be what they seem. So be careful with your interpretations or you might be very certain … but not very right.