Bridgestone's Sustainability Hub:
A Dialogue on Solving Survey Fatigue, Part 2

How do companies deal with mushrooming sustainability data requests coming in from all directions – raters, investors, B2B customers? In Part 1 of this dialogue, Bridgestone Americas’ Director of Environmental Affairs, Tim Bent, discussed the company’s Sustainability Hub, developed with the help of PivotGoals project manager Jeff Gowdy to interface Bridgestone’s environmental, social, governance and economic data with incoming questionnaires.

Gowdy and Bent are considering opening up the Hub for others to use, and have invited others to participate in a collaborative inquiry in a virtual ThinkTank. Click here to join the ongoing dialogue.

Bill Baue: This brings us to the point of focusing on the findings — when you looked at all the information being requested, what trends did you discover?

Tim Bent: Several key trends emerged:

  • Wide Variability: The metrics requested by raters and customers vary enormously. Each outside inquirer asked for a different set of metrics, so each time we added a new “Source” (either a Rater or a Customer), less than half of its required metrics were already in the Hub; we consistently had to add over 50% of each Source’s metrics. We started with 12 Sources that yielded 420 metrics. Now that we have 25 Sources we are seeing more convergence, but the Hub has grown to 864 metrics.
  • Expansive Scope: The questionnaires include far more than environmental information (note: I am the Director of Environmental Affairs for BSAM). They spanned social, governance, product, operations, supply chain, and other issues. We needed teammates from 10 different company departments, including HR, Procurement, and Marketing, to get the answers needed for the requested metrics. This is not just an environmental issue; it is business-wide, affecting crucial activities such as RFPs, follow-on Customer surveys and inquiries, market analyst inquiries for indices (e.g. DJSI), rankings and awards (e.g. Newsweek Green Rankings), and voluntary protocols (e.g. CDP).
  • Intricacy of Requests: Nearly everyone wants information and data on carbon emissions and waste produced/recycled, but they usually ask for it in slightly different ways. This makes it difficult to provide a precisely accurate response and points to the need for a standard, from our perspective if only for efficiency in navigating this maze of reporting.
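The convergence pattern Bent describes in the first bullet can be illustrated with a small sketch. This is not Bridgestone's actual system; the Source names and metric identifiers below are invented, and the Hub is modeled simply as a set of metric names, with each new Source checked for overlap before its metrics are merged in.

```python
# Illustrative sketch only: a "Hub" as a set of metric names,
# and each incoming Source checked for overlap before merging.
# All Source names and metrics here are hypothetical.

hub = set()
sources = {
    "Rater A": {"scope1_emissions", "scope2_emissions", "water_use"},
    "Customer B": {"scope1_emissions", "waste_recycled", "supplier_wages"},
    "Rater C": {"scope2_emissions", "waste_recycled", "board_diversity"},
}

for name, metrics in sources.items():
    already_in_hub = metrics & hub      # metrics this Source shares with the Hub
    new_metrics = metrics - hub         # metrics the Hub must add for this Source
    pct_new = len(new_metrics) / len(metrics)
    print(f"{name}: {len(already_in_hub)} already in Hub, "
          f"{len(new_metrics)} new ({pct_new:.0%})")
    hub |= new_metrics                  # grow the Hub with the new metrics

print(f"Hub now holds {len(hub)} distinct metrics")
```

Even in this toy example, each new Source contributes mostly new metrics at first, mirroring the "adding over 50% from each Source" experience before convergence sets in.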

Baue: What issues surfaced as the focus of most attention?

Bent: A couple:

  • Quality: Keeping the "answers" to each of the metrics in the Hub up to date is a daunting task. We haven't answered all 864 metrics; we decided to answer the ones that appear in a majority of the Rater and Customer inquiries. The first year the initial filter yielded 81, as I previously described. The second pass yielded 124. Then we annually review and update any information that changes, e.g. environmental impact metrics like Scope 1 emissions.
  • Categorization: We have attempted to overlay an "ESG and F" (Environmental, Social, Governance, and Financial) categorization of the metrics so as to self-grade our reporting and see where we need to improve. But we've found it very difficult. For example, is ISO 14001 compliance an "E" or a "G"? Is ensuring suppliers pay fair wages an "S" or an "F"? This is another aspect that would benefit from further convergence and standardization.
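The majority filter Bent describes under Quality can also be sketched briefly. Again, this is a hypothetical illustration, not Bridgestone's implementation: each Source's request is a set of metric names, and only metrics requested by more than half of the Sources are kept for answering.

```python
from collections import Counter

# Illustrative sketch only: keep the metrics that appear in a
# majority of Source requests. All data here is invented.

requests = [
    {"scope1_emissions", "waste_recycled"},
    {"scope1_emissions", "water_use"},
    {"scope1_emissions", "waste_recycled", "supplier_wages"},
]

# Count how many Sources request each metric
counts = Counter(metric for source in requests for metric in source)

# Keep metrics requested by more than half of the Sources
majority = len(requests) / 2
to_answer = {metric for metric, n in counts.items() if n > majority}

print(sorted(to_answer))
```

Under this kind of threshold, the answer set stays a fraction of the full metric inventory, consistent with answering 81 and then 124 of the 864 Hub metrics rather than all of them.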

Baue: And what is not covered by these surveys that you think should be?

Bent: The main one — that we identified to you and Andrew Winston, and which was presented and discussed at Sustainable Brands last fall at MIT — is the lack of science-based goals in the Surveys, but I defer to Jeff Gowdy on that one.

Jeff Gowdy: Thanks, Tim. Bill, the analysis that we did in September 2014 was to compare the Bridgestone Sustainability Hub metrics from Customer Surveys against the definitions for "Science-based goals" from PivotGoals.com. Pivot Goals is a project of Winston Eco-Strategies that I have helped manage for the past three years. It is a compilation of the global Fortune 500's publicly available sustainability (ESG) goals across 29 different ESG categories, e.g. GHG Emissions.

The Pivot Goals definition of "Science-based" is "based on scientific knowledge of how human impacts on vital capitals in the world can affect the status of such resources. In the case of impacts on natural capitals, in particular, they also reflect an understanding of the fact that such capitals are limited and in many cases under threat." In other words, explicitly aligned with the best currently available science for that metric. An example "Science-based" goal is Hitachi's "Reduce annual CO2 emissions by 100 million tons for Hitachi products/services based on the IPCC goals of 80% reduction in carbon by 2050."

Pivot Goals also defines "Science-equivalent," which is indirectly in line with the best available science but without any technical or explicit reference to science. An example "Science-equivalent" goal is Walmart's "Be supplied by 100% renewable energy."

For a reference point, of the global Fortune 200's 1,621 goals in eligible categories, only 155 were "Science-equivalent" and only 10 were "Science-based" (data from 9/14).

Upon reviewing the 12 Customer Surveys in the Bridgestone Sustainability Hub, there are no science-based metrics requested. I also re-reviewed all metrics in the Bridgestone Sustainability Hub that are from Customer Surveys and found none that are definitively science-equivalent. One could make a specious argument for several of the metrics but, again, none are definitive in my opinion.

Finally, I compared the metrics requested from Raters in the Hub against the aforementioned definitions of "Science-based" and did not find any that fit. However, the metrics for Raters were pulled over the past three years and some are up to three years old (because no new modifications have emerged, in some cases). And when I pulled these metrics, I often edited them down for conciseness. So, there is a possibility that some of the Raters' metrics are indeed Science-based, but I just can't say that with certainty. I will be working on updating two of the Raters' metrics in the Hub over the next month and can weigh in more on those, as we continue this conversation.