This week, partners in Climate Action Against Disinformation (CAAD) released a report that maps the risks artificial intelligence (AI) poses to the climate crisis. CAAD is a coalition of over 50 leading climate and anti-disinformation organizations demanding robust, coordinated and proactive strategies to match the scale of the threat posed by climate misinformation and disinformation.
Sustainability-focused organizations are embracing AI in efforts to do everything from eliminating greenwash from their communications and food waste from their operations to creating ethical, transparent supply chains and getting a leg up on other common sustainability challenges. But caveats around AI’s explosive growth were a major topic of discussion at the most recent UN Forum on Human Rights; and concerns over the intertwined risks of AI-driven misinformation and disinformation topped the World Economic Forum’s 2024 Global Risks Report.
“The skyrocketing use of electricity and water, combined with its ability to rapidly spread disinformation, makes AI one of the greatest emerging climate-threat multipliers,” said Charlie Cray, Senior Strategist at CAAD member Greenpeace USA. “Governments and companies must stop pretending that increasing equipment efficiencies and directing AI tools towards weather-disaster responses are enough to mitigate AI’s contribution to the climate emergency.”
Among the topline red flags presented in the CAAD report:
- AI systems such as ChatGPT require an enormous amount of energy and water to operate: in 2023, Alphabet/Google chairman John Hennessy told Reuters that each new AI search query carries roughly 10 times the energy cost of a traditional Google search. And consumption is expanding quickly: Statista expects the global artificial intelligence market to show an annual growth rate (CAGR 2024-2030) of 15.83 percent, resulting in a market volume of US$738.80bn by 2030, with the US the largest market (US$106.50bn in 2024). (A quick arithmetic check of that projection follows this list.)
- Deepfakes created with generative AI will present new challenges to maintaining credible public discourse and democracy. Generative AI has the potential to turbocharge climate disinformation, including climate change-related deepfakes, ahead of a historic election year in which climate policy will be central to the debate.
-
The current AI policy
landscape reveals a
concerning lack of regulation on the federal level, with minor progress made
at the state level — relying on voluntary, opaque and unenforceable pledges
to pause development or provide safety with its products.
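As a back-of-envelope check on the Statista projection above: compounding at 15.83 percent over the six years from 2024 to 2030 multiplies the starting value by about 2.4, which implies a 2024 global market of roughly US$306bn. That base figure is inferred here from the stated endpoints, not taken from the report; a minimal sketch of the arithmetic, under that assumption:

```python
# Back-of-envelope check of the Statista CAGR projection (illustrative only).
# Assumption: six compounding years (2024 -> 2030); the 2024 global base is
# not stated in this article and is inferred from the 2030 endpoint.
cagr = 0.1583          # 15.83% compound annual growth rate
v_2030 = 738.80        # US$bn, projected 2030 global market volume
years = 6

growth_factor = (1 + cagr) ** years      # ~2.42x over the period
implied_2024 = v_2030 / growth_factor    # ~US$305.9bn implied 2024 base

print(f"Growth factor 2024-2030: {growth_factor:.2f}x")
print(f"Implied 2024 global market: US${implied_2024:.1f}bn")
```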
“AI companies spread hype that they might save the planet; but currently, they are doing just the opposite,” said Michael Khoo, Climate Disinformation Program Director at CAAD member Friends of the Earth. “AI companies risk turbocharging climate disinformation; and their energy use is causing a dangerous increase to overall US consumption, with a corresponding increase of carbon emissions.”
Previously, the coalition submitted letters to President Joe Biden and Senator Chuck Schumer calling on them to incorporate climate concerns into proposed AI legislation. The letters echo recommendations made in the report, including:
- Transparency: Building on the SEC’s recent climate-disclosure mandate, companies must publicly report on energy usage and emissions produced, assess any environmental-justice concerns related to developing AI technology, and disclose how their AI models produce information in a way that prioritizes climate science.
- Safety: Companies must be able to publicly demonstrate the safety of their products for users and the environment. In addition, governments should develop standards on AI-safety reporting and invest in research that maps how AI risks accelerating the spread of climate disinformation.
- Accountability: Governments should enforce rules on investigating and mitigating the climate impacts of AI, with clear, strong penalties for noncompliance. Companies and their executives must be held accountable for any harms that occur from use of their products.
“The evidence is clear: the production of AI is having a negative impact on the climate. The responsibility to address those impacts lies with the companies producing and releasing AI at a breakneck speed,” said Nicole Sugerman, Campaign Manager at Kairos Fellowship. “We must not allow another ‘move fast and break things’ era in tech; we’ve already seen how the rapid, unregulated growth of social media platforms led to previously unimaginable levels of online and offline harm and violence. We can get it right this time — with regulation of AI companies that can protect our futures and the future of the planet.”
Sustainable Brands Staff
Published Mar 12, 2024 8am EDT / 5am PDT / 12pm GMT / 1pm CET