
When AI and ESG collide


By Heather Clancy



Like politics or religion, artificial intelligence is a topic that elicits strong opinions.

Many in the environmental and sustainability communities sing its praises as a technology for combating climate change, citing its superhuman ability to optimize the integration of renewables into electric grids, detect deforestation and other threats to biodiversity, or drive corporate resilience planning using extreme weather models. The list of potential applications is long.

I’m definitely guilty of singing that tune. The energy management system developed by cold storage warehouse company Lineage Logistics is one of my favorite examples to extol: When I wrote about it a couple of years ago, the company had managed to cut power consumption in half for facilities where it was deployed, saving customers at least $4 million along the way. What’s not to like?

In fact, it’s unusual to find a big business that isn’t at least thinking about using AI to automate all manner of tasks that would take Homo sapiens far longer to handle manually (if they could handle them at all). At least half the executives surveyed in late 2020 by McKinsey said their companies already use AI for product development, service optimization, marketing and sales, and risk assessments.

Why does this matter for ESG concerns?

The corporate world’s embrace of AI will strain ESG strategies far more deeply than most of us think.

One place where AI will have an outsized influence almost immediately is in reporting. My guess is you’ve already read plenty of articles about how AI-endowed software applications have become central for detecting — and even deflecting — dubious claims. "We can decode what they are saying and telling us," Neil Sahota, an artificial intelligence expert who advises the United Nations on both applications and ethics, told me when we chatted about why these tools have captured so much attention. "Are [companies] really accomplishing what they say they are doing?"

Two resources being embraced by ESG analysts and fund managers for that purpose are ClimateBert, a language model trained to analyze disclosures aligned with the Task Force on Climate-related Financial Disclosures (TCFD), and the Paris Agreement Capital Transition Assessment (PACTA), created by the 2° Investing Initiative. Both use machine learning and neural networks to evaluate ESG claims far faster than any human analyst could.
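To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of language-model screening such tools perform: a pretrained classifier flags sentences in a sustainability report that read as checkable climate claims. The model name and labels below are illustrative placeholders, not ClimateBert's or PACTA's actual internals.

```python
# Minimal sketch of transformer-based screening of ESG disclosures.
# Assumes the Hugging Face `transformers` library; the model name and
# label are illustrative stand-ins, not ClimateBert's actual internals.
from transformers import pipeline

# Hypothetical text-classification model fine-tuned on climate disclosures.
classifier = pipeline(
    "text-classification",
    model="example-org/climate-claim-detector",  # placeholder model name
)

sentences = [
    "We reduced Scope 1 emissions by 40% against our 2015 baseline.",
    "Our new headquarters features an award-winning lobby design.",
    "All suppliers will be required to set science-based targets by 2030.",
]

# Flag sentences the model tags as climate claims for human review.
for sentence in sentences:
    result = classifier(sentence)[0]
    if result["label"] == "climate_claim" and result["score"] > 0.8:
        print(f"Review: {sentence} (confidence {result['score']:.2f})")
```

In practice, flagged claims would then be cross-checked against reported data, which is the "are they really accomplishing what they say" step Sahota describes.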

PACTA, along with FinanceMap, a sister resource still in beta testing, powered a recent analysis by the think tank InfluenceMap of claims made by close to 800 funds with ESG or climate-themed messaging. That analysis found that more than half the climate-themed funds included holdings that weren't aligned with the goals of the Paris Agreement. Given the pervasive concern over greenwashing, you can bet investors and other stakeholders won't be shy about using such tools to investigate ESG claims.
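For a rough sense of how such a fund-level screen tallies results, here is a toy sketch with invented holdings data; it flags funds whose portfolio weight in Paris-aligned holdings falls below an illustrative cutoff. PACTA's real methodology, built on sector-level production pathways, is far more involved.

```python
# Toy sketch of a portfolio alignment screen with invented data;
# PACTA's actual methodology is far more involved than this
# weighted-average check. All names and numbers are hypothetical.
funds = {
    "Green Future Fund": [("SolarCo", 0.6, True), ("OilCo", 0.4, False)],
    "Climate Leaders Fund": [("WindCo", 0.9, True), ("GridCo", 0.1, True)],
}

THRESHOLD = 0.7  # illustrative cutoff for "Paris-aligned" portfolio weight

for name, holdings in funds.items():
    aligned_weight = sum(w for _, w, aligned in holdings if aligned)
    status = "aligned" if aligned_weight >= THRESHOLD else "NOT aligned"
    print(f"{name}: {aligned_weight:.0%} aligned holdings -> {status}")
```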

Of course, these tools can work the other way, too. Software from companies such as Entelligent and Datamaran (and an ever-growing list of vendors) can help corporations get a better handle on their material risks related to climate change and test whether their public disclosures about them would pass muster. You can think of the folks performing these tests as sort of the ESG risk team’s equivalent of "white hats," the name used to describe software hackers who test companies’ cybersecurity defenses by attempting to break them.

AI ethics vs. ESG claims

Reporting and disclosure aside, the corporate world's embrace of AI, and the processes by which it is governed, will strain ESG strategies far more deeply than most of us currently acknowledge. Multiple factors are in play: the enormous amount of energy needed to power AI applications, concerns over algorithmic biases that discriminate against minorities and women, and questions over privacy and just how much data is collected to inform decisions.

"You could end up with a social equity issue," said Rob Fisher, partner and leader of KPMG Impact, the firm’s ESG-related services division. "If you are using AI to make decisions about people that might cause some disparate impact, how are you governing that? How much information about people is it appropriate to capture? What decisions are we going to let a machine make?"

Two of the biggest companies in tech, Alphabet's Google and Microsoft, have struggled very publicly with ethics concerns related to how other companies want to use AI. Google turned down a financial services firm that proposed using AI to make decisions about creditworthiness, out of concern that the process would perpetuate discriminatory practices. The company is also feeling a "reputational hit" from its decision to part ways with its well-regarded AI ethics chief in late 2020. Microsoft's dilemma is more clear-cut: It is a big supplier of AI to the oil and gas industry, which uses those insights to inform fossil fuel extraction decisions, a business that has caused some to question the sincerity of its broader climate strategy.

And then there's Facebook, which recently found itself apologizing for an "unacceptable error" in which its AI-driven algorithms categorized a video about Black men as being about primates. The long list of concerns about its algorithms and their potential harms to societal institutions and mental health (harms the company allegedly was well aware of) is now under investigation by a Senate subcommittee.

As the corporate use of AI becomes more commonplace, it isn’t just the tech giants that will need to justify the ethics behind how these algorithms make decisions. Apparently, though, that sort of governance is still the exception rather than the rule.

"Despite the costs of getting it wrong, most companies grapple with data and AI ethics through ad hoc discussions on a per-product basis," wrote ethics risk consultant Reid Blackman in a recent Harvard Business Review article. "Companies need a plan for mitigating risk — how to use data and develop AI products without falling into ethical pitfalls along the way."

Microsoft, Google, Twitter and other tech firms highly dependent on AI are assembling ethics teams to address collisions between AI and their ESG agendas. Can you say the same about your company?

This article is also published by GreenBiz. Energy Voices is a democratic space presenting the thoughts and opinions of leading Energy & Sustainability writers; their opinions do not necessarily represent those of illuminem.


About the author

Heather Clancy is an award-winning journalist specialising in transformative technology and innovation. As editorial director for GreenBiz.com, Heather chronicles the role of technology in enabling corporate climate action and transitioning to a clean, inclusive and regenerative economy. Her articles have appeared in Entrepreneur, Fortune, The International Herald Tribune and The New York Times.
