Lately, The ESG Advocate has become a bit of an ESG and AI newsletter. This should be no surprise, as AI has become the latest intense business focus. If you have been a subscriber for a while, you know I place Technology as the fourth pillar of ESG, with similar risks and opportunities. Multiple times a day, people ping me about the intersection of AI and ESG, whether it is around energy use and the resulting carbon emissions, social justice concerns, or AI governance.
Companies are trying to figure out what to do about AI, and ESG provides an interesting perspective on its use, risks, and opportunities.
This past weekend, I listened to the CXOTalk podcast, the topic of which was “The AI Imperative: New Rules of Leadership.” One of the guests, Anthony Scriffignano, a Distinguished Fellow at the Stimson Center, said this:
It's almost like they (leaders) have to say Gen AI every time they open their mouth. They have to somehow just say it's like ‘big data’ or ‘cloud computing,’ or any other disruptive evolution. Just say it enough times and you'll sound like you're doing it.
But it's not like that, because the way Generative AI is different, generative, meaning it generates its own content, is that it's going to be there whether you use it or not, and it's going to be part of how you lead, whether you think about it or not.
Of course, as long as there have been commercially focused digital technologies, there has been a hype machine. Here, Scriffignano also makes the case that just because you might ignore a technology doesn’t mean it won’t affect your company. It is up to you and your leaders to organize and build around it effectively.
As always, digital technology proves a good analogy for ESG. Whether or not you engage with ESG issues, they will affect your company.
I wrote about this broad theme in ESG Mindset, looking at how new technologies affect the company through the lens of risk. While your company might ignore Generative AI, a bad actor might use it against you. In one widely reported incident, for example, a deepfake was used to trick an employee into a fraudulent $25M transaction.
Things are moving fast with the latest AI advancements, and hearing that position, ‘whether you use it or not,’ hit me at a different intersection with ESG this weekend.
AI needs data, and companies are sharing a LOT more information lately due to ESG.
Today, companies are more transparent about their operations than ever before. Public companies have long disclosed financially material information through a mix of numbers and narrative in their 10-K. Public and private companies also publish other documents to share information with stakeholders, such as Corporate Governance frameworks, news releases, and some flavor of ESG report.
Historically, the stakeholders using this information have been ESG data aggregators and investment firms, for the purpose of investing. The quantitative data is extracted, normalized, and transformed into tables for machine learning and comparative analysis. Meanwhile, industry-focused analysts would pore over the qualitative information for nuggets to include in ESG analysis and scores for that extra edge.
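To make that quantitative step concrete, here is a minimal sketch of the kind of normalization an aggregator might run. The company names and figures are invented for illustration; real pipelines extract these numbers from filings and reports at far larger scale.

```python
import pandas as pd

# Hypothetical quantitative figures pulled from three companies' disclosures.
disclosures = pd.DataFrame({
    "company": ["Alpha Co", "Beta Corp", "Gamma Inc"],
    "scope1_tco2e": [120_000, 45_000, 310_000],
    "scope2_tco2e": [80_000, 30_000, 150_000],
    "revenue_usd_m": [4_200, 1_100, 9_800],
})

# Normalize: combine Scopes 1 and 2, then express an intensity figure
# (tonnes CO2e per $M revenue) so companies of different sizes are comparable.
disclosures["total_tco2e"] = disclosures["scope1_tco2e"] + disclosures["scope2_tco2e"]
disclosures["intensity_tco2e_per_usd_m"] = (
    disclosures["total_tco2e"] / disclosures["revenue_usd_m"]
).round(1)

# Rank the comparison table the way an aggregator or investor screen might.
print(disclosures.sort_values("intensity_tco2e_per_usd_m")[
    ["company", "total_tco2e", "intensity_tco2e_per_usd_m"]
])
```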
With large language models (LLMs) that can interpret qualitative information, the floodgates have opened: every piece of published content can be democratized, analyzed, and transformed into something else with Generative AI.
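The qualitative side is just as easy to automate now. The sketch below shows roughly how any third party could point an LLM at your published narrative; the excerpt and prompt are invented, and call_llm is a hypothetical stand-in for whichever model API the analyst happens to choose.

```python
# A rough sketch of how a third party might run a company's published ESG
# narrative through a language model. Nothing here is under the company's control.

REPORT_EXCERPT = """
In 2023 we reduced Scope 1 and 2 emissions by 12% against our 2020 baseline
and expanded supplier audits to cover 80% of spend.
"""

PROMPT_TEMPLATE = (
    "You are analyzing a company's sustainability disclosure.\n"
    "Excerpt:\n{excerpt}\n"
    "Summarize the claims, flag anything unverifiable, and rate the tone "
    "from cautious to promotional."
)

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (a hosted API, a local model, etc.)."""
    return "[model output would appear here]"

def analyze_disclosure(excerpt: str) -> str:
    # The company that wrote the excerpt has no say in this prompt, the model,
    # or the additional context the analyst blends in from other sources.
    return call_llm(PROMPT_TEMPLATE.format(excerpt=excerpt))

print(analyze_disclosure(REPORT_EXCERPT))
```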
In other words, the ESG data your company publishes will inevitably end up in the data mix, training some algorithm and being used in some way. At a minimum, this should be a new consideration as data of all types is published. After all, as Generative AI grows, it will likely pull more machine learning along with it.
It’s no longer enough to capture stakeholder sentiment; now there is machine sentiment too…and that’s just for the people and machines you want reading this data, like investors!
As a result, the data value chain surrounding a company has new attributes and new potential consumers. With these democratized technologies, anyone can use your data however they see fit.
I presented an ESG deck showing the data value chain a few years ago. ESG data starts deep in the supply chain at the furthest sourced point, as with Scope 3 emissions, and extends to include all kinds of material ESG data. As suppliers report to the company, that data comes together with operational and activity data, which employees analyze for disclosures and then report to stakeholders. These stakeholders still include shareholders, but also consumers, B2B customers, prospective and current employees, and regulators.
When I presented this value chain, the biggest challenge was keeping that data under your company’s control. Once you release the data, it is out there for everyone’s use in whatever way they want. The company’s contextualized story about its efforts would convince a stakeholder to consider the data one way or the other and take action. This still happens today within the individual’s brain, and that individual’s interpretation of the message is based on their unique experiences. Still, this individual opinion could hardly scale, even with the power of investing, the speed of social media, and the risks of cancel culture.
The machines are now listening, and if there is one thing AI does well, it is scale. Combine that with the sheer number of AIs that could be spun up by existing stakeholders or dynamic stakeholders (ones you don’t have yet), and your company may be looking at many analyses of your published data that could ignore a lot of that context while adding context from other sources.
So, there could be a risk or opportunity here for your business, regardless of whether you directly engage with Generative AI. After all, even with the recent greenhushing trend, companies can’t simply stop communicating their efforts. Your company will publish quantitative and qualitative data. Back to Scriffignano’s point: “It’s going to be there whether you use it or not, and it's going to be part of how you lead, whether you think about it or not.”
Whether or not your company uses Generative AI doesn’t matter. Someone else may use your story, including your legacy work, and because these tools are democratized, that use can scale quickly.
There are at least two takeaways for companies:
First, it is time to at least have a conversation and assess the external use of Generative AI in your industry and across the stakeholders you know and those you don’t. Protecting your intangibles, like your brand, through careful and effective communication is critical. Look beyond reputational risk to other types of risk, including the protection of intellectual property in the data you publish. Don’t overlook opportunities that might emerge from a competitive advantage, either.
Second, be consistent in your efforts and storytelling. This happens when you align ESG efforts to your purpose, and not to issues outside of it. Consistency over time is critical.
A careful examination of the technology, a strategy to address it, and consistent delivery on the surrounding ESG issues over time are how you build defensibility around your efforts and move toward resilience, regardless of external analysis.