Full disclosure: Microsoft (where I work) has invested $10B in OpenAI, which runs ChatGPT. I have no involvement with any of that work!
My calendar is a hot mess with companies lining up to talk about sustainability and ESG. But, if there’s one topic that companies love talking about even more as of late, it is ChatGPT. Since ChatGPT hit the internet last November and people started testing it out, it has captured the world’s imagination.
In case you aren’t familiar, per ZDNet:
ChatGPT is a natural language processing tool driven by AI technology that allows you to have human-like conversations and much more with a chatbot. The language model can answer questions, and assist you with tasks such as composing emails, essays, and code.
The reception of ChatGPT on social media has spanned three basic reactions:
It’s the beginning of the end (‘lights out’ for us per the founder of OpenAI)
It will revolutionize various industries
It doesn’t understand anything (see cow eggs)
The power of ChatGPT will come through in how companies deploy it, both internally and in their interfaces with stakeholders. So, let’s look at the corporate opportunity and a non-exhaustive list of ESG considerations for companies around ChatGPT.
To be fair (and hardly unique), I chatted with ChatGPT on the topic. Where I’m quoting ChatGPT, its responses appear as blockquotes.
Opportunities
At first glance, ChatGPT is akin to an overqualified chatbot. As a result, the same opportunities to manage customer service and serve as virtual assistants exist. However, because ChatGPT can synthesize information across large bodies of text, its applicability to research and content creation is staggering. For example, this time of year brings trend papers for nearly every industry. Imagine asking ChatGPT for a single five-page document summarizing all of those trend papers, or asking it to find connections across medical journals for a specific disease.
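To make that summarization use case concrete, here is a minimal sketch of what such a deployment might look like, assuming the openai Python package and its chat completions endpoint as they existed at the time of writing. The model name, prompt, and page target are illustrative only, and a real system would need to handle the model’s context-length limits rather than concatenating every paper at once.

```python
# A minimal sketch of the trend-paper summarization idea above.
# Assumes the openai Python package (pip install openai) and an API key;
# prompt wording and model choice are illustrative, not prescriptive.
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

def summarize_reports(report_texts, max_pages=5):
    """Ask the model for a single brief synthesizing several trend papers."""
    combined = "\n\n---\n\n".join(report_texts)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": (f"Summarize the following industry trend papers "
                         f"into a single, roughly {max_pages}-page brief, "
                         f"highlighting themes that recur across papers.")},
            {"role": "user", "content": combined},
        ],
    )
    return response.choices[0].message["content"]
```

In practice, the interesting engineering is in the step this sketch skips: chunking and retrieving the right passages so the model sees the connections without exceeding its context window.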
The potential for ChatGPT to find connections in large datasets and make sense of data cannot be overstated. Still, it will fall to companies and organizations to uncover the most material opportunities to pursue with the technology, so I won’t be getting into those here. Instead, check out this TWIMLAI podcast for some great opportunities around generative AI.
Yet, as I’ve written previously, technology is a risk and opportunity that intersects with ESG. So let’s organize a bit around some universal considerations for technologists and boards as they feel pressure to adapt to this new technology.
Environmental
This past week, Kate Crawford (also of Microsoft) posted an interesting tweet.
At what point does any benefit of an AI system outweigh its environmental costs? Think about this in the context of financial services. The biggest ESG challenges for firms are the breadth of voluntary company disclosures and the lack of comparability between companies across the market. ChatGPT could help uncover connections we might not be able to see. If this is an impact investment fund, at what point does the carbon cost become an issue for the fund to consider?
One potential environmental risk of scaling up ChatGPT is the potential for increased energy consumption. Language models like ChatGPT require significant computing power to operate, and as such, scaling up their use could potentially lead to an increase in energy consumption and associated carbon emissions.
This aligns with Crawford’s point and needs to be a consideration. With the world’s focus on carbon disclosures, this connection should be low-hanging fruit. In talking with sustainability experts, however, I find it is often overlooked.
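To see why this is low-hanging fruit, here is a back-of-envelope sketch of the arithmetic a sustainability team could run. Every figure below is a placeholder assumption for illustration; real per-query energy numbers for ChatGPT are not public.

```python
# Illustrative only: these inputs are assumptions, not measurements.
queries_per_day = 10_000_000        # hypothetical daily query volume
energy_per_query_kwh = 0.003        # assumed electricity per query
grid_kg_co2_per_kwh = 0.4           # assumed grid carbon intensity

daily_kwh = queries_per_day * energy_per_query_kwh
daily_tonnes_co2 = daily_kwh * grid_kg_co2_per_kwh / 1000  # kg -> tonnes

print(f"{daily_kwh:,.0f} kWh/day, roughly {daily_tonnes_co2:,.0f} tCO2e/day")
```

Even with generous assumptions, the exercise shows how quickly inference at scale becomes a line item in a carbon disclosure.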
Social
On the social side, there are several considerations for ChatGPT. The first is a well-known risk with AI models: bias. Unfortunately, there isn’t just one kind of bias, either. The WEF has a good write-up, though it is not an exhaustive list.
At Microsoft, we’ve aligned around a Responsible AI framework that includes fairness and inclusiveness, reliability and safety, transparency, privacy and security, and accountability. The social element of AI comes into focus in this statement:
That means keeping people and their goals at the center of system design decisions and respecting enduring values like fairness, reliability, and transparency.
I couldn’t agree more. In asking ChatGPT about the intersection with ESG, it recognized the risk of its model being integrated into decision-making processes, but it also surfaced another type of stakeholder impact.
The use of language models like ChatGPT has the potential to automate tasks and processes, which could result in job displacement in certain industries. This creates a social risk, as it could potentially lead to economic hardship for individuals who lose their jobs and face difficulty finding new employment.
This is a big one for employees. On the flip side, deploying ChatGPT or any automation tool could also free employees from repetitive tasks, allowing them to focus on higher-value efforts.
Still, there is a chance to upend not only individuals but entire industries. For example, what does it mean for the mobile entertainment and streaming space, which consumes so much of our attention, when anyone can generate custom entertainment? Today, you can ask ChatGPT to write a story, but imagine how Generative AI could take characters you create and build a show. This leads well into our next topic.
Corporate Governance
A thoughtful approach from leadership to any technology is necessary, mainly because so much of a company's value resides in its intangibles, like technology-built intellectual property (IP). A technology's use can have positive impacts for the company or negative disruptions, which manifest as opportunities for competitors.
When asking ChatGPT about the implications of the tool across ESG, the typical response was around amplifying bias, which I felt hit the S more directly than the G. I pivoted and asked it explicitly about the dangers of assumed authority (i.e., users believing it without that trust being earned).
While social sits in the middle of ESG for a reason, these tools can also be a risk in themselves, and leaders need to understand this.
This is one reason I chose to pull quotes from ChatGPT, rather than interview it. It can be easy to trust the technology.
The use of language models like ChatGPT in decision-making or policy-making contexts raises concerns about accountability and transparency. If users assume that ChatGPT is speaking with authority, but the model is based on incomplete or biased data, it could lead to decisions or policies that are not in the public interest.
This made me wonder about the accountability of the models and the potential liability of decisions being made. While AI has been used to help inform a board’s decisions (and even inform proxy voting decisions), ChatGPT is a little bit different because of the way it contextualizes the data. It’s less like having a predictive data set and more like having a personal advisor make a convincing argument. Boards need to be wary of its use in this manner to protect themselves, the company, and their stakeholders.
Thinking outside each silo (ESG)
Lastly, I think the environmental aspect provides an excellent example of how ChatGPT could be used to solve a domain-specific challenge but miss the bigger picture. Ironically, I think ChatGPT missed the bigger picture. So I asked it this question:
Are there risks of using ChatGPT to solve a domain-specific problem, for example lowering carbon emissions?
From here, it listed five considerations: bias, data quality, human error, responsibility and accountability, and model limitations. The one I want to focus on is model limitations, which the tool framed like this:
ChatGPT is an AI model that is trained on data, and as such, it can only provide outputs based on the information it has been trained on. This means that the model may not be able to provide answers to questions or solve problems that it has not been specifically trained on.
ChatGPT got so close to my preconceived notion with this answer but didn’t quite get there. Of course, when solving any business challenge or problem, domain expertise is undoubtedly needed. Still, the broader context around the issue needs to be weighed. For example, a company may find that one particular supplier is the source of the majority of its Scope 3 emissions and decide to cut that supplier to meet its public commitments. If the company doesn’t choose engagement with or investment in that supplier, it could plunge the surrounding community where that supplier operates into poverty.
If we are hurting people as part of our decisions, what gain do we really get?
As companies look to ChatGPT and other Generative AI models to solve domain-specific challenges, there is a massive risk of overlooking the complexities and nuance. This problem grows exponentially when the focus on the issue is so great that we miss HOW problems are solved. In his book Range: Why Generalists Triumph in a Specialized World, David Epstein writes this:
Modern work demands knowledge transfer: the ability to apply knowledge to new situations and different domains. Our most fundamental thought processes have changed to accommodate increasing complexity and the need to derive new patterns rather than rely only on familiar ones.
Note that he carefully states ‘the ability to apply knowledge,’ not ‘data.’
This type of thinking drew me to ESG in the first place. As we start the next revolution around Generative AI, we have to examine its issues through a complex lens if we’re going to capture the opportunity. If we rush in, we’ll find ourselves missing the opportunity and potentially creating bigger problems than we started with.
Further Reading
LinkedIn: Chatting with ChatGPT about Complex Systems (Amy Luers)
Previous Issue: Technology's Impact on Risk and Stakeholders (substack.com)
My exclusive interview with ChatGPT about AI, climate tech and sustainability | Greenbiz