Welcome to the land of the ungoverned.
I’ve criticized sustainability disclosure regimes plenty. But when done right, regulation protects companies from their worst instincts: short-termism, opportunism, and irresponsibilism (let’s see if that word catches on).
Yet, in the US, policies and agencies are being hollowed out and gutted, with the White House tossing the keys to management teams to self-govern.
We’ve seen how this plays out.
The case against self-governance
The only pillar of ESG that can completely tip a company over has proven to be governance. Despite all the agita and attention around environmental and social issues, governance failures have collapsed companies and threatened the global economy, and it largely happened under self-governance with little oversight.
Enron and WorldCom were shining examples of reputation-driven stock success, cheered on by advisory firms and the financial sector. Their massive accounting scandals reflected a failure of accountability, and government policy followed: new financial auditing and internal-control rules under SOX.
A few years later, in 2008, the financial sector almost collapsed, and a few banks failed due to investments in high-risk securities, leading to Dodd-Frank and rules around capital, leverage, and risk.
Even the Canadian bread industry has had its fair share of governance issues, recently reaching a $500M settlement over a price-fixing scheme among its peers.
It isn’t that companies can’t be trusted, but when it comes to financials, or what you might call bread, those ‘isms’ tend to win out over the slow, steady, and reliable growth that self-governance is supposed to deliver.
And yet, we believe somehow that where self-governance in the corporate and banking worlds has failed, it just might finally work with artificial intelligence.
The ESG blind spot: AI self-governance is riskier
AI is moving faster than many ESG issues; it’s challenging to manage, let alone audit, and is less understood by boards than financial engineering ever was.
Governance and oversight play a critical role in AI. The board and management team must understand AI’s role, risks, and opportunities to ensure safety and responsible operations. They must empower business units and technologists with the skills and tooling to continuously measure, map, and mitigate issues and develop contingency plans, like rollbacks. After all, a misstep could result in breaking trust with stakeholders.
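As a minimal sketch of what that tooling might look like (the guardrail names, thresholds, and rollback trigger below are illustrative assumptions, not an established standard), continuous measurement can be reduced to a simple check that decides when the contingency plan kicks in:

```python
from dataclasses import dataclass

# Hypothetical guardrails: metric names, thresholds, and owners are
# illustrative assumptions, not a prescribed standard.
@dataclass
class Guardrail:
    name: str
    threshold: float  # maximum acceptable value for the metric
    owner: str        # team accountable when the guardrail trips

def breached(metrics: dict, guardrails: list) -> list:
    """Return the names of guardrails breached by the measured metrics."""
    return [g.name for g in guardrails if metrics.get(g.name, 0.0) > g.threshold]

guardrails = [
    Guardrail("bias_disparity", threshold=0.05, owner="product team"),
    Guardrail("complaint_rate", threshold=0.01, owner="customer ops"),
]
measured = {"bias_disparity": 0.08, "complaint_rate": 0.004}

issues = breached(measured, guardrails)
if issues:
    # Contingency plan: roll back to the last approved model version
    # and escalate to the accountable owners.
    print("Rolling back release; breached guardrails:", issues)
```

The point isn't the code; it's that "measure, map, and mitigate" only works when someone has defined the metrics, the thresholds, and who owns the rollback decision before the model ships.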
But, are they keeping up or getting distracted by non-material issues?
Bizarrely, the environmental pillar of AI is getting the most attention today. Since NYC Climate Week, the flood of worry about AI’s impact on carbon emissions has been nonstop, driving some companies to shelve projects entirely.
No, I’m not kidding.
What fascinates me is that, when weighed against the potential business benefits and stakeholder risks, the environment isn’t really a material AI concern for most businesses.
Now, stay with me, because I’m not making a case against sustainable AI operations.
This past week, Sasha Luccioni, research scientist and climate lead at Hugging Face, penned an opinion piece titled “We still don’t know how much energy AI consumes.” In the opening, she states:
Already, emissions by data centres needed to train and deliver AI services are estimated at around 3 per cent of the global total, close to those created by the aviation industry.
Excellent point, but here’s the rub. Most companies’ business travel (aviation), reported under the GHG Protocol’s Scope 3, Category 6, is not material. So why do we expect AI to be any different?
Yes, of course, it is essential to understand the emissions of any new technology or venture, as the MIT Technology Review recently did. Still, we can’t overlook the more material social aspects.
Many companies will deploy AI tools to engage with stakeholders, because people are at the center of the work and its effects. This includes use cases for agentic AI (where AI systems act autonomously and interact with one another). AI-driven outcomes sit squarely with stakeholders in the social pillar.
While many have shown their adeptness in understanding AI’s energy, carbon, and water usage, few have proven they comprehend accessible models, diversity considerations, and equitable outcomes.
And this isn’t limited to those exploring AI. Those leading the charge struggle, too.
Just this year, Facebook pulled its AI-character accounts after stakeholder backlash, and last week, xAI’s Grok was interjecting claims about ‘white genocide’ into unrelated conversations on X.
AI ethics and Responsible AI are starting to look like the corporate net-zero goals of the early 2020s: primarily marketing fluff.
While it’s taken us years to integrate environmental concerns into our business (and some still don’t), we’ve sidelined social accountability, which may be the more material concern, at least for companies leveraging AI. As a result, the externalities of bias, harm, and exclusion remain ignored.
The Mirage of Responsible AI
Responsible AI can be found everywhere if you go looking for it.
The accountability? Not so much.
Companies have a toolbox of publicly available Responsible AI frameworks, but as with the expression, “What gets measured, gets managed,” there’s an assumption that someone will actually take action. Just because frameworks exist doesn’t mean they will be implemented well, or that the company has the domain expertise to apply them with any depth.
A Responsible AI framework without a quality governance model is like setting a net-zero goal without a materiality assessment. It’s directionless at best, and worthless at worst.
Yet, we are pushing companies toward this loose and unclear self-governance model, and again it comes back to the theme we explored last week: regulatory pullback, driven by the belief that regulation hinders innovation.
The EU, famous for its legislative prowess, enacted the EU AI Act in 2024. The Act takes a risk-based approach to regulating AI systems: some uses are banned outright, while others face varying levels of oversight depending on their risk classification.
Surprisingly, the US just enacted the bipartisan Take It Down Act, which makes it a federal offense to post non-consensual intimate images, including AI-generated deepfakes, and requires websites to remove such content at the victim’s request. While a critical piece of legislation, we need more than reactive steps.
And a proactive approach isn’t coming.
While Europe builds safeguards and the US dips its toes into reactive ones, America is really busy building escape hatches for unchecked innovation. This past week, the “big, beautiful” tax bill, now making its way through the Senate, included a provision to prevent states from regulating AI for 10 years.
…no state or political subdivision may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems.
I doubt this section intends to promote self-governance so much as to calm fears that US innovation will fall behind China and others. Regardless, the result across corporate America will be no governance dressed up as self-governance. If it passes, this bill will introduce these externalities; whether companies recognize them or not will depend on how well they manage their business.
Real AI governance or show?
Where the US government might not be paying attention, investors are. In December 2024, FTI Consulting published a study on proxy resolutions filed around AI and their support levels. The report shows a small number of resolutions filed and mixed results overall, but the activity still represents progress.
Side note: I’d caution filers to keep these (and all) resolutions material and not based on values. There are real business risks at stake here. Be mindful of the framing.
Meanwhile, boards are also stepping up. Board oversight of AI has increased “more than 84% in the past year, and over 150% since 2022, with significant increases across every industry,” according to a recently published report by Subodh Mishra of ISS STOXX.
But…
That same study found that “Of the AI oversight functions studied, AI Ethics Board had the least growth, with the prevalence almost flat since 2023.”
So, boards claim to ‘oversee’ AI. But what does that mean if they’re not asking about algorithmic bias, stakeholder consent, or downstream harms?
Ethics should be table stakes for any self-governance model, and accountability must live far beyond the boardroom. We need an ESG mindset embedded throughout the organization: from the C-suite to business units to data scientists at the keyboard.
If boards are going to claim oversight as an effective governing body, we need more than a handful of board briefings and frameworks that push accountability to IT. We need clarity in roles, risk ownership, and escalation protocols.
A useful model to follow is the Institute of Internal Auditors’ Three Lines Model, a framework that separates accountability into three lines of defense (3LoD):
First line: Operational management, which owns and manages risk (your solution, data, and engineering teams).
Second line: Risk, compliance, and ESG/ethics functions, which support and monitor (risk management layer).
Third line: Internal audit and independent assurance, reporting to the board (if no auditing is conducted due to a lack of AI regulations, create a review board and seek its advice, or stand up internal audit procedures aligned with a Responsible AI framework).
In practice, this could mean product teams building responsibly (first), governance or ethics councils providing guidance and tools (second), and an internal review board asking the uncomfortable questions and continually monitoring (third). All of these have oversight through the governing body.
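Here is one way that clarity in roles, risk ownership, and escalation could be written down internally; a rough sketch only, where the owners, risks, and escalation paths are my own illustrative assumptions rather than anything prescribed by the IIA’s model:

```python
from dataclasses import dataclass, field

# Illustrative AI risk register mapped to the Three Lines Model.
# Owners, risks, and escalation paths are assumptions for this sketch,
# not requirements from the IIA framework or any regulation.
@dataclass
class AIRisk:
    name: str
    first_line_owner: str      # builds and manages the risk day to day
    second_line_monitor: str   # supports, challenges, and monitors
    third_line_assurance: str  # independent review, reports to the board
    escalation_path: list = field(default_factory=list)

register = [
    AIRisk(
        name="algorithmic bias in a customer-facing model",
        first_line_owner="product and data science team",
        second_line_monitor="Responsible AI / ethics council",
        third_line_assurance="internal review board",
        escalation_path=["second line", "third line", "board risk committee"],
    ),
]

for risk in register:
    print(f"{risk.name}: escalates via {' -> '.join(risk.escalation_path)}")
```

Even a simple register like this forces the questions most companies haven’t answered: who owns each AI risk, who challenges them, and where the issue goes when the first two lines disagree.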
For a deeper dive on the 3LoD model and AI, check out this 2023 paper by Jonas Schuett, Senior Research Fellow @GovAI.
Most companies are barely operating with a first line right now, let alone a second or third.
AI is moving fast. And trust, once broken, doesn’t get fine-tuned. It gets lost.
That’s your outcome, and it’s a material concern, requiring the right level of oversight.
Every model, tool, and automated decision carries stakeholder risks intersecting with reputation, the law, and operations. If you think you’re just rolling out software, think again. You're shaping behavior, driving choices, and delivering stakeholder outcomes at scale.
…and that holds for good or ill, amplified by AI’s scale and competitive pace of deployment.
What’s the value of that AI use case if it alienates your workforce, erodes customer trust, or draws media fire?
Self-governance is better than no governance. But it's not a strategy.
And when it fails, which it will here and there, we’ll look back at these years the same way we now remember the early 2000s, before SOX. Or 2008, before Dodd-Frank. Or even, yes, $500M later, while we were too busy chasing the bread.