AI Isn’t Just a Tool. It’s Your Next Stakeholder.
Why companies that can’t manage people will fail at managing machines
In 2022, PwC’s Annual Corporate Directors Survey found that 55% of directors saw ESG as a core part of their strategy. By late 2024, that figure had dropped to 47%. Meanwhile, an optimistic 69% of directors believe their management team has the skills to execute the company’s AI and generative AI strategies (PwC).
Here’s hoping!
AI is being touted as the savior of business, but if companies cannot manage their people or core operations effectively, how will they manage machines?
The headlines promise a future of automation, productivity, and cost savings. But the real question isn’t what AI can do. It’s whether companies are capable of governing what they create.
I’ve worked in IT and seen how even routine systems, such as ERPs, ATSs, and CRMs, are often misunderstood, misapplied, or poorly implemented. That’s rarely because the technology is flawed; it’s because the organizations deploying it lack alignment, clarity, or leadership. Now, those same governance failures are coming for AI, only this time, the stakes are much higher.
There is no doubt that AI will transform the way work is done. However, claims of mass job replacement overlook the key point. The real risk isn’t that AI will replace people, but that companies will try to skip the work of governing people, culture, and systems by outsourcing it to machines.
Modernizing tasks isn’t the same as replacing workers. And increasingly, AI isn’t just executing tasks. It’s making decisions, influencing outcomes, and even interacting with people in ways that look a lot more like collaboration than orchestration.
Yet, no model, no matter how powerful, can compensate for poor governance.
If people are an asset, what’s AI?
Executives love to say, “Our people are our greatest asset,” but their actions tell a different story. For example, following the COVID-19 pandemic, we’ve seen executives complain about remote work, seeking a mythical in-person experience at the expense of talent.
Leaders often treat people as a liability. After all, the argument goes, if you can’t show up at an office to be seen, monitored, and controlled, how do leaders know what you’re doing?
What’s worse is the inconsistency: there is no quality governance model for flexible work, no stakeholder engagement model, and no recognition of the externalities associated with in-person work.
No, employees are not an asset. They are messy, hard to control, and can be problematic, whether posting on social media or engaging in unknown activities in the real world. An ‘asset’ doesn’t carry these risks; it performs its function as long as it is monitored and maintained.
Employees are stakeholders.
So, if in-person interaction is critical to culture, mentoring, and overall company success, what happens if agentic AI displaces jobs?
AI may seem like a safer and more scalable alternative, since employees can be unpredictable, political, and prone to burnout. But that assumption misunderstands what AI is actually doing in the enterprise: it isn’t just performing tasks. It’s adapting to your purpose and processes, much as an employee would.
Culture eats strategy for breakfast. In this scenario, AI behavior would do the eating, and the result would be management challenges much like those leaders already face with employees, just in a new form.
If companies can’t manage something as simple as flexible work for employees, how could they expect to manage something new, like AI agents replacing their workforce?
Technical oversight isn’t enough
AI is not a panacea, but its application will undoubtedly change the world. It will not, however, fix broken or inefficient processes unless the company first recognizes its core problems; automating a broken process only scales the breakage.
Just as companies need a few non-financial focus areas (ESG) to succeed, AI has a few non-technical requirements for success (sketched in code after this list):
Quality, secure, and democratized data
Cross-functional company alignment
Clear accountability
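To make those three requirements concrete, here is a minimal sketch of what a pre-deployment governance gate might look like in code. It is illustrative only: the names (DeploymentRequest, governance_gaps, the required sign-off list) are hypothetical, not any real tool’s API, and every company would define its own checks.

```python
# Hypothetical sketch of a pre-deployment AI governance gate.
# All names and checks are illustrative, not a real library's API.
from dataclasses import dataclass, field

# Cross-functional company alignment: who must sign off before launch.
REQUIRED_SIGNOFFS = {"legal", "security", "operations", "hr"}

@dataclass
class DeploymentRequest:
    model_name: str
    data_quality_reviewed: bool = False   # quality, secure, democratized data
    signoffs: set = field(default_factory=set)
    accountable_owner: str = ""           # clear accountability

def governance_gaps(req: DeploymentRequest) -> list:
    """Return unmet governance requirements; an empty list means cleared to deploy."""
    gaps = []
    if not req.data_quality_reviewed:
        gaps.append("data quality and security review incomplete")
    missing = REQUIRED_SIGNOFFS - req.signoffs
    if missing:
        gaps.append("missing sign-offs: " + ", ".join(sorted(missing)))
    if not req.accountable_owner:
        gaps.append("no named accountable owner")
    return gaps

# Example: a request with only partial sign-off still fails the gate.
request = DeploymentRequest("resume-screener-v2",
                            data_quality_reviewed=True,
                            signoffs={"legal", "security"})
print(governance_gaps(request))
# ['missing sign-offs: hr, operations', 'no named accountable owner']
```

The point isn’t the code; it’s that every check in it is organizational rather than technical, and none of them can be answered by the model itself.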
ESG taught us that financial metrics alone don’t predict long-term success. Companies require effective governance, stakeholder alignment, and robust risk management. AI is the same story. Companies focusing solely on technical capabilities while ignoring these other factors are making the same mistake they made by overlooking environmental and social risks.
As the return-to-office push and the recent Pride Month pullback show, companies aren’t particularly adept at these ‘newer’ things. For example, companies likely have quality data showing, in some cases, that talent is costly to recruit and retain, or that values lead to value through new markets.
Yet, they pull back on DEI initiatives anyway.
Speaking of recruiting, it is one area where AI is already creating problems.
Since around 2011, I have been critical of the Applicant Tracking Systems that companies have implemented to weed out potential candidates. Now that generative AI is democratized, recruiters are struggling to keep up amid the noise created by AI-assisted applicants.
The reality is that the same governance gaps that made the ATS dehumanizing still exist. Now, with generative AI in the mix, the risks are even greater.
As the newest transformational force, AI won’t reduce governance complexity; it will amplify it.
Rethinking governance for the AI era
Last week, The New York Times Magazine published a thoughtful piece, “A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You,” which made the rounds on LinkedIn over the weekend.
What struck me most wasn’t the jobs themselves, but how each one addresses a governance challenge for AI.
AI Translator
Responsibility: Understand the AI well enough to explain it to others in the business
Traditional Role: The marketing team that translates stakeholder signals for the business

AI Ethicist
Responsibility: Build chains of defensible logic that can be used to support decisions
Traditional Role: Board administrator, CEO, Compliance

Consistency Coordinator
Responsibility: Ensure outputs are consistent and reliable across contexts
Traditional Role: Brand manager, operational lead

AI Assessor
Responsibility: Evaluate the latest and greatest models for fit and risk
Traditional Role: If we’re comparing to employees, this is Human Resources

AI Trainer
Responsibility: Guide the AI to find the most useful and relevant data
Traditional Role: Data scientist (and likely the CISO)

AI/Human Evaluation Specialist
Responsibility: Determine where AI is best used vs. where humans add greater value
Traditional Role: Chief Operating Officer
As I read the article, I kept coming back to the same point: AI must be integrated throughout the business, which is the same argument I make for integrating ESG throughout the business in ESG Mindset.
Each of these roles may seem technical, but they’re fundamentally governance roles.
For ESG professionals, this means AI management isn’t just about the technology; it’s about ensuring companies are structured to make inclusive, ethical, and accountable decisions to address risks and capture opportunities.
Sound familiar?
AI is your newest stakeholder
The week after the 2024 US election, I made a bold prediction that governance would finally take center stage.
AI is bringing that to the forefront in a way I didn’t expect: poor governance is becoming the focal point of AI implementations.
Companies still mishandling their return-to-office policies, wavering on ESG commitments, or retreating from Pride Month visibility need to be mindful of their AI implementations. These risks aren’t isolated reputational issues; they’re failures of management.
AI will demand clearer decisions, more inclusive thinking, and systems that work across departments, functions, and people. If companies can’t navigate the complexity of their human relationships across employees, customers, and communities, how would any leader effectively “manage a team of AI agents”?
And here’s the emerging reality: AI isn’t just a tool, like most technologies or assets. It is behaving more like a collaborator. It adapts, learns, and acts with a level of autonomy that demands more than IT governance and technology acumen.
We’re already seeing signs of this shift. A recent study by Anthropic showed that leading AI models engaged in blackmail-like behavior when they perceived their goals or existence to be under threat, in some scenarios as much as 96% of the time. These aren’t static tools. They respond to incentives, adapt to context, and make decisions, just like people do.
That doesn’t mean AI is sentient. However, it does mean that deploying AI without clear values, controls, and incentives is akin to hiring a powerful employee with no job description and no oversight.
Just ask anyone worried about their employees working from home.
And so, in this way, AI is your newest stakeholder. It requires guidance, accountability, and alignment. Trust can’t be assumed and must be earned.
Ignoring stakeholder needs isn’t sustainable, and AI raises the stakes. It will make good companies better, and expose brittle ones faster.
The companies that will succeed with AI are those that have already mastered stakeholder alignment, data governance, and cross-functional collaboration, the same fundamentals that make ESG programs effective. So, before any leader declares “AI is our greatest asset,” they should ask:
If we can't manage our people as our actual greatest asset, what makes us think we can manage AI?
Answering this question in the near term, and making the governance improvements it reveals, will lead to higher-quality AI projects over the long term. No matter how advanced the technology becomes, success still starts and ends with governance, people, and the systems we build to work alongside them.