The year is 1901. William McKinley is assassinated and rushed to the hospital in an electric ambulance. A year later, Theodore Roosevelt is seen in another electric vehicle, a Columbia Electric Victoria Phaeton.
Despite being cheaper to drive and maintain, these early electric vehicles gave way to combustion-fuel vehicles by the 1920s. The same kind of infrastructure problem that plagues electric cars today existed back then: a lack of access to electricity. The grid wasn’t built out then, just as charging stations aren’t ubiquitously available today.
Gasoline, on the other hand, could be contained and transported more easily. Thus, electric vehicles gave way to ones based on fossil fuels.
From the mid-1800s to the early 1900s, scientists came to understand what rising CO2 could do to our atmosphere. Yet no one could imagine how starkly the 20th century would be shaped by the internal combustion engine and our insatiable need for energy. In 1896, Svante Arrhenius, a Swedish physicist, estimated that coal burning would cause a 50% increase in CO2 in about 3,000 years. We know now that it increased all too rapidly in a single century due to the fast adoption of fossil fuels to support our modern lifestyle.
Let’s imagine an alternate past: one where the electricity infrastructure grew quickly, and the risks of carbon-heavy fuel were well known. Perhaps Franklin Roosevelt would have signed an earlier New Deal, whose support for rural electrification could have saved electric vehicles. The Rural Electrification Act, signed in 1936, helped bring electricity all over America through the 1940s.
What if just a few automotive manufacturers could see the potential dangers of climate change, understand the balance of a transition, and see the opportunity in a massively interconnected economy built on fossil fuels?
In other words, what if early automotive entrepreneurs had an ESG mindset?
Would early automobile companies have enshrined the risks of fossil fuels in their Governance policies (if they even had them)?
Well, we are seeing new risks in new technologies on the horizon, and some companies are acting accordingly.
Running a business in 2024 is and is not like 1924
Picture it: the late 2010s. Companies see how the world is changing around ESG topics. COVID and rising social and civil unrest are on the horizon. Boards and management teams monitor extreme weather and shifting stakeholder sentiment with growing concern.
The nature of business has changed in undeniable ways. With the tools and frameworks that ESG can bring, there is little excuse for not being prepared with some foresight. Stakeholders question extractive Environmental and labor models, and risks lie in ESG topics and interconnected crises. Running a business is hard and has no guarantees, but new perspectives can help.
Unfortunately, management teams and boards have yet to pivot toward an ESG mindset, but there is one interesting place where this is happening.
Startups and entrepreneurs are trying to figure out what to do around ESG because they understand that when trying something new, there are risks and opportunities that can set them back if not addressed upfront. Sometimes, they use an ESG mindset as an opportunity to show investors they’ve carefully considered their business issues.
More specifically, some AI companies recognize the risks of Artificial General Intelligence (AGI), where an AI surpasses human intellect, and other safety and digital privacy risks. These companies are proactively setting up Governance structures around potential Social outcomes accordingly.
As with the combustion-fuel automobile, we are potentially at another inflection point of transformative technology.
For example, OpenAI’s structure was created with a non-profit board to “build artificial general intelligence (AGI) that is safe and benefits all of humanity.” Per OpenAI’s Our Structure page:
Since the beginning, we have believed that powerful AI, culminating in AGI—meaning a highly autonomous system that outperforms humans at most economically valuable work—has the potential to reshape society and bring tremendous benefits, along with risks that must be safely addressed.
That focus on risk has been on record since 2016 and planned for accordingly, at least in text. Over time, it became clear that immense capital would be needed to meet the company's goals. In 2019, OpenAI added a ‘capped profit’ company to deliver that capital. More recently, when risks emerged, the board acted to remove Altman, but that’s another story. Still, it shows that Governance policies are only as good as the ability to execute them.
Despite its flaws, OpenAI has a Governance structure in place and isn’t the only AI company recognizing the risks.
Anthropic is another AI company “dedicated to building systems that people can rely on and generating research about the opportunities and risks of AI.” It is a Delaware Public Benefit Corporation, a structure that balances financial returns with stakeholder interests. This structure intentionally allows the company to pursue non-financial activities and differs from the non-profit structure of OpenAI.
The company also has a Long-Term Benefit Trust with independent trustees. “The Trust must use its powers to ensure that Anthropic responsibly balances the financial interests of stockholders with the interests of those affected by Anthropic’s conduct and our public benefit purpose.” Anthropic’s page also admits that this Governance structure is experimental.
Of course, it is. Most companies don’t lead with potential world-changing technologies.
Meanwhile, just last week, Ilya Sutskever, a co-founder of OpenAI, launched a new company, Safe Superintelligence Inc. There is very little information about the company other than a letter-like statement on its new homepage, which states:
We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.
This way, we can scale in peace.
Safety is clearly a priority, aside from the possible little jab at OpenAI’s model at the end.
Across all three companies, there is clear awareness and acknowledgment of the potential risks over the long term with new AI models. Well-maintained Governance structures are necessary to prevent bad things from happening, but they offer no guarantees. Instead, Governance brings thoughtful policies and procedures and is the guidepost companies need to operate in a civil society.
The benefits and dangers of hindsight
Let’s go back to where we started and revisit the combustion engine. The combustion engine has enabled the delivery of modern medicines and the physical connection of people across cultures. Yet, it has also been the engine of war, allowing the rapid movement of large artillery and weaponry.
And, of course, it has dramatically accelerated Environmental and Social crises through climate change.
Fossil fuels have enabled all aspects of our modern lives, for good or ill. Industries have been massively successful on their back, problems and all.
Recently, it has been reported that OpenAI’s CEO, Sam Altman, has told investors there are discussions to drop the non-profit part of its Governance model and make the company for-profit only. The markets and automotive leaders didn’t realize the choice they were making by pursuing the path of least resistance. In starting with purpose and then removing that purpose for profit, Altman most certainly does.
A century later, we stand again on a precipice of massive change that is difficult to see. The temptation to scale capital and profit quickly to achieve technologies like AGI can sweep Governance controls away in its wake and create new risks.
In case you forgot, I often write, ‘Governance is the one thing that can take you down.’
While determining how a world-changing technology will impact the world a century from now is nearly impossible, the short-term dangers are pretty well-known, even by industry. Risks do not disappear as Governance policies shift, just as staying silent on ESG issues doesn’t remove risk.
This up-front recognition of risk appears to be something new, potentially an ESG-first approach, but it isn’t only about quality Governance. The recognition of a Social concern is the ESG mindset.
It took decades for the Environmental risks of fossil fuels to play out, and still, automotive manufacturers and buyers are catching up, admittedly a slow burn.
AI may not wait that long.