Governance must keep pace with innovation
Regardless of regulation, some things remain within a company's control
We are now months removed from the board drama at OpenAI. Still, in case you missed it, it was an excellent example of governance tension at work. It may also point to bigger questions about the nature of digital innovation, purpose, profit, governance, and regulation.
Purpose and Profit
OpenAI’s management structure is distinct. A non-profit board is responsible for ensuring “that artificial general intelligence (AGI) benefits all of humanity.”
What OpenAI is doing today with ChatGPT and other Generative AI models is impressive. AGI, however, is the point at which machine intelligence surpasses human intelligence, and we're not quite there yet. Getting there is going to take a lot of investment.
And so, OpenAI’s non-profit and mission-locked governance structure, which was in place from the beginning, shifted when OpenAI needed to raise capital and create a for-profit subsidiary to pursue AGI. The non-profit board retained control to safeguard the company’s mission.
This unique structure was announced in a blog post from 2019, which repeatedly states that the mission and company charter come first.
In one section, the company’s charter states (emphasis added):
We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project.
So, it stands to reason that OpenAI recognizes the safety and humanitarian concerns of AGI development and has built a governance structure to match. The tension between the pursuit of AGI, those safety concerns, and the introduction of profit is an interesting mix. While B-Corps lock in their mission through official statements and explicit declarations of intent, OpenAI relies on this governance model instead.
Late last fall, board members privately expressed concerns about the CEO, Sam Altman, and used this governance structure to remove him. Employee (think stakeholder) support for Altman was swift and vocal, and he was reinstated. Afterward, two board members, Helen Toner and Tasha McCauley, left. This May, another board member and co-founder, Ilya Sutskever, left as well. Several departing AI researchers also expressed concerns, which led to the creation of a new safety team led by Altman.
With all this uncertainty and the backdrop of a for-profit entity seemingly bumping into a mission-locked board, many questions are being asked about whether the company can stay true to its mission while pursuing profits. Toner and McCauley have publicly stated:
…we believe that self-governance cannot reliably withstand the pressure of profit incentives.
The WSJ raised even more questions about the OpenAI CEO's investments in AI companies in yesterday's article "The Opaque Investment Empire Making OpenAI's Sam Altman Rich." The article called out Altman's private investments and potential conflicts of interest and concluded with a quote from Altman:
The way we plan to deal with this is with full disclosure and leaving decisions about how to manage situations like these up to the board.
It feels like we were at exactly this point with the board last November. Then again, with only one of the originally concerned board members left, maybe we aren't.
Ultimately, OpenAI is a unique example of leading with a purpose (the benefit of humanity) and a goal (AGI), only to find that neither could be achieved without enormous amounts of capital. While B-Corps pursue purpose and profit in tandem, few companies are dealing with the billions of dollars OpenAI suddenly finds itself with, or with the world-changing opportunity it represents.
The speed of technology dictates the speed of governance
I like Klaus Schwab’s framing that technology has ushered in a new age for humanity, the Fourth Industrial Revolution. Schwab wrote a book about this in 2017, but with the rise of AI, is it already out of date? It seems that micro-revolutions are happening quickly in this age, from technology’s underpinning of business in the late 1990s to its ubiquitous presence after the iPhone in 2007, through the cloud in the 2010s, and now to Generative AI models and quantum computing with AGI on the horizon.
These latest technologies and the march to AGI represent a significant shift into the unknown. Their application, built on previous technologies like the Internet and the cloud, makes these algorithms incredibly accessible and democratized.
With questions about self-governance surrounding the development of these tools, is regulation part of the answer?
Over the past few weeks, two former OpenAI board members, Toner and McCauley, have resurfaced in the news. They first appeared as co-authors of an article in The Economist, "AI firms mustn't govern themselves, say ex-members of OpenAI's board." The TED AI Show also interviewed Helen Toner on related topics.
Both point out the critical role that regulation needs to play in AI:
But even with the best of intentions, without external oversight, this kind of self-regulation will end up unenforceable, especially under the pressure of immense profit incentives. Governments must play an active role.
Even in a counter-piece written by two current OpenAI board members, Bret Taylor and Larry Summers, we find agreement:
…we share Ms Toner’s and Ms McCauley’s view—and the company and Mr Altman have continually stated—that the evolution of AI represents a major development in human history. In democratic societies, accountability to government and government regulation is essential.
The refrain for AI regulation also comes from companies like Microsoft (where I work, if you forgot) and from individuals like Max Tegmark and the CEO of Y Combinator. Still, there is a difference between regulating the development of a technology and regulating its use.
Regulation has always played an essential role in how companies manage and use developed technologies. In 1996, the US Congress passed the Communications Decency Act (CDA) to regulate indecency on the Internet, perhaps the closest analog to the unknown we now face. Section 230 included provisions protecting service providers from liability for content published by their users. In practice, this framework largely left the Internet unregulated, allowing for innovation. The FCC working paper "Digital Tornado: The Internet and Telecommunications Policy" defended the approach and the agency's light touch:
In the area of telecommunications policy, the Federal Communications Commission (FCC) has explicitly refused to regulate most online information services under the rules that apply to telephone companies. Limited government intervention is a major reason why the Internet has grown so rapidly in the United States. The federal government's efforts to avoid burdening the Internet with regulation should be looked upon as a major success, and should be continued.
This particular perspective was published in 1997, yet the statement still holds…at least for the Internet. That light touch let the Internet grow into what it is today, both the good and the bad. In this case, regulating the technology's use rather than its development allowed it to flourish.
Other regulations have intersected with digital technology, including SOX and GDPR, and companies may create policies around its use, but I can't think of one that regulates the development of a technology itself. So, let's linger on the Internet and the CDA.
Current AI capabilities and quantum computing are similar to the Internet in some ways and differ in others.
Both had/have the potential to completely transform our world.
Both had/have risks and opportunities for companies, citizens, and governments.
Both had/have democratized access to powerful capabilities.
Companies struggled with whether or not to embrace the technology, as its potential was/is undefined.
Yet, the Internet was created by academia, standardized by the government, and accelerated through the private sector. That acceleration took decades to unfold.
While academia and governments play a role in AI, only the private sector can bear the costs of developing new AI models. Individual technology companies control AI’s development.
…and things are moving way faster with AI than they did with the Internet. Hence, new regulations are emerging and the calls for more are growing.
For example, some regulations control the use of AI, like the EU AI Act and the White House’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Yet, both call for the same thing at this point from companies using AI: governance.
This should be unsurprising; until we tip over into AGI, the best we can do is come up with something akin to Asimov's Laws of Robotics.
While regulations may help drive governance models, existing laws might have more of an impact until regulators can see how AI is applied. For example, Generative AI models, built on mountains of internet data, will likely soon be tested against existing copyright protections. Meanwhile, we don't know what's coming next.
So, how could the development of AI be regulated? Perhaps closer oversight is warranted?
It’s about the pace
One thing seems inevitable: as we continue to innovate around new digital technologies, the pace is accelerating, and that pace is outside any company's control. Even if a company ignores a new technology, it might find itself affected by competitors and stakeholders who adopt it, or want to.
However, governance is within a company's control, regardless of what regulatory bodies do. If your board or management team isn't focused on these issues in some way, the opportunities to capture that innovation will quickly turn into risks.
Without careful self-governance and board oversight, a company will misstep in this fast-paced world of innovation, creating trust issues and ultimately damaging profit. That risk grows in line with the opportunity new technologies represent.
And so, I might propose a new ESG axiom:
Technology’s pace is increasing, and governance must keep up with innovation.