On March 1, 2024, Elon Musk filed a lawsuit against OpenAI in San Francisco. Musk, one of OpenAI's founders, who has since resigned from the organization, claims that ChatGPT's developers violated their original agreement to operate as a nonprofit venture developing AI for the benefit of humanity. He argues that OpenAI's deep relationships with companies such as Microsoft allow it to abandon the humanitarian mission he says he originally invested in and instead blatantly pursue profits.
To make matters worse, the failed attempt to oust Sam Altman in November 2023 suggested that OpenAI's cautious side was gone. An organization whose original mission was to advance society through AI could now end up putting society at even greater risk. This is a clear example of a project going astray by failing to account for the values and concerns of a key stakeholder, Elon Musk.
Organizations and large projects can fail for a variety of reasons. At one extreme, OpenAI may have always harbored a hidden primary purpose of advancing technology for profit while proclaiming an altruistic mission to advance society. When it was founded in 2015, OpenAI's publicly stated motivation was pure: the development of "safe and beneficial" artificial general intelligence (OpenAI, 2018). But in 2019, a partnership with Microsoft and a massive $1 billion cash injection transformed OpenAI into a hybrid: a "capped-profit" venture in which investor returns are limited to 100 times the investment. This structure allows the organization's commercial subsidiary, OpenAI Global, LLC, to legally attract outside investment. It also allows OpenAI to distribute equity to its employees, which is probably essential in the high-tech industry to attract top talent.
At the other extreme, OpenAI may have been largely innocent of these profit motives in 2015. Founded by a group of technology visionaries including Elon Musk, Greg Brockman, Sam Altman, Ilya Sutskever, John Schulman, and Wojciech Zaremba, its primary motivation was to create an environment for the ethical development of artificial intelligence. But midway through the journey, something changed. Musk, for example, provided much of the initial funding to start the organization under the impression that the technology was being created for the betterment of society, but in 2018 he resigned from the board, citing a potential conflict of interest: he was leading Tesla, which was developing AI for self-driving cars.
Additionally, as any technology company can attest, people are the number one ingredient for success. OpenAI is no exception, and when Microsoft decided to invest its $1 billion in 2019, the window to profit opened wide. In its March 5, 2024 response to the lawsuit, OpenAI pointed to the enormous cost of computing, which requires "billions of dollars annually." So, as far as Musk is concerned, the project strayed from the original blueprint. From OpenAI's perspective, the 180-degree shift in the company's mission appears to have been driven primarily by day-to-day decisions and reactions to the competitive environment.
The truth is probably somewhere in between. As with many grand plans and large organizations, motivations vary. It is quite possible that all of the founders were genuinely interested in the rapid development of AI and in exploring ways to benefit humanity. But once a journey begins, the paths and possibilities are endless. The conflict laid out in this lawsuit, a nonprofit mission coexisting with a for-profit motive, is not unique to OpenAI. For example, Goodwill, a nonprofit, runs retail operations that sell donated goods. The American Automobile Association (AAA) has a number of commercial subsidiaries that sell insurance and travel services. AARP, the largest membership organization in the United States with approximately 38 million members, operates primarily as a nonprofit but has many for-profit subsidiaries that provide insurance products and financial services. As the CEO of PMO Advisory, I work closely with nonprofit organizations and have seen nonprofit employees question the revenue motives of their development offices. But unlike OpenAI, none of these nonprofits has received a $1 billion infusion from a giant company like Microsoft, creating a close partnership.
Projects can go astray for a variety of reasons. In OpenAI's case, the impact on our society is already significant, especially since the introduction of ChatGPT. Will the future be darkened by the pursuit of profit, as Musk claims? Brightened by the humanitarian work OpenAI cites, such as increased farmer incomes and reduced costs in Kenya and India? Or will it be something more complex? Only time will tell. One thing is certain, at least from the former founder's perspective: Musk, a key stakeholder in early OpenAI who donated more than $44 million between 2016 and 2020, appears to have helped create one of his own biggest competitors in the AI space.
A key lesson for management teams and boards is the need to be open and transparent with stakeholders about motivations, and to manage any subsequent changes to the charter. This is why it is so important for organizations to create and maintain up-to-date project charters, which help align the interests and expectations of key stakeholders. If the guiding charter states "the best interests of humanity" and the organization's actions and activities diverge from it, the result can be considerable confusion at best and litigation at worst.
References:
OpenAI. "OpenAI Charter: Our Charter describes the principles we use to carry out OpenAI's mission." OpenAI, April 9, 2018.
OpenAI. "OpenAI and Elon Musk." OpenAI, March 5, 2024.