WilmerHale’s investigation, which involved reviewing more than 30,000 documents and interviewing dozens of people, found that the four board members accurately described their reasons for firing Altman, citing his lack of candor with the board, and that they did not anticipate the firing would “destabilize the company,” according to a Friday blog post from OpenAI. OpenAI released only a summary of the findings, not the full report.
“WilmerHale also found that the previous board’s decision was not motivated by concerns about product safety or security, the speed of development, OpenAI’s finances, or its statements to investors, customers or business partners,” OpenAI’s summary said. “Rather, it was the result of a breakdown in the relationship and loss of trust between the previous board and Altman.”
OpenAI said the investigation found the board acted on its concerns in a “reduced time frame,” without advance notice to key stakeholders, a full investigation, or an opportunity for Altman to address those concerns.
New oversight
Desmond-Hellmann served on Facebook’s board of directors from 2013 to 2019, before resigning to focus on her responsibilities at the Gates Foundation. Her relationship with Microsoft co-founder Bill Gates could help steer OpenAI’s partnership with Microsoft, which has pledged $13 billion to the company. After Altman was ousted last year, Microsoft CEO Satya Nadella said he was surprised by the move. Microsoft did not immediately respond to a request for comment.
Seligman brings media and entertainment industry experience to the OpenAI board, which could prove useful as the company battles numerous lawsuits from publishers accusing it of plagiarizing their content to develop systems like ChatGPT.
Simo’s previous work at Facebook included overseeing its video projects and managing the company’s main mobile application for several years. She also serves on the board of directors of Shopify, one of the leading e-commerce software providers.
OpenAI board chairman Bret Taylor, who recently started his own generative artificial intelligence company, said at a news conference Friday that additional governance changes will help the board better manage the nonprofit as it grows to seven members. These include a new corporate governance code, a new and enhanced conflict of interest policy, and a whistleblower hotline, he said. He added that the board will continue to expand and has established a number of new committees, including one called “Mission and Strategy.”
OpenAI’s nonprofit arm had noted in regulatory filings for years that its governance documents and conflict of interest rules were available for public inspection, but the policy changed after last year’s drama, when Wired asked to see the documents. The policy change was cited by Elon Musk, who helped found OpenAI but is no longer involved with it, when he sued the ChatGPT maker last week for allegedly violating its mission.
OpenAI did not immediately respond to a request for comment Friday on whether it would issue new and updated policies. When asked for more details about the changes, Taylor told reporters, “I’m not an expert on this. A lot of our policies are public documents. I’m not sure what is and what isn’t, so I’m sorry, I can’t give you a good answer right now.”
Additional reporting by Steven Levy.
Update March 6, 2024, 7:20 PM ET: This article has been updated to include additional material from the OpenAI press conference.
Updated March 6, 2024, 6:40 PM ET: This article has been updated to include new details about OpenAI’s new board members and the company’s announcement today.