The wild claims at the heart of Elon Musk’s OpenAI lawsuit

Elon Musk started the week with a testy comment on X about his difficulty setting up a new laptop running Windows. He ended the week by filing a lawsuit accusing OpenAI of recklessly developing human-level artificial intelligence and handing it over to Microsoft.

Musk’s lawsuit targets OpenAI and two of its top executives, CEO Sam Altman and President Greg Brockman, who co-founded the company with the rocket and automotive entrepreneur in 2015. A large part of the case revolves around a bold and questionable technical claim: that OpenAI has developed so-called artificial general intelligence (AGI), a term commonly used to refer to machines that can comprehensively match or surpass humans.

The case alleges that Altman and Brockman violated OpenAI’s original “founding agreement” with Musk, which committed the company to develop AGI openly and “for the benefit of humanity.” Musk’s lawsuit alleges that the company’s for-profit arm, formed in 2019 after he parted ways with OpenAI, instead created AGI without proper transparency and licensed it to Microsoft, which has invested billions of dollars in the company. It asks that OpenAI be forced to release its technology publicly and be barred from using it to financially benefit Microsoft, Altman, or Brockman.

“On information and belief, GPT-4 is an AGI algorithm,” the lawsuit states, referring to the large language model behind OpenAI’s ChatGPT. It cites studies that found the system could achieve passing scores on the Uniform Bar Examination and other standardized tests as evidence that it surpasses some fundamental human abilities. “GPT-4 is not just capable of reasoning. It is better at reasoning than average humans,” the suit claims.

Although GPT-4 was hailed as a major breakthrough when it was launched in March 2023, most artificial intelligence experts do not consider it evidence that AGI has been achieved. “GPT-4 is general, but it’s certainly not AGI in the way people typically use the term,” said Oren Etzioni, an emeritus professor at the University of Washington and an expert on artificial intelligence.

“This would be considered a wild claim,” said Christopher Manning, a Stanford University professor who specializes in artificial intelligence and language, of the AGI assertion in Musk’s lawsuit. Manning said there are divergent views in the AI community on what constitutes AGI. Some experts may set the bar lower, arguing that GPT-4’s ability to perform a wide range of functions justifies calling it AGI, while others prefer to reserve the term for algorithms that outsmart most or all people at more or less anything. “By that definition, I think we very clearly don’t have AGI, and we are in fact quite far from it,” he said.

Limited breakthrough

GPT-4 has won attention, and new customers for OpenAI, because it can answer a wide range of questions, whereas older AI programs were typically dedicated to specific tasks like playing chess or labeling images. Musk’s lawsuit cites a paper published by Microsoft researchers in March 2023 asserting that “given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.” Impressive as GPT-4’s capabilities are, it still makes mistakes and has significant limitations in its ability to correctly parse complex problems.

“I have the sense that most of us researchers think that large language models [like GPT-4] are a very significant tool for allowing humans to do much more, but that they are limited in ways that keep them far from standalone intelligence,” said Michael Jordan, a professor at the University of California, Berkeley, and an influential figure in the field of machine learning.

Jordan added that he prefers to avoid the term AGI entirely because it is so vague. “I’ve never been happy with the concept of ‘AGI,’” he said, noting that Musk’s statements about artificial intelligence have rarely seemed calibrated or grounded in research reality.
