Tech leaders once called for regulation of artificial intelligence. The message now is “slow down”

That night I attended a press dinner hosted by the enterprise software company Box. The other guests included the leaders of two data-focused companies, Datadog and MongoDB. Executives at these evenings are typically on their best behavior, especially when the discussion is on the record, as it was this time. So I was surprised by my exchange with Box CEO Aaron Levie, who told us he was skipping dessert because he was flying to Washington, D.C., that evening. He was headed to a special-interest marathon called “TechNet Day,” where Silicon Valley would speed-date with dozens of members of Congress to shape the legislation the (uninvited) public will have to live with. What did he want from that legislation? “As little as possible,” Levie replied. “I will be solely responsible for stopping the government.”

He was joking. Sort of. He went on to say that while regulating clear abuses of AI, such as deepfakes, makes sense, it is too early to consider restrictions like forcing companies to submit large language models to government-approved AI police, or scanning chatbots for bias or the ability to hack real-world infrastructure. He pointed to Europe, which has already adopted restrictions on artificial intelligence, as an example of what not to do. “What Europe is doing is quite risky,” he said. “There is a view in the EU that if you regulate first, you create a climate for innovation. Empirically, this has been proven to be wrong.”

Levie’s comments run counter to the standard stance of Silicon Valley’s AI elite, such as Sam Altman. “Yes, police us!” they say. But Levie points out that the consensus falls apart when it comes to what exactly the law should say. “As a tech industry, we don’t know what we’re actually asking for,” Levie said. “I have yet to be at a dinner with more than half a dozen people in AI where there was agreement on how to regulate AI. Not a single one.” Not that it matters, because Levie thinks the dream of a comprehensive AI bill is doomed anyway. “The good news is that it’s impossible for the United States to coordinate in this way. There will be no AI bill in the United States at all.”

Levie is known as an irreverent conversationalist. But in this case, he was simply more candid than many of his colleagues, whose “please police us” stance is a complicated rope-a-dope act. The only public event at TechNet Day, at least as far as I know, was a live panel discussion on AI innovation, with panelists including Kent Walker, Google’s president of global affairs, and Michael Kratsios, the most recent U.S. chief technology officer and now an executive at Scale AI. The feeling among these panelists was that the administration should focus on protecting U.S. leadership in the field. While acknowledging that the technology has its risks, they argued that existing laws largely cover the potential dangers.

Google’s Walker seemed particularly alarmed that some states are taking AI legislation into their own hands. “In California alone, there are 53 different AI bills pending in the Legislature today,” he said, and he wasn’t bragging. Walker surely knows that this Congress can barely keep the government itself running, and that the odds of the House and Senate successfully handling this hot potato in an election year are about as slim as Google rehiring the eight authors of the transformer paper.

The U.S. Congress does have AI legislation pending, and the bills keep coming, some more consequential than others. This week, Rep. Adam Schiff, D-Calif., introduced a bill called the Generative AI Copyright Disclosure Act of 2024. It would require the makers of large language models to submit to the Copyright Office “a sufficiently detailed summary of any copyrighted works used…in the training dataset.” It’s not clear what “sufficiently detailed” means. Would “we just crawled the open web” count? Schiff’s staff explained to me that the measure was modeled on a provision in the EU’s Artificial Intelligence Act.
