Microsoft pitches U.S. military to use Azure OpenAI’s DALL-E for combat

Microsoft pitched the Azure version of OpenAI's image generator, DALL-E, to the U.S. Department of Defense (DoD) as a battlefield tool, The Intercept first reported Wednesday. According to the report, Microsoft delivered the sales pitch for its Azure OpenAI tools in October 2023, likely hoping to capitalize on the U.S. military's growing interest in using generative artificial intelligence for warfare.

According to presentation materials obtained by The Intercept, Microsoft's pitch told the Department of Defense it could "use DALL-E models to create images to train combat management systems." The line about DALL-E's potential military applications appears on a slide titled "Generative Artificial Intelligence Using DoD Data," alongside Microsoft's branding.

Azure offers many of OpenAI's tools, including DALL-E, thanks to Microsoft's $10 billion partnership with the company. When it comes to military use, Microsoft Azure has the advantage of not being bound by OpenAI's lofty catch-all mission: ensuring that artificial general intelligence "benefits all of humanity." OpenAI's policies prohibit using its services to "harm others" or to develop spyware. However, a Microsoft spokesperson said that Microsoft offers OpenAI's tools under its own umbrella, and the company has worked with the armed forces for decades.

"This is an example of a potential use case, informed by conversations with customers about the possibilities of generative artificial intelligence," a Microsoft spokesperson said of the presentation in an emailed statement.

As recently as last year, OpenAI (not Azure OpenAI) banned the use of its tools for "military and warfare" and "weapons development," as archived versions of its policy on the Internet Archive show. However, OpenAI quietly removed that line from its usage policy in January 2024, a change first spotted by The Intercept. Days later, Anna Makanju, OpenAI's vice president of global affairs, told Bloomberg that the company was starting to work with the Pentagon. OpenAI noted at the time that several national security use cases were consistent with its mission.

“OpenAI’s policies prohibit the use of our tools to develop or use weapons, harm others, or destroy property,” an OpenAI spokesperson said in an email. “We were not involved in this demonstration and have not had conversations with U.S. defense agencies regarding the hypothetical use cases it described.”

Governments around the world appear to see artificial intelligence as the future of warfare. We recently learned that Israel has been using an AI system called Lavender to create a "kill list" of 37,000 people in Gaza, as first reported by +972 Magazine. U.S. military officials have reportedly been experimenting with large language models for military tasks since last July, according to Bloomberg.

The tech industry has certainly taken notice of the enormous financial opportunity. Former Google CEO Eric Schmidt is working on a venture called White Stork. Schmidt, who has spent years building bridges between the tech world and the Pentagon, is leading efforts to bring artificial intelligence to the front lines.

The tech industry has long been backed by the Pentagon, dating back to the first semiconductor chips in the 1950s, so it's no surprise that artificial intelligence has been embraced in the same way. While OpenAI's goals sound noble and peaceful, its partnership with Microsoft allows it to obscure those goals and sell its world-leading artificial intelligence to the U.S. military.
