Microsoft and OpenAI say hackers are using ChatGPT to improve cyberattacks

Microsoft and OpenAI revealed today that hackers are already using large language models (LLMs) like ChatGPT to refine and improve their existing cyberattacks. In newly published research, Microsoft and OpenAI found that groups backed by Russia, North Korea, Iran, and China attempted to use tools like ChatGPT to research targets, improve scripts, and help build social engineering techniques.

“Cybercriminal groups, nation-state threat actors, and other adversaries are exploring and testing different artificial intelligence technologies, trying to understand the potential value to their operations and the security controls they may need to circumvent,” Microsoft said in a blog post today.

The Strontium group, which has links to Russian military intelligence, was found to have used LLMs “to learn about satellite communications protocols, radar imaging technology and specific technical parameters.” The hacking group, also known as APT28 or Fancy Bear, has been active during Russia’s war in Ukraine and was previously involved in targeting Hillary Clinton’s 2016 presidential campaign.

The group has also been using LLMs to help with “basic scripting tasks, including file manipulation, data selection, regular expressions, and multiprocessing to automate or optimize technical operations,” Microsoft said.

A North Korean hacking group called Thallium has been using LLMs to research publicly reported vulnerabilities and target organizations, to help complete basic scripting tasks, and to draft content for phishing campaigns. Microsoft said an Iranian group called Curium has also been using LLMs to generate phishing emails and even code designed to evade detection by antivirus applications. Chinese state-affiliated hackers are likewise using LLMs for research, scripting, translation, and refinement of their existing tools.

There have been concerns about the use of AI in cyberattacks, particularly as AI tools such as WormGPT and FraudGPT have emerged to assist in the creation of malicious emails and cracking tools. A senior NSA official also warned last month that hackers are using AI to make their phishing emails look more convincing.

Microsoft and OpenAI have not identified any “significant attacks” using LLMs, but the companies have shut down all accounts and assets associated with these hacking groups. “At the same time, we believe this is important research to publish, to expose the early-stage, incremental moves that we observe known threat actors attempting, and to share information with the defender community on how we block and counter them,” Microsoft said.

While the use of AI in cyberattacks appears to be limited at the moment, Microsoft did issue a warning about future use cases like voice impersonation. “AI-driven fraud is another key concern. Speech synthesis is one example, where a three-second voice sample can train a model to sound like anyone,” Microsoft said. “Even something as innocuous as a voicemail greeting can be used to get enough of a sample.”

Naturally, Microsoft’s answer is to use AI to counter AI attacks. “AI can help attackers bring more sophistication to their attacks, and they have the resources to conduct them,” said Homa Hayatyfar, principal manager of detection analytics at Microsoft. “We’ve seen this across the more than 300 threat actors Microsoft tracks, and we use AI to protect, detect, and respond.”

Microsoft is building Security Copilot, a new AI assistant designed to help cybersecurity professionals identify vulnerabilities and better understand the vast amounts of signals and data generated by cybersecurity tools every day. The software giant is also overhauling its software security following major attacks on its Azure cloud and even Russian hackers spying on Microsoft executives.
