Microsoft says it caught hackers from China, Russia and Iran using its AI tools

State-backed hackers from Russia, China, and Iran have been using tools from Microsoft-backed OpenAI to hone their skills and trick their targets, according to a report published on Wednesday.

Microsoft said in its report it had tracked hacking groups affiliated with Russian military intelligence, Iran’s Revolutionary Guard, and the Chinese and North Korean governments as they tried to perfect their hacking campaigns using large language models. Those computer programs, often called artificial intelligence, draw on massive amounts of text to generate human-sounding responses.

The company announced the find as it rolled out a blanket ban on state-backed hacking groups using its AI products.

“Independent of whether there’s any violation of the law or any violation of terms of service, we just don’t want those actors that we’ve identified – that we track and know are threat actors of various kinds – we don’t want them to have access to this technology,” Microsoft Vice President for Customer Security Tom Burt told Reuters in an interview ahead of the report’s release.

Russian, North Korean and Iranian diplomatic officials didn’t immediately return messages seeking comment on the allegations.

China’s U.S. embassy spokesperson Liu Pengyu said it opposed “groundless smears and accusations against China” and advocated for the “safe, reliable and controllable” deployment of AI technology to “enhance the common well-being of all mankind.”

The allegation that state-backed hackers have been caught using AI tools to help boost their spying capabilities is likely to underline concerns about the rapid proliferation of the technology and its potential for abuse. Senior cybersecurity officials in the West have been warning since last year that rogue actors were abusing such tools, although specifics have, until now, been thin on the ground.

“This is one of the first, if not the first, instances of an AI company coming out and discussing publicly how cybersecurity threat actors use AI technologies,” said Bob Rotsted, who leads cybersecurity threat intelligence at OpenAI.

OpenAI and Microsoft described the hackers’ use of their AI tools as “early-stage” and “incremental.” Burt said neither had seen cyber spies make any breakthroughs.

“We really saw them just using this technology like any other user,” he said.

The report described how different hacking groups used the large language models in different ways.

Hackers alleged to be working on behalf of Russia’s military spy agency, widely known as the GRU, used the models to research “various satellite and radar technologies that may pertain to conventional military operations in Ukraine,” Microsoft said.

Microsoft said North Korean hackers used the models to generate content “that would likely be for use in spear-phishing campaigns” against regional experts. Iranian hackers also leaned on the models to write more convincing emails, Microsoft said, at one point using them to draft a message attempting to lure “prominent feminists” to a booby-trapped website.

The software giant said Chinese state-backed hackers were also experimenting with large language models, for example to ask questions about rival intelligence agencies, cybersecurity issues, and “notable individuals.”

Neither Burt nor Rotsted would be drawn on the volume of activity or how many accounts had been suspended. And Burt defended the zero-tolerance ban on hacking groups – which doesn’t extend to Microsoft offerings such as its search engine, Bing – by pointing to the novelty of AI and the concern over its deployment.

“This technology is both new and incredibly powerful,” he said.

(Reuters)