Microsoft says it caught hackers from China, Russia and Iran using its AI tools

State-backed hackers were using OpenAI tools to gather intelligence, run phishing scams and write more convincing e-mails. PHOTO: REUTERS

WASHINGTON - State-backed hackers from Russia, China and Iran have been using tools from Microsoft-backed OpenAI to hone their skills and trick their targets, according to a report published on Feb 14.

Microsoft said in its report that it tracked hacking groups affiliated with Russian military intelligence, Iran’s Revolutionary Guard, and the Chinese and North Korean governments as they tried to perfect their hacking campaigns using large language models. Those computer programs, often called artificial intelligence (AI), draw on massive amounts of text to generate human-sounding responses.

The company announced the findings as it rolled out a blanket ban on state-backed hacking groups using its AI products.

“Independent of whether there’s any violation of the law or any violation of terms of service, we just don’t want those actors that we’ve identified – that we track and know are threat actors of various kinds – we don’t want them to have access to this technology,” Microsoft vice-president for customer security Tom Burt told Reuters in an interview ahead of the report’s release.

Mr Liu Pengyu, a spokesman for the Chinese embassy in the United States, said it opposed “groundless smears and accusations against China” and advocated the “safe, reliable and controllable” deployment of AI technology to “enhance the common well-being of all mankind”.

‘Early-stage’ and ‘incremental’

The allegation that state-backed hackers were caught using AI tools to boost their spying capabilities is likely to underline concerns about the rapid proliferation of the technology and its potential for abuse.

Senior cyber-security officials in the West have been warning since 2023 that rogue actors were abusing such tools, although specifics have been thin on the ground.

“This is one of the first, if not the first, instances of an AI company coming out and discussing publicly how cyber-security threat actors use AI technologies,” said Mr Bob Rotsted, who leads cyber-security threat intelligence at OpenAI.

OpenAI and Microsoft described the hackers’ use of their AI tools as “early-stage” and “incremental”.

Mr Burt said neither company had seen cyber spies make any breakthroughs. “We really saw them just using this technology like any other user,” he said.

The report described how the various hacking groups used the large language models in different ways.

Hackers alleged to be working on behalf of Russia’s military spy agency, widely known as the GRU, used the models to research “various satellite and radar technologies that may pertain to conventional military operations in Ukraine”, Microsoft said.

The firm said North Korean hackers used the models to generate content “that would likely be for use in spear-phishing campaigns” against regional experts. Iranian hackers also leaned on the models to write more convincing e-mails, it said, at one point using them to draft a message attempting to lure “prominent feminists” to a booby-trapped website.

The software giant said Chinese state-backed hackers were also experimenting with large language models, for example, to ask questions about rival intelligence agencies, cyber-security issues and “notable individuals”.

Neither Mr Burt nor Mr Rotsted would be drawn on the volume of activity or how many accounts had been suspended. Mr Burt defended the zero-tolerance ban on hacking groups – which does not extend to Microsoft offerings such as its search engine Bing – by pointing to the novelty of AI and the concern over its deployment.

“This technology is both new and incredibly powerful,” he said. REUTERS
