Google Reveals State Hackers Weaponize AI for Espionage and Malware

Google's Threat Intelligence Group reported this week that state-sponsored hackers are actively using its Gemini AI model to refine their attacks. The findings mark a new phase in cyber espionage, in which artificial intelligence tools are used for reconnaissance, social engineering, and even malicious code generation.

A North Korean group tracked as UNC2970, linked to the Lazarus Group, used Gemini to research major cybersecurity and defense firms. The AI helped the group profile high-value targets by gathering open-source intelligence on specific technical roles and salary information. That data enabled the creation of convincing phishing personas, often posing as corporate recruiters, to infiltrate the aerospace and defense sectors.

Google identified several other hacking groups exploiting the tool. Chinese actors like Mustang Panda used it to compile dossiers on individuals and separatist organizations. Another, APT41, employed Gemini to troubleshoot exploit code. An Iranian group, APT42, used it to craft targeted social engineering personas and develop specialized tools, including a Google Maps scraper.

The report also detailed new malware strains powered by Gemini's API. "HONESTCUE" is a framework that queries the API to generate C# source code for a secondary malware stage, which is then compiled and executed directly in memory, leaving no file traces. A separate phishing kit, "COINBAIT," uses AI to impersonate a cryptocurrency exchange.
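
To make the fileless design concrete, here is a deliberately harmless Python stand-in for that generate-then-execute pattern. This is a conceptual sketch only: the real framework reportedly works in C# against the Gemini API, and fetch_generated_source and every other name below are illustrative assumptions, not code from Google's report.

```python
# Conceptual, harmless stand-in for a "generate-then-execute" loop.
# A real implementation of this pattern would call a generative API
# for source code; here the call is a stub that returns a print line.

def fetch_generated_source() -> str:
    """Stand-in for an API call that returns source code as a string."""
    return "print('second stage running in memory')"

def run_in_memory(source: str) -> None:
    # compile() + exec() keep the payload entirely in process memory;
    # nothing is written to disk, so file scanners see no artifact.
    code_object = compile(source, "<in-memory>", "exec")
    exec(code_object, {})

run_in_memory(fetch_generated_source())
```

Because the second stage only ever exists as a string and a compiled object inside the running process, defenses that rely on scanning files have nothing to inspect; detection has to happen at the API, network, or memory level instead.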

Furthermore, Google disrupted large-scale "model extraction" attacks on Gemini, where over 100,000 prompts were used in an attempt to replicate the AI's core reasoning abilities. This highlights a growing risk: even proprietary AI models can be reverse-engineered through their public outputs.
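
A standard first line of defense here is volumetric anomaly detection on the API itself. The sketch below flags any key whose prompt volume in a sliding window exceeds a ceiling; the window size, threshold, and function names are assumptions chosen for illustration, not details from Google's report.

```python
# Minimal sliding-window extraction heuristic (illustrative values).
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 3600      # consider the last hour of traffic
PROMPT_THRESHOLD = 1000    # assumed per-key ceiling for that window

_history: dict[str, deque] = defaultdict(deque)

def record_and_check(api_key: str, now: float | None = None) -> bool:
    """Record one prompt for api_key; return True if volume looks like extraction."""
    now = time.time() if now is None else now
    window = _history[api_key]
    window.append(now)
    # Evict timestamps that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > PROMPT_THRESHOLD
```

Volume alone misses extraction that is paced slowly or spread across many keys, so production defenses typically also watch for unusually diverse prompts that systematically sweep the model's input space.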

"Many organizations assume that keeping model weights private is sufficient protection," said security researcher Farida Shafik. "But this creates a false sense of security. The model’s behavior is exposed through every API response."

Source: The Hacker News
