AI for Business

Google Report Confirms AI Tools Are Now Standard in the Cybercriminal Arsenal

A new report from Google’s internal threat analysts confirms a significant shift in the cyber underworld. The period of testing generative AI is over; it is now a standard tool for attackers. According to the Google Threat Intelligence Group, both state-backed operatives and profit-driven criminals are systematically using AI, including Google’s own Gemini model, to launch faster and more effective attacks.

The research identifies hacking groups linked to Iran, China, North Korea, and Russia as the most active in this space. These actors employ AI for tasks like probing for software weaknesses, writing convincing phishing messages in multiple languages, and refining malicious code. Iranian groups showed particularly heavy use, while Chinese teams focused on U.S. infrastructure reconnaissance. North Korean operatives used the technology to draft fake job application materials, supporting their campaign to infiltrate IT workers into foreign companies.

Beyond espionage, the criminal economy has adopted AI for efficiency. Ransomware gangs and fraudsters use it to automate social engineering, tailor scams for specific industries, and quickly modify malware to avoid detection. This compresses the time needed to launch an attack, altering the fundamentals of cybercrime.

A key finding details attempts to bypass AI safety features. Adversaries persistently try to 'jailbreak' models like Gemini with cleverly worded prompts, aiming to generate harmful content or disguise malicious code. Google says it continuously updates its defenses against these methods.

The report also warns of AI's role in propaganda, with state groups generating disinformation and fake social media personas at scale. For businesses, the message is clear: defensive strategies must evolve. Training should address AI-generated phishing, which often lacks traditional tell-tale errors, and security teams should consider defensive AI tools to match the new pace of threats. Google states it is strengthening its AI safeguards and sharing intelligence with partners, highlighting a collective challenge for the tech industry as powerful models become more accessible.

Source: Webpronews