In a disturbing, yet not surprising, discovery, Google has uncovered new malware strains that can connect to AI models to help refine their attacks in real-time.
In a Wednesday report, the company’s threat intelligence group warned that it had identified three malware strains, some deployed in live operations, that harness generative AI to varying degrees.
One of the attacks, dubbed Quietvault, has been designed to steal login credentials from a Windows PC while leveraging “an AI prompt and on-host installed AI CLI [command line interface] tools to search for other potential secrets on the infected system and exfiltrate these files,” the company said without elaborating.
Another malware strain, called Promptflux, appears to be experimental work by hackers. It stands out by tapping Google’s Gemini chatbot to modify its computer code to avoid detection. “The most novel component of PROMPTFLUX is its ‘Thinking Robot’ module, designed to periodically query Gemini to obtain new code for evading antivirus software,” Google added.
Promptflux code and instructions. (Credit: Google)
Through Google’s API, the Promptflux malware works by sending prompts to Gemini, such as “Provide a single, small, self-contained VBScript function or code block that helps evade antivirus detection.”
The result can apparently dupe Gemini into complying, helping the malware evolve in real time, with the eventual goal of rewriting the “malware’s entire source code on an hourly basis to evade detection,” the company said.
However, security researcher Marcus Hutchins, who helped shut down the WannaCry ransomware attack in 2017, questioned whether the discovered AI-generated malware really poses a threat, citing weak or impractical prompts.
“It doesn’t specify what the code block should do, or how it’s going to evade an antivirus. It’s just working under the assumption that Gemini just instinctively knows how to evade antiviruses (it doesn’t),” Hutchins wrote on LinkedIn.
“This is what I’m going to refer to as CTI slop (Tech companies who are heavily over-invested in AI overblowing the significance of AI slop malware to try and sell the idea that GenAI is way more transformative than it actually is),” he added.
In the meantime, Google says it was able to crack down on Promptflux, which the company discovered while the malware was in development. “The current state of this malware does not demonstrate an ability to compromise a victim network or device. We have taken action to disable the assets associated with this activity,” the company said.
Additionally, safeguards were implemented in Gemini to prevent it from facilitating such requests. Google also noted Promptflux likely belonged to “financially motivated” cybercriminals, rather than state-sponsored hackers.
Google is also warning about another AI-powered malware called Promptsteal that Ukrainian cyber authorities flagged in July. The data-mining malware connects to a Qwen large language model, developed by the Chinese company Alibaba Group.
Promptsteal acts as a Trojan that poses as an image-generation program. Once installed, it uses the LLM to “generate commands for the malware to execute rather than hard-coding the commands directly in the malware itself,” Google noted. Promptsteal then blindly executes the model’s output locally before exfiltrating the results.
Google also concurs with Ukrainian cyber authorities that Promptsteal is likely the work of a Russian state-sponsored hacking group known as APT28, also referred to as Fancy Bear. “APT28’s use of Promptsteal constitutes our first observation of malware querying an LLM deployed in live operations,” the company added.
Meanwhile, Anthropic has also recently discovered a hacker using its Claude AI chatbot to help automate and execute a large-scale data extortion campaign targeting 17 organizations.
About Our Expert
Michael Kan
Senior Reporter
Experience
I’ve been a journalist for over 15 years. I got my start as a schools and cities reporter in Kansas City and joined PCMag in 2017, where I cover satellite internet services, cybersecurity, PC hardware, and more. I’m currently based in San Francisco, but previously spent over five years in China, covering the country’s technology sector.
Since 2020, I’ve covered the launch and explosive growth of SpaceX’s Starlink satellite internet service, writing 600+ stories on availability and feature launches, but also the regulatory battles over the expansion of satellite constellations, fights with rival providers like AST SpaceMobile and Amazon, and the effort to expand into satellite-based mobile service. I’ve combed through FCC filings for the latest news and driven to remote corners of California to test Starlink’s cellular service.
I also cover cyber threats, from ransomware gangs to the emergence of AI-based malware. Earlier this year, the FTC forced Avast to pay consumers $16.5 million for secretly harvesting and selling their personal information to third-party clients, as revealed in my joint investigation with Motherboard.
I also cover the PC graphics card market. Pandemic-era shortages led me to camp out in front of a Best Buy to get an RTX 3000. I’m now following how President Trump’s tariffs will affect the industry. I’m always eager to learn more, so please jump in the comments with feedback and send me tips.