Researchers at Check Point Research (CPR) have uncovered a novel technique in which cybercriminals use popular AI platforms such as Grok and Microsoft Copilot to orchestrate covert attacks.
This method transforms benign AI web services into proxies for Command and Control (C2) communication.
By leveraging the web browsing and URL-fetching capabilities of these assistants, attackers can tunnel malicious traffic through legitimate enterprise channels, making detection significantly more difficult for security teams.
Abusing AI Web Interfaces as C2 Proxies
The core of this attack relies on the “AI as a proxy” concept. Unlike traditional malicious integrations that require API keys or registered accounts, this technique abuses the public web interfaces of AI agents.
Malware installed on a victim’s machine uses an embedded browser component, such as WebView2 on Windows, to interact with the AI without the user’s knowledge.
The malware sends a prompt instructing the AI to fetch a specific URL controlled by the attacker.
According to Check Point, the AI retrieves the data, which may contain hidden commands or payloads, and relays the response back to the infected host.
In a proof-of-concept demonstration, researchers created a fake website dedicated to Siamese cats that functioned as a hidden C2 server.
They successfully instructed Copilot and Grok to visit the site and retrieve hidden commands embedded in the HTML.
Because the traffic appears as legitimate requests to reputable AI domains, standard firewalls and security monitoring tools often allow it to pass unchecked.
This bidirectional channel enables data exfiltration and command execution without triggering alerts associated with known malicious IP addresses or requiring the attacker to authenticate with the AI provider.
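For defenders, one practical starting point is to correlate outbound requests to AI assistant domains with the process that generated them, since the browsing described above originates from an embedded WebView2 component rather than the user’s own browser session. The sketch below is a minimal, hypothetical illustration in Python, not part of CPR’s research: the domain list, the proxy_log.csv file, and its host/dest_domain/process_name columns are assumptions made for the example.

```python
import csv
from collections import Counter

# Illustrative AI assistant domains; a real allowlist/watchlist would differ.
AI_DOMAINS = {"copilot.microsoft.com", "grok.com", "x.ai"}
# Processes we would normally expect to talk to AI web interfaces.
EXPECTED_PROCESSES = {"msedge.exe", "chrome.exe", "firefox.exe"}

def flag_suspicious(log_path: str) -> list[dict]:
    """Flag AI-domain requests that did not originate from a known browser.

    Assumes a CSV export with 'host', 'dest_domain', and 'process_name'
    columns, e.g. from endpoint or proxy telemetry (hypothetical format).
    """
    hits = []
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if (row["dest_domain"] in AI_DOMAINS
                    and row["process_name"].lower() not in EXPECTED_PROCESSES):
                hits.append(row)
    return hits

if __name__ == "__main__":
    suspicious = flag_suspicious("proxy_log.csv")
    by_host = Counter(r["host"] for r in suspicious)
    for host, count in by_host.most_common():
        print(f"{host}: {count} AI-domain requests from non-browser processes")
```

In practice a heuristic like this would live in a SIEM or EDR query rather than a standalone script, but the underlying signal, traffic to AI assistant domains from processes that are not the user’s browser, is the same.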
The Rise of AI-Driven Malware Logic
This research highlights a dangerous shift toward AI-Driven (AID) malware. Beyond simple communication, future threats may use AI models as runtime decision engines.
Malware could leverage these models to analyze the infected environment dynamically, distinguishing between real workstations and security sandboxes.
This allows the threat to remain dormant during analysis and only execute malicious actions when high-value targets are confirmed.
By offloading decision-making to external AI models, attackers can automate the triage process.
An implant could collect system artifacts, user roles, and network data, then query an AI model to determine the optimal next steps.
This evolution moves malware away from static, hardcoded logic toward adaptive behavior that is harder to predict and signature.
As AI services become more deeply integrated into corporate workflows, the distinction between legitimate user activity and malicious AI-driven traffic will become increasingly blurred.
