Cybercriminals Exploit Hugging Face and ClawHub in New Social Engineering Campaign

Cybercriminals use Hugging Face and ClawHub to distribute malware via social engineering, tricking developers into downloading malicious AI models and code snippets. Experts urge zero-trust practices and sandboxing.

Deltadga · 2026-05-03 16:33:47 · Cybersecurity

Breaking: Trusted AI and Code Platforms Weaponized for Malware Distribution

Threat actors are actively abusing Hugging Face and ClawHub—widely used platforms for machine learning models and code repositories—to distribute malware through sophisticated social engineering tactics. The campaign lures victims into downloading files that appear legitimate but contain hidden malicious instructions.

Image: Cybercriminals Exploit Hugging Face and ClawHub in New Social Engineering Campaign (Source: www.securityweek.com)

Urgent Warning Issued

Cybersecurity firm Recorded Future issued an alert Tuesday, warning that the attacks have already compromised multiple organizations. "These platforms are trusted by developers and researchers worldwide, making them ideal vectors for targeted attacks," said Dr. Elena Torres, senior threat analyst at Recorded Future.

Victims are typically approached via email or direct messages on professional networks, where attackers pose as collaborators offering pre-trained models or code snippets. Once downloaded, the files execute payloads that steal credentials, install backdoors, or exfiltrate sensitive data.

Background: How the Attack Works

Hugging Face is the leading hub for open-source AI models, while ClawHub is a popular code-hosting service similar to GitHub. Both platforms allow users to upload and share files freely. Attackers upload malicious Python packages or model weights disguised as harmless utilities.

According to malware analysis from CrowdStrike, the malicious files often include obfuscated code that bypasses basic antivirus scans. "The attackers are using PyTorch and TensorFlow model files as carriers, embedding malicious logic in the serialized data," explained Marcus Chen, principal researcher at CrowdStrike.
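PyTorch's default checkpoint format is pickle-based, so any callable referenced in the serialized stream is imported, and can execute, at load time. As a minimal sketch of the kind of static inspection that scanners such as ModelScan perform, Python's standard `pickletools` module can list the opcodes that pull in importable callables without ever executing the file (the opcode set below is an illustrative heuristic, not an exhaustive detection rule):

```python
import io
import pickle
import pickletools

# Opcodes that import or invoke callables during unpickling. A file that
# "just stores weights" has little reason to reference modules such as
# os or subprocess, so any hit here deserves a closer look.
IMPORT_OPCODES = {"GLOBAL", "STACK_GLOBAL", "INST", "OBJ", "REDUCE"}

def suspicious_opcodes(data: bytes) -> list[str]:
    """Return the import/call-related opcodes found in a pickle stream."""
    found = []
    for opcode, _arg, _pos in pickletools.genops(io.BytesIO(data)):
        if opcode.name in IMPORT_OPCODES:
            found.append(opcode.name)
    return found

# Plain data triggers nothing...
clean = pickle.dumps({"weights": [0.1, 0.2, 0.3]})
print(suspicious_opcodes(clean))          # []

# ...while a hand-crafted stream referencing os.system does.
evil = b"cos\nsystem\n."                  # protocol-0 GLOBAL opcode
print(suspicious_opcodes(evil))           # ['GLOBAL']
```

Because `genops` only decodes opcodes and never builds objects, this check is safe to run on untrusted files.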

Social engineering is the primary delivery method. Attackers create fake profiles with credible histories, referencing real research projects to build trust. They then request that victims download a "helper script" or "updated model" from their Hugging Face or ClawHub repositories.

What This Means for Developers and Organizations

This campaign highlights the growing risk of trusting code and models from unverified sources on collaborative platforms. Developers should treat every download from Hugging Face and ClawHub with the same scrutiny as an attachment from an unknown email sender.

"The security community must shift from implicit trust to a zero-trust model for all third-party code," urged Chen. "Even if a repository has stars and positive reviews, it can still be weaponized."

Organizations are advised to immediately implement sandboxed execution environments for any files obtained from these platforms. Additionally, teams should monitor for unusual outbound network connections from machines that recently downloaded AI models.
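When a pickle-based file must be loaded at all, containment can begin at the deserializer itself. The sketch below uses the allow-list pattern from Python's own `pickle` documentation, overriding `Unpickler.find_class` so that only explicitly approved globals resolve; a stream that smuggles in `os.system` is rejected before its payload can run. The allow-list contents are illustrative, not a vetted policy:

```python
import io
import pickle

# Only these (module, name) pairs may be resolved during unpickling.
# Everything else, e.g. ("os", "system"), is blocked.
SAFE_GLOBALS = {
    ("builtins", "list"),
    ("builtins", "dict"),
    ("collections", "OrderedDict"),
}

class SafeUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(
            f"blocked global during unpickling: {module}.{name}")

def safe_loads(data: bytes):
    return SafeUnpickler(io.BytesIO(data)).load()

# Plain data loads normally...
print(safe_loads(pickle.dumps([1, 2, 3])))     # [1, 2, 3]

# ...while a stream that references os.system is stopped cold.
try:
    safe_loads(b"cos\nsystem\n(S'id'\ntR.")
except pickle.UnpicklingError as exc:
    print(exc)                                  # blocked global ...
```

This complements, rather than replaces, OS-level sandboxing: it narrows what the deserializer can reach, while the sandbox limits what any code that does run can touch.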


Expert Recommendations

  • Verify the authenticity of any Hugging Face or ClawHub account by cross-referencing it with known publications before downloading files.
  • Use tools like ModelScan or PickleGuard to inspect serialized model files for suspicious code before execution.
  • Enable network segmentation and endpoint detection logging for all systems that interact with AI development environments.
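The verification step above can be partly automated: a publisher who announces a SHA-256 digest over a separate, trusted channel lets downstream users reject a tampered artifact before it is ever opened. A stdlib-only sketch, in which a throwaway temp file stands in for downloaded model weights:

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash the file in chunks so multi-gigabyte weights fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path: Path, pinned_digest: str) -> bool:
    """Compare a downloaded artifact against a digest obtained out of band."""
    return sha256_of(path) == pinned_digest.lower()

# Demo: create a stand-in "model file" and pin its digest.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"fake model weights")
    path = Path(tmp.name)

pinned = sha256_of(path)               # what the publisher would announce
print(verify_download(path, pinned))   # True
path.write_bytes(b"tampered weights")  # simulate a swapped payload
print(verify_download(path, pinned))   # False
```

A digest hosted next to the file on the same repository offers no protection, since an attacker who swaps the file can swap the digest too; the pin must come from an independent channel.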

Ongoing Investigation

SecurityWeek has learned that at least three separate threat actor groups are running similar campaigns simultaneously. The groups have been active since early February 2025, but the scale of the abuse has only recently been detected.

"We are working with Hugging Face and ClawHub to identify and remove malicious repositories," said Dr. Torres. "However, the attackers are quick to recreate accounts and update their payloads to evade detection."

Users who suspect they may have downloaded malicious files should immediately disconnect the affected machine from the network, run a full forensic scan, and change all passwords stored on that device. Do not attempt to analyze the file without proper containment.

Broader Implications

This incident underscores a systemic vulnerability in the open-source AI ecosystem. As enterprises rush to adopt generative AI and machine learning, the supply chain of pre-trained models becomes an attractive target. Similar attacks have previously targeted PyPI and npm registries, but Hugging Face and ClawHub represent a new frontier because the models themselves can be weaponized.

The attack also raises questions about platform responsibility. Neither Hugging Face nor ClawHub currently scans uploaded files for malicious code with sufficient granularity to detect these embedded threats. SecurityWeek contacted both companies for comment but did not receive a response by publication time.