THREAT INTELLIGENCE

AI Package Threat Patterns

Analysis of 56,100 scans reveals specific malicious patterns in AI agent packages

63% of packages use install hooks

At 15,498 packages, install hooks are the highest-volume threat pattern we're tracking

npm postinstall: 10,383
setup.py cmdclass: 4,339
Makefile targets: 774
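Each of the three hook mechanisms above gives a package code execution at install time. A minimal sketch of how a scanner might surface them is below; the file paths and heuristics are illustrative assumptions, not Sigil's actual detection logic.

```python
import json
from pathlib import Path

def find_install_hooks(pkg_dir: str) -> list[str]:
    """Flag the three install-hook mechanisms: npm lifecycle scripts,
    setup.py cmdclass overrides, and Makefile install targets."""
    root = Path(pkg_dir)
    findings = []

    # 1. npm lifecycle hooks declared in package.json
    manifest = root / "package.json"
    if manifest.is_file():
        scripts = json.loads(manifest.read_text()).get("scripts", {})
        for hook in ("preinstall", "install", "postinstall"):
            if hook in scripts:
                findings.append(f"npm {hook}: {scripts[hook]}")

    # 2. setup.py that overrides install commands via cmdclass
    setup_py = root / "setup.py"
    if setup_py.is_file() and "cmdclass" in setup_py.read_text(errors="ignore"):
        findings.append("setup.py overrides an install command via cmdclass")

    # 3. Makefile targets that run at build/install time
    makefile = root / "Makefile"
    if makefile.is_file():
        for line in makefile.read_text(errors="ignore").splitlines():
            if line.startswith(("install:", "all:")):
                findings.append(f"Makefile target: {line.rstrip()}")

    return findings
```

A postinstall script is the most common of the three because npm runs it automatically, with no prompt, on every `npm install`.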

Malicious Pattern Analysis

Credential Theft

27%

6,622 packages attempt to access credentials

SSH Keys: 1,234
AWS Credentials: 987
Browser Data: 1,141
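Credential-theft code typically reveals itself by referencing well-known secret locations. Here is a hedged sketch of a string-pattern check for the three stores listed above; the pattern list is a small assumed sample, not a complete signature set.

```python
import re

# Illustrative patterns for common credential stores. Real scanners use
# far larger signature sets plus behavioral analysis.
CREDENTIAL_PATTERNS = {
    "SSH keys": re.compile(r"\.ssh/(id_rsa|id_ed25519|id_ecdsa)"),
    "AWS credentials": re.compile(r"\.aws/credentials|AWS_SECRET_ACCESS_KEY"),
    "Browser data": re.compile(r"Login Data|Cookies|Local State"),
}

def scan_for_credential_access(source: str) -> list[str]:
    """Return the names of credential stores the source appears to touch."""
    return [name for name, pat in CREDENTIAL_PATTERNS.items()
            if pat.search(source)]
```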

Code Obfuscation

52%

12,844 packages use code obfuscation

Base64 Encoding: 2,981
Hex Strings: 1,876
Minified Payloads: 547
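The base64 and hex techniques above hide a payload inside an innocuous-looking string literal. A simple heuristic, sketched below with arbitrary length thresholds chosen for illustration, is to look for long encoded blobs embedded in source:

```python
import base64
import re

# Heuristic thresholds (40+ base64 chars, 10+ \xNN escapes) are assumptions
# for demonstration; they trade false positives against false negatives.
B64_LITERAL = re.compile(r"[A-Za-z0-9+/]{40,}={0,2}")
HEX_LITERAL = re.compile(r"(?:\\x[0-9a-fA-F]{2}){10,}")

def obfuscation_indicators(source: str) -> list[str]:
    """Flag long base64 blobs (that actually decode) and escaped-hex payloads."""
    hits = []
    for blob in B64_LITERAL.findall(source):
        try:
            base64.b64decode(blob + "=" * (-len(blob) % 4))
            hits.append(f"base64 blob ({len(blob)} chars)")
        except Exception:
            pass
    if HEX_LITERAL.search(source):
        hits.append("escaped hex string payload")
    return hits
```

Minified payloads are harder to catch with regexes alone; entropy measures or decode-and-rescan loops are the usual next step.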

Data Exfiltration

37%

9,093 packages contain data exfiltration code

HTTP Requests: 2,103
DNS Tunneling: 987
Webhook Calls: 539
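The three channels above all move stolen data off the host, just over different transports. A minimal signature sketch follows; the API names and webhook domains are example assumptions, not an exhaustive list.

```python
import re

# Example signatures only: outbound HTTP clients, runtime-built DNS lookups
# (a common tunneling tell), and well-known webhook endpoints.
EXFIL_SIGNATURES = {
    "HTTP request": re.compile(
        r"\b(requests\.(get|post)|urllib\.request|fetch)\s*\("),
    "DNS tunneling": re.compile(
        r"socket\.gethostbyname\s*\(\s*[^)'\"]"),  # hostname built at runtime
    "Webhook call": re.compile(
        r"https://(hooks\.slack\.com|discord(app)?\.com/api/webhooks)"),
}

def exfiltration_indicators(source: str) -> list[str]:
    """Return the exfiltration channels the source appears to use."""
    return [name for name, pat in EXFIL_SIGNATURES.items()
            if pat.search(source)]
```

DNS tunneling is flagged here only when the looked-up hostname is not a plain string literal, since encoding data into subdomains requires building the name at runtime.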

Dynamic Execution

62%

15,391 packages use dangerous execution methods

eval() calls: 3,421
exec() usage: 2,987
Shell commands: 1,302
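For Python sources, calls like the ones above can be found statically rather than with regexes, by walking the syntax tree. This sketch uses the standard-library ast module; the set of names to flag is an assumption for illustration.

```python
import ast

# Assumed flag list: eval/exec plus common shell-execution helpers.
DANGEROUS_CALLS = {"eval", "exec", "system", "popen", "Popen", "check_output"}

def dynamic_execution_calls(source: str) -> list[str]:
    """Statically locate calls to dangerous execution primitives in Python source."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            # Covers both bare names (eval) and attributes (os.system).
            name = getattr(func, "id", None) or getattr(func, "attr", None)
            if name in DANGEROUS_CALLS:
                findings.append(f"{name}() at line {node.lineno}")
    return findings
```

An AST walk avoids false positives from comments and string literals that a plain text search for "eval" would trip over.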

AI Attacks

10%

2,477 packages contain AI-specific attacks

Prompt Injections: 1,987
Jailbreak Attempts: 892
Tool Abuse: 665
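Prompt injections hide instructions in package text (READMEs, docstrings, tool descriptions) that an AI agent may later read as input. A hedged sketch of a phrase-based check is below; the phrase list is a tiny assumed sample, and real detection of these attacks is far more involved.

```python
import re

# Small assumed sample of injection phrasing; production systems pair
# pattern lists with model-based classifiers.
INJECTION_PHRASES = [
    r"ignore (all )?previous instructions",
    r"disregard your system prompt",
    r"you are now in developer mode",
]

def prompt_injection_indicators(text: str) -> list[str]:
    """Return the injection patterns that match the given package text."""
    return [p for p in INJECTION_PHRASES
            if re.search(p, text, re.IGNORECASE)]
```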

Rising Threats

36.6%

Overall threat detection rate across all scans

This Week: +12.3%
New Patterns: 47
Zero-Days: 8

Get Threat Intelligence Updates

Join 2,500+ security teams getting weekly threat intelligence reports

Weekly security intelligence delivered every Tuesday. Unsubscribe anytime.

Share This Intelligence

Key stats to share:

Packages Scanned: 56,100
Threats Found: 20,541
Install Hooks: 15,498
Credential Theft: 6,622

Protect Your AI Agents Now

Don't let malicious packages compromise your AI systems. Start scanning with Sigil's free CLI today.

SIGIL by NOMARK

A protective mark for every line of code.