Summary
headroom-ai v0.5.2 was classified as CRITICAL RISK with a risk score of 759. Sigil detected 75 findings across 381 files, spanning the provenance, network exfiltration, code patterns, obfuscation, credential access, and install-hook phases. Review the findings below before installing this package.
Package description: The Context Optimization Layer for LLM Applications - Cut costs by 50-90%
v0.5.2
20 March 2026, 21:12 UTC
by Sigil Bot
Risk Score: 759
Findings: 75
Files Scanned: 381
Provenance
Findings by Phase
Phase Ordering
Phases are ordered by criticality, with the most dangerous at the top. Click any phase header to expand or collapse its findings. Critical phases are expanded by default.
install-makefile-curl
HIGH: Makefile/script pipes remote content to shell
headroom_ai-0.5.2/tests/test_integrations/agno/test_model.py:1049
To run these tests locally:
1. Install Ollama: curl -fsSL https://ollama.com/install.sh | sh
2. Pull a small model: ollama pull tinyllama
Why was this flagged?
A script or Makefile pipes content from a remote URL directly into a shell (curl | sh or wget | bash). This is inherently dangerous because the remote content can change at any time, and the command runs with the current user's permissions. Rated HIGH because it requires manual execution (unlike install hooks) but still executes arbitrary remote code.
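The safer alternative to piping a remote installer straight into a shell is to download it to a file, inspect it, and verify its checksum before executing. The sketch below illustrates that pattern; the download step is simulated with a local file so it runs offline, and the URL and checksum handling are placeholders rather than real Ollama values.

```shell
# Safer pattern than `curl -fsSL <url> | sh`: fetch, inspect, verify, then run.
set -eu

url="https://ollama.com/install.sh"   # the script the flagged docs reference
tmp="$(mktemp)"

# Step 1: download to a file instead of piping into a shell.
# In real use:  curl -fsSL "$url" -o "$tmp"
# Simulated here with a local stand-in so the sketch runs offline:
printf '#!/bin/sh\necho install-ok\n' > "$tmp"

# Step 2: read the script, then pin its checksum. In real use the expected
# hash would be recorded ahead of time, not computed from the same download.
expected="$(sha256sum "$tmp" | cut -d' ' -f1)"
actual="$(sha256sum "$tmp" | cut -d' ' -f1)"
[ "$expected" = "$actual" ] || { echo "checksum mismatch" >&2; exit 1; }

# Step 3: only now execute the vetted script.
sh "$tmp"
rm -f "$tmp"
```

This keeps the executed bytes fixed at review time, closing the window in which the remote content could change between inspection and execution.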
install-makefile-curl
HIGH: Makefile/script pipes remote content to shell
headroom_ai-0.5.2/tests/test_integrations/langchain/test_chat_model.py:707
To run these tests locally:
1. Install Ollama: curl -fsSL https://ollama.com/install.sh | sh
2. Pull a small model: ollama pull llama2
Why was this flagged?
A script or Makefile pipes content from a remote URL directly into a shell (curl | sh or wget | bash). This is inherently dangerous because the remote content can change at any time, and the command runs with the current user's permissions. Rated HIGH because it requires manual execution (unlike install hooks) but still executes arbitrary remote code.
Badge
Markdown
[![Sigil Scan](https://sigilsec.ai/badge/pypi/headroom-ai)](https://sigilsec.ai/scans/80B6F0C9-8278-4115-AA89-5114BD531F22)
HTML
<a href="https://sigilsec.ai/scans/80B6F0C9-8278-4115-AA89-5114BD531F22"><img src="https://sigilsec.ai/badge/pypi/headroom-ai" alt="Sigil Scan"></a>