Summary
smith-ai v4.1.0 was classified as CRITICAL RISK with a risk score of 2769. Sigil detected 251 findings across 142 files, spanning phases that include provenance, network exfiltration, code patterns, obfuscation, and install hooks. Review the findings below before installing this package.
Package description: Advanced AGI AI Agent Framework - Multi-Method Reasoning, HTN Planning, Metacognition, World Model, Theory of Mind
v4.1.0
21 March 2026, 12:17 UTC
by Sigil Bot
Risk Score
2769
Findings
251
Files Scanned
142
Provenance
Findings by Phase
Phase Ordering
Phases are ordered by criticality, with the most dangerous listed first.
install-makefile-curl
HIGH: Makefile/script pipes remote content to shell
smith_ai-4.1.0/src/open_agent/core/health.py:166
"Install OpenShell:",
"curl -LsSf https://raw.githubusercontent.com/NVIDIA/OpenShell/main/install.sh | sh",
],

Why was this flagged?
A script or Makefile pipes content from a remote URL directly into a shell (curl | sh or wget | bash). This is inherently dangerous because the remote content can change at any time, and the command runs with the current user's permissions. Rated HIGH because it requires manual execution (unlike install hooks) but still executes arbitrary remote code.
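The safer pattern implied by this finding can be made concrete. The sketch below is illustrative only: the URL and hash are placeholders, not real OpenShell values. Instead of piping unverified bytes into a shell, it downloads the installer, checks it against a pinned SHA-256 of a copy you have already reviewed, and only then executes it.

```python
# Safer alternative to `curl ... | sh`: download, verify a pinned hash,
# then execute. INSTALLER_URL and PINNED_SHA256 are placeholders.
import hashlib
import subprocess
import tempfile
import urllib.request

INSTALLER_URL = "https://example.com/install.sh"  # placeholder, not the real URL
PINNED_SHA256 = "0" * 64                          # hash of a reviewed copy

def verify_sha256(data: bytes, expected: str) -> bool:
    """True only if the payload matches the pinned hash."""
    return hashlib.sha256(data).hexdigest() == expected

def install_pinned(url: str = INSTALLER_URL, expected: str = PINNED_SHA256) -> None:
    """Download, verify, then run; never pipe unverified bytes into a shell."""
    with urllib.request.urlopen(url) as resp:
        payload = resp.read()
    if not verify_sha256(payload, expected):
        raise RuntimeError("installer hash mismatch; refusing to run")
    with tempfile.NamedTemporaryFile(suffix=".sh", delete=False) as f:
        f.write(payload)
        path = f.name
    subprocess.run(["sh", path], check=True)
```

Pinning the hash means a compromised or silently-updated remote script fails closed instead of running with your permissions.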
install-pip-setup-exec
CRITICAL: setup.py executes code at install time
smith_ai-4.1.0/src/smith_ai/skills/__init__.py:133
@abstractmethod
def setup(self) -> None:
"""Initialize the skill. Called when skill is loaded."""Why was this flagged?
This setup.py calls subprocess, os.system, exec, or eval during package installation. Legitimate packages rarely need to execute arbitrary commands at install time. This pattern is commonly used by malicious packages to download and run payloads, exfiltrate environment variables, or establish persistence. Rated CRITICAL because it runs with the installer's full permissions.
install-pip-setup-exec
CRITICAL: setup.py executes code at install time
smith_ai-4.1.0/tests/test_enterprise.py:18
@pytest.fixture(autouse=True)
def setup(self):
from smith_ai.tools import ToolRegistry

Why was this flagged?
Same rationale as the install-pip-setup-exec finding above: calling subprocess, os.system, exec, or eval during installation runs arbitrary code with the installer's full permissions.
install-pip-setup-exec
CRITICAL: setup.py executes code at install time
smith_ai-4.1.0/tests/test_smith_ai.py:33
@pytest.fixture(autouse=True)
def setup(self):
register_builtin_tools()

Why was this flagged?
Same rationale as the install-pip-setup-exec finding above: calling subprocess, os.system, exec, or eval during installation runs arbitrary code with the installer's full permissions.
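The report does not show Sigil's detector, but the heuristic it describes can be sketched. The following is a hypothetical illustration, not Sigil's actual implementation: it parses Python source with the `ast` module and reports call sites whose name matches the suspicious set described above.

```python
# Hypothetical sketch of an install-time-execution check: walk the AST of a
# setup.py and flag calls named like exec/eval/os.system/subprocess helpers.
# This is an illustration of the described heuristic, not Sigil internals.
import ast

SUSPICIOUS = {"exec", "eval", "system", "run", "call", "check_output", "Popen"}

def flag_install_exec(source: str) -> list[int]:
    """Return line numbers of suspicious call sites in the given source."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            name = None
            if isinstance(func, ast.Name):
                name = func.id            # bare call: exec(...), eval(...)
            elif isinstance(func, ast.Attribute):
                name = func.attr          # attribute call: os.system(...)
            if name in SUSPICIOUS:
                findings.append(node.lineno)
    return findings

print(flag_install_exec("import os\nos.system('echo hi')"))  # → [2]
```

Note that the three snippets flagged above are pytest fixtures and an abstract `setup` method rather than a literal setup.py, which suggests the matches are name-based; pattern hits like these warrant manual triage before being treated as confirmed malicious behavior.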
Badge

Markdown
[![Sigil Scan](https://sigilsec.ai/badge/pypi/smith-ai)](https://sigilsec.ai/scans/6F0C4D32-599A-4DA2-9DD6-E226573D4DA1)

HTML
<a href="https://sigilsec.ai/scans/6F0C4D32-599A-4DA2-9DD6-E226573D4DA1"><img src="https://sigilsec.ai/badge/pypi/smith-ai" alt="Sigil Scan"></a>

Run This Scan Yourself
Scan your own packages
Run Sigil locally to audit any package before it touches your codebase.
Believe this result is incorrect? Request a review or see our Terms of Service and Methodology.