
llmfy

pypi


Summary

llmfy v0.4.14 was classified as CRITICAL RISK with a risk score of 102. Sigil detected 8 findings across 105 files, spanning phases including provenance and code patterns. Review the findings below before installing this package.

Package description: `LLMfy` is a framework for developing applications with large language models (LLMs).

CRITICAL RISK(102)

v0.4.14

18 March 2026, 16:56 UTC

by Sigil Bot

Risk Score

102

Findings

8

Files Scanned

105

Provenance

Findings by Phase

Phase Ordering

Phases are ordered by criticality, with the most dangerous at the top. Click any phase header to expand or collapse its findings. Critical phases are expanded by default.

Badge

Sigil scan badge for pypi/llmfy

Markdown

[![Sigil Scan](https://sigilsec.ai/badge/pypi/llmfy)](https://sigilsec.ai/scans/A6C1EDB9-4612-42F3-86A1-04475740FCB4)

HTML

<a href="https://sigilsec.ai/scans/A6C1EDB9-4612-42F3-86A1-04475740FCB4"><img src="https://sigilsec.ai/badge/pypi/llmfy" alt="Sigil Scan"></a>
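For projects whose README is written in reStructuredText rather than Markdown, the same badge can be embedded with an `image` directive. This is a sketch reusing the badge and scan URLs shown above; the `:target:` and `:alt:` fields mirror the HTML snippet:

```rst
.. image:: https://sigilsec.ai/badge/pypi/llmfy
   :target: https://sigilsec.ai/scans/A6C1EDB9-4612-42F3-86A1-04475740FCB4
   :alt: Sigil Scan
```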

Run This Scan Yourself

Scan your own packages

Run Sigil locally to audit any package before it touches your codebase.

curl -sSL https://sigilsec.ai/install.sh | sh
Read the docs → Free. Apache 2.0.
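Piping a remote script straight into `sh` executes code you have not seen. A more cautious flow is sketched below, under the assumption that you can obtain an expected SHA-256 digest from a trusted out-of-band source (this page does not publish one, and the helper name `verify_sha256` is ours, not part of Sigil): download the script, verify it, read it, and only then run it.

```shell
#!/bin/sh
# Download-inspect-run alternative to `curl ... | sh`.
# The expected digest must come from a trusted out-of-band source;
# none is published on this page, so the value below is a placeholder.

verify_sha256() {
  # verify_sha256 FILE EXPECTED_DIGEST -> exit 0 on match, non-zero otherwise.
  # sha256sum -c expects lines of the form "DIGEST  FILE" (two spaces).
  echo "$2  $1" | sha256sum -c - >/dev/null 2>&1
}

# curl -sSL -o sigil-install.sh https://sigilsec.ai/install.sh
# verify_sha256 sigil-install.sh "<expected-digest>" || exit 1
# less sigil-install.sh   # read the script before running it
# sh sigil-install.sh
```

The commented-out lines show the intended usage; keeping the download and the execution as separate steps gives you a chance to audit the installer first.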

Early Access

Get cloud scanning, threat intel, and CI/CD integration.

Join 150+ developers on the waitlist.

Get threat intelligence and product updates: security research, new threat signatures, and product announcements. No spam.


Believe this result is incorrect? Request a review or see our Terms of Service and Methodology.
