
Guardian

Verified Safe

by FirebirdSolutions

Overview

An AI safety system designed to detect mental health crises and other harmful behaviors, prevent AI hallucination of fake crisis resources, and provide verified, region-specific support.

Installation

Run Command
python -m guardian_llm.cli --interactive

Environment Variables

  • ANTHROPIC_API_KEY
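The CLI expects `ANTHROPIC_API_KEY` to be set in the environment before launch. A minimal sketch of the fail-fast check such a CLI typically performs (the `require_api_key` helper is illustrative, not part of the project):

```python
import os

def require_api_key(name: str = "ANTHROPIC_API_KEY") -> str:
    """Return the key from the environment, or fail fast with a clear message."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(
            f"{name} is not set; export it before running the CLI, e.g.\n"
            f"  export {name}=sk-...\n"
            f"  python -m guardian_llm.cli --interactive"
        )
    return key
```

Failing before startup, rather than on the first API call, gives the user an actionable error instead of a mid-session failure.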

Security Notes

The project emphasizes privacy and on-device deployment, reducing network exposure for user data. Key points:

  • Preventing AI hallucination of crisis resources is a core safety feature.
  • `trust_remote_code=True` is used for Hugging Face models (such as Qwen). This is standard practice, but it means code from the remote model repository is executed locally and must be trusted.
  • The `export.py` script calls external `llama.cpp` tools via `subprocess.run` for GGUF conversion, so the security of those external tools becomes a dependency.
  • No hardcoded API keys were found; keys are read from environment variables.
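The subprocess-based GGUF export can be hardened in a few standard ways. A minimal sketch, assuming a llama.cpp-style converter script (the function name, script path, and argument list are assumptions, not the project's actual `export.py` interface):

```python
import subprocess
from pathlib import Path

def export_to_gguf(model_dir: str, out_file: str, convert_script: str) -> None:
    """Invoke an external llama.cpp conversion script to produce a GGUF file.

    Illustrative only; the real export.py may use different arguments.
    """
    script = Path(convert_script)
    if not script.is_file():
        # Fail early with a clear error instead of a cryptic subprocess failure.
        raise FileNotFoundError(f"llama.cpp converter not found: {script}")
    # Passing an argument list (not shell=True) avoids shell injection via
    # file names; check=True surfaces converter failures as exceptions.
    subprocess.run(
        ["python", str(script), model_dir, "--outfile", out_file],
        check=True,
    )
```

Pinning the `llama.cpp` checkout to a known revision further limits the trust placed in the external tooling.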

Stats

  • Interest Score: 0
  • Security Score: 9
  • Cost Class: Medium
  • Avg Tokens: 300
  • Stars: 0
  • Forks: 0
  • Last Update: 2025-12-07

Tags

crisis-detection, mental-health, ai-safety, llm, fine-tuning