
AI守门人

v1.0.4

A management tool for an LLM API proxy service. Supports multi-provider forwarding (Bailian / OpenRouter / NVIDIA), content-safety auditing, and health monitoring. Use cases: (1) start/stop/restart the proxy service, (2) view proxy status and statistics, (3) configure content-filtering rules.

by @394286006 · MIT-0

License: MIT-0
Last updated: 2026/4/12
Security scan: VirusTotal flagged Suspicious; OpenClaw flagged Suspicious (medium confidence)
The skill is broadly what it claims (a local LLM proxy with filtering), but it contains behaviors that merit caution—most notably local logging of proxied requests (which may include API keys / sensitive payloads) and an optional third-layer review that could make outbound calls if enabled.
Assessment advice
This skill is largely consistent with its advertised purpose, but proceed cautiously. Before installing or running: (1) Inspect the code paths that perform logging and confirm whether Authorization headers or request bodies are sanitized — if not, assume API keys and sensitive payloads will be written to ~/.openclaw/logs/llm-proxy and adjust log permissions/rotation or modify the code to redact sensitive headers. (2) Keep the proxy bound to 127.0.0.1 (do not change listen_host to 0.0.0.0) unless...
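The log-audit step suggested above can be sketched as a quick grep. This is a hypothetical example: the sample log line is invented, and a real run would scan the request logs under ~/.openclaw/logs/llm-proxy instead.

```shell
# Hypothetical redaction audit: flag any log line still carrying a bearer token.
# The sample line below is invented; in real use, point the grep at
# ~/.openclaw/logs/llm-proxy/proxy-YYYY-MM-DD.jsonl instead.
sample='{"request_id":"a1","headers":{"Authorization":"Bearer sk-test-123"}}'
if printf '%s\n' "$sample" | grep -q 'Bearer '; then
  echo "unredacted credential found"
fi
```

If this flags lines in a real log, consider modifying the code to redact Authorization headers or tightening permissions on the log directory, as the advice above suggests.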
Detailed analysis

Purpose and capabilities
Name/description (LLM proxy + content auditing) match the files and runtime instructions: control scripts, a Python proxy, config and filtering rules. The listed provider endpoints align with a multi-provider proxy. There are no surprising external service credentials requested in metadata.
Instruction scope
SKILL.md and README instruct running the provided scripts (./scripts/llm-proxy-ctl.sh) and editing scripts/llm-proxy-config.json; the code reads that config and writes logs under ~/.openclaw/logs/llm-proxy. That is expected for a proxy. Caveat: the skill appears to log request information (request logs described in README and a LogWriter in code). Logged request entries may include headers or bodies that contain API keys or other secrets unless explicitly sanitized. SKILL.md does not mention redaction. Also the rules include an optional L3 review model (disabled by default) which, if enabled, could cause outbound model calls.
Install mechanism
No install spec; this is instruction-only with bundled scripts. Nothing is downloaded from remote hosts during install. The code is included in the skill bundle, so no external install URLs or archive extraction risks are present.
Credential requirements
The registry metadata declares no required env vars or credentials, which is consistent with a local proxy. However the code does read optional environment variables (LLM_PROXY_CONFIG, LLM_PROXY_PORT, RULES_FILE) and will forward whichever Authorization header the client supplies to upstream providers. Users may therefore send provider API keys through the proxy; combined with local logging, those keys could be stored. No unrelated credentials are requested by the skill itself.
Persistence and permissions
always:false and no special platform privileges requested. The skill writes logs and PID files to user-owned locations (~/.openclaw, /tmp) — normal for a local service. It does not request to auto-enable itself or modify other skills.
Security is layered; review the code before running.

License

MIT-0

Free to use, modify, and redistribute; attribution is not required.

Runtime dependencies

No special dependencies

Versions

latest · v1.0.4 · 2026/3/18

- Major documentation update: simplified and reorganized SKILL.md for improved readability and user guidance.
- Replaced _meta.json and reference docs with a single configuration and shared script file.
- Added new config file: scripts/llm-proxy-config.json for centralized and clearer service configuration.
- Added utility script: scripts/llm-proxy-common.sh.
- Clarified and categorized all supported providers and security layers.
- Expanded instructions for both command and manual usage, service logging, and advanced content filtering settings.


Install command

Official:
npx clawhub@latest install llm-proxy

Mirror (CN accelerated):
npx clawhub@latest install llm-proxy --registry https://cn.clawhub-mirror.com

Skill documentation

A local LLM API proxy service providing multi-provider forwarding and content-safety auditing.

Features

  • 🔄 Multi-provider forwarding - Bailian, OpenRouter, NVIDIA NIM
  • 🔒 Content-safety auditing - two-layer review (malicious instructions + sensitive content)
  • 📊 Request statistics - real-time counts of requests, error rates, and alerts
  • 📝 Request logging - every request archived in JSONL format

Quick start

Check proxy status

llm-proxy-ctl.sh status

Start the proxy

llm-proxy-ctl.sh start

Stop the proxy

llm-proxy-ctl.sh stop

Restart the proxy

llm-proxy-ctl.sh restart

Tail live logs

llm-proxy-ctl.sh logs

Provider configuration

The proxy supports the following provider path prefixes:

Prefix        Target                 API
/bailian      Alibaba Cloud Bailian  https://coding.dashscope.aliyuncs.com/v1
/openrouter   OpenRouter             https://openrouter.ai/api/v1
/nvd          NVIDIA NIM             https://integrate.api.nvidia.com/v1

Usage examples

# Call the Bailian API through the proxy
curl http://127.0.0.1:18888/bailian/chat/completions \
  -H "Authorization: Bearer $BAILIAN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen-plus", "messages": [{"role": "user", "content": "Hello"}]}'

# Call OpenRouter through the proxy
curl http://127.0.0.1:18888/openrouter/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "anthropic/claude-3-opus", "messages": [{"role": "user", "content": "Hello"}]}'

# Call NVIDIA NIM through the proxy
curl http://127.0.0.1:18888/nvd/chat/completions \
  -H "Authorization: Bearer $NVIDIA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "meta/llama-3.1-8b-instruct", "messages": [{"role": "user", "content": "Hello"}]}'

Health checks

HTTP endpoints

# Health status
curl http://127.0.0.1:18888/health

# Statistics
curl http://127.0.0.1:18888/stats

Health-check response

{
  "status": "ok",
  "stats": {
    "total_requests": 1234,
    "total_responses": 1200,
    "blocked": 5,
    "warnings": 12,
    "errors": 34
  },
  "uptime": 3600,
  "rules_loaded": {
    "layer1": 6,
    "layer2": 3,
    "whitelist": 2
  }
}
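As a sketch of how these stats might be consumed, the fragment below computes a block rate from the payload shown above. The payload is inlined so the snippet runs standalone; in real use, you would pipe the output of curl -s http://127.0.0.1:18888/health into the same script.

```shell
# Compute a block rate from the /health stats.
# The payload is inlined so the snippet runs standalone; in real use:
#   curl -s http://127.0.0.1:18888/health | python3 -c '...'
health='{"status":"ok","stats":{"total_requests":1234,"total_responses":1200,"blocked":5,"warnings":12,"errors":34}}'
printf '%s' "$health" | python3 -c '
import json, sys
s = json.load(sys.stdin)["stats"]
b, t = s["blocked"], s["total_requests"]
print(f"block rate: {100 * b / t:.2f}%")
'
```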

Content-safety auditing

Two-layer review mechanism

Layer 1: malicious-instruction detection

  • Dangerous system commands (rm -rf /, mkfs, dd, etc.)
  • Privilege escalation (sudo su, chmod u+s)
  • SQL injection / destructive SQL
  • Data exfiltration (curl -d @, nc -e)
  • Backdoors / reverse shells

Layer 2: sensitive-content detection

  • Personally identifiable information (national ID numbers, phone numbers, email addresses)
  • Bank card / credit card numbers
  • Sensitive keywords
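The layer-1 check can be pictured as a set of regexes applied to each request. The stand-in below uses the CMD-001 patterns from the rule-configuration example; the actual matching engine lives in scripts/llm-proxy.py, so treat this as an illustration only.

```shell
# Illustration of a layer-1 match; patterns are from the CMD-001 example.
# The real matching engine lives in scripts/llm-proxy.py.
python3 - <<'EOF'
import re

patterns = {"CMD-001": [r"rm\s+-rf\s+[/~]", r"mkfs\.", r"dd\s+if=.of=/dev/"]}
prompt = "please run rm -rf /var/tmp/build"

hits = [rule_id for rule_id, pats in patterns.items()
        if any(re.search(p, prompt) for p in pats)]
print("BLOCKED" if hits else "PASS", hits)
EOF
```

Here the prompt trips CMD-001 because "rm -rf /var/tmp/build" matches the first pattern.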

Rule configuration

Rules file: {baseDir}/scripts/content-filter-rules.json

{
  "layer1_malicious": {
    "enabled": true,
    "rules": [
      {
        "id": "CMD-001",
        "name": "危险系统命令",
        "severity": "critical",
        "patterns": ["rm\\s+-rf\\s+[/~]", "mkfs\\.", "dd\\s+if=.of=/dev/"]
      }
    ]
  },
  "layer2_sensitive": {
    "enabled": true,
    "rules": [...]
  },
  "whitelist": ["已授权的.操作"]
}
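When editing the rules file, a malformed regex would otherwise only surface at load time; a small pre-deploy check can compile every pattern first. This is a sketch: the inline JSON is a sample, and a real run would read content-filter-rules.json.

```shell
# Pre-deploy sanity check: confirm every pattern compiles as a regex.
# The inline JSON is a sample; a real run would read content-filter-rules.json.
python3 - <<'EOF'
import json
import re

sample = r'''{
  "layer1_malicious": {"enabled": true, "rules": [
    {"id": "CMD-001", "patterns": ["rm\\s+-rf\\s+[/~]", "mkfs\\."]}
  ]}
}'''
cfg = json.loads(sample)
for rule in cfg["layer1_malicious"]["rules"]:
    for pattern in rule["patterns"]:
        re.compile(pattern)  # raises re.error on an invalid pattern
print("all patterns OK")
EOF
```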

Log files

File                                                 Contents
~/.openclaw/logs/llm-proxy/proxy-YYYY-MM-DD.jsonl    Request log
~/.openclaw/logs/llm-proxy/service.log               Service log

Log format

{
  "timestamp": "2026-03-17T12:34:56.789",
  "request_id": "abc12345",
  "provider": "bailian",
  "path": "/chat/completions",
  "status": 200,
  "duration_ms": 1234,
  "alerts": [],
  "request_size": 256,
  "response_size": 1024
}
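Because each line is one JSON object, the request log can be aggregated with a few lines of Python. The sketch below uses invented sample entries; a real run would read ~/.openclaw/logs/llm-proxy/proxy-YYYY-MM-DD.jsonl line by line.

```shell
# Tally requests per provider from JSONL entries. The sample lines are
# invented; a real run would read proxy-YYYY-MM-DD.jsonl instead.
python3 - <<'EOF'
import collections
import json

lines = [
    '{"request_id":"a1","provider":"bailian","status":200,"alerts":[]}',
    '{"request_id":"a2","provider":"openrouter","status":200,"alerts":[]}',
    '{"request_id":"a3","provider":"bailian","status":403,"alerts":["CMD-001"]}',
]
counts = collections.Counter(json.loads(line)["provider"] for line in lines)
for provider, n in sorted(counts.items()):
    print(provider, n)
EOF
```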

Troubleshooting

Port already in use

# Find the process holding the port
lsof -i :18888

# Kill the process
kill -9 <PID>

# Or restart the proxy (cleans up stale processes automatically)
llm-proxy-ctl.sh restart

Proxy not responding

# Check health status
curl http://127.0.0.1:18888/health

# Restart the proxy
llm-proxy-ctl.sh restart

Inspect error logs

tail -100 ~/.openclaw/logs/llm-proxy/service.log

Core files

File                                         Description
{baseDir}/scripts/llm-proxy.py               Proxy main program
{baseDir}/scripts/llm-proxy-ctl.sh           Control script
{baseDir}/scripts/content-filter-rules.json  Content-filter rules

Note: the proxy service requires Python 3.6+ and a network connection.

Data source: ClawHub · Chinese localization: 龙虾技能库