📦 An LLM router skill for OpenClaw

v0.1.0

LangGraph-based intelligent task router that splits work between PRO (heavy reasoning) and FLASH (fast) models using 5-dimension complexity scoring, configur...

Security scan
VirusTotal: Harmless
OpenClaw: Suspicious (high confidence)
The skill's code and runtime instructions largely match a multi-model routing tool, but the metadata omits the many environment variables, credentials, and runtime network/CLI requirements the skill actually uses; review model credentials and the included script before installing.
Assessment advice
This skill appears to be what it claims (a LangGraph router), but it expects local model servers or the Gemini CLI and will send task text to those providers. Before installing: 1) Review scripts/router.py (generate_text, subprocess/gemini calls, network endpoints) to confirm there are no surprises. 2) Treat any tasks containing sensitive data as potentially exfiltrated to model providers (Gemini to Google endpoints, or Ollama). 3) Note that the SKILL.md examples reference running a script under your home directory (~/.hermes/...). If you install, consider running the router in an isolated environment...
Detailed analysis
Purpose and capabilities
Name, README, SKILL.md, and scripts/router.py are coherent: this is a LangGraph-based router that splits subtasks between PRO and FLASH models and calls local/CLI model providers (Ollama or Gemini). The included code implements the advertised planner/judge/dispatcher/executor flow. Minor mismatch: the registry metadata claims no required env vars or credentials, but the skill expects many ROUTER_* env variables and may call the Gemini CLI (which requires Google auth) or a local Ollama server.
Instruction scope
SKILL.md instructs the agent to execute a local Python script via terminal() and to poll or wait on long-running processes; it also recommends pulling models and starting an Ollama server. The instructions reference concrete user paths (~/.hermes/...) and require the agent to run subprocesses and send prompts to external model services. This behavior is coherent with the router's purpose but broad: it will transmit task content to external providers (Google Gemini endpoints or Ollama), and SKILL.md also enforces a mandatory assistant reply format (replies must start with "Router result:" or "Router failed:"), which modifies the agent's conversational behavior.
Installation mechanism
There is no install spec in the registry (instruction-only), and SKILL.md asks you to pip install langgraph and to pull models via Ollama or use the Gemini CLI. There are no downloads from unknown URLs or archive extractions in the manifest. This is a low-risk install model, but it requires large model assets and model CLIs to be present and authenticated.
Credential requirements
The registry lists no required environment variables or primary credential, but SKILL.md and the scripts reference many ROUTER_* env vars (ROUTER_PRO_MODEL, ROUTER_FLASH_MODEL, ROUTER_PLANNER_MODEL, ROUTER_JUDGE_MODEL, ROUTER_TASK, ROUTER_GEMINI_CLI, ROUTER_OLLAMA_URL, timeouts, fallback lists, etc.). The router will call the Gemini CLI (google-gemini-cli/*), which implies Google authentication and network access; that credential requirement is not declared. The code will also perform network I/O to model providers and may send task contents to external endpoints; make sure you understand and consent to that data flow before enabling this skill.
Persistence and permissions
The skill is not marked always:true and does not request system-wide persistent privileges in its metadata. It runs as an invoked tool (executing the included script) and does not modify other skills' configs. Default autonomous invocation is allowed (the platform default), which is expected for skills of this type.
Security is layered; review the code before running.

Runtime dependencies

No special dependencies

Install commands

Official: npx clawhub@latest install super-router
Accelerated mirror: npx clawhub@latest install super-router --registry https://cn.longxiaskill.com

Skill documentation

Super Router (LangGraph Edition)

Intelligent task decomposition and model routing using a LangGraph StateGraph. Automatically routes subtasks between PRO (heavy reasoning) and FLASH (fast) models based on structured complexity assessment.

When to Use This Skill

Use super-router when you need:

- Intelligent model routing: automatically choose between heavy (PRO) and fast (FLASH) models per subtask
- Task decomposition: break complex tasks into structured subtasks, each routed independently
- Cost optimization: use fast models for simple work and heavy models only when needed
- Configurable models: deterministic defaults, with environment-variable overrides for each role
- Failure escalation: retry FLASH on infra failures, escalate to PRO on capability failures (see the retry sketch below)
- Audit trail: full logging of planned vs. actual routes, retries, and failure classifications

Not needed for: simple single-turn tasks, tasks where you already know which model to use, or when you want manual control over every routing decision.
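The failure-escalation item in the list above can be pictured as a small retry loop. The following is a minimal Python sketch assuming exception-based error reporting; classify_failure, run_with_escalation, and the retry count are hypothetical names and values, not necessarily what scripts/router.py does:

import socket
import subprocess

def classify_failure(exc: Exception) -> str:
    """Infra failures (timeout, network) warrant a FLASH retry;
    anything else is treated as a capability failure and escalates to PRO."""
    infra = (TimeoutError, socket.timeout, subprocess.TimeoutExpired, ConnectionError)
    return "infra" if isinstance(exc, infra) else "capability"

def run_with_escalation(subtask: str, flash, pro, max_flash_retries: int = 2) -> str:
    for _ in range(max_flash_retries):
        try:
            return flash(subtask)      # try the fast model first
        except Exception as exc:
            if classify_failure(exc) == "capability":
                break                  # retrying the fast model won't help
    return pro(subtask)                # escalate to the heavy model

The point of the split is that a transient infra error says nothing about the model's ability, so a cheap retry is worthwhile, while a capability failure goes straight to the heavy model.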

Core Architecture (LangGraph StateGraph)

Planner: receives the original task and calls the local Ollama planner model to generate an ordered subtask array.
Judge: scores each subtask on 5 dimensions (reasoning_depth, code_change_scope, ambiguity, risk, io_heaviness), then combines the scores with thresholds and a confidence value to decide PRO or FLASH.
Dispatcher: reads RouterState.current_step and routes via a conditional edge to pro_executor or flash_executor.
PRO Executor: heavy reasoning model (default: Gemini CLI preview model; override via ROUTER_PRO_MODEL).
FLASH Executor: fast model with review/retry logic (default: Gemini CLI preview model; override via ROUTER_FLASH_MODEL).
FLASH Review: validates output quality, distinguishes infra failures (timeout, network) from capability failures, and retries FLASH or escalates to PRO.
Metadata Extractor: extracts "Technical Gold" (atomic, high-precision facts) from step output to prevent finalizer timeouts and loss of detail.
Recorder/Finalizer: logs every step and compiles the final report from a hybrid of Technical Gold and full audit trails; supports a FLASH→PRO→deterministic fallback chain.

A sketch of this graph wiring appears after the configuration examples below.

Installation

# Required: LangGraph + Ollama
pip install langgraph

# Ensure Ollama is running locally
ollama serve

# Pull recommended models if you use Ollama-backed roles
ollama pull gemma4:26b   # Planner or PRO executor (high quality, slow)
ollama pull llama3.1:8b  # Judge (fast scoring, recommended)
ollama pull qwen3        # PRO executor
ollama pull qwen2.5:7b   # FLASH executor

Note: If you prefer gemma4:26b as the Planner, keep it there. For speed, the Judge should usually be llama3.1:8b or another 7B-14B model:

export ROUTER_PLANNER_MODEL=gemma4:26b
export ROUTER_JUDGE_MODEL=llama3.1:8b
export ROUTER_PRO_MODEL=gemma4:26b
export ROUTER_FLASH_MODEL=qwen2.5:7b

If you intentionally want an all-gemma4:26b Planner/Judge/PRO setup, use longer timeouts and serialized graph execution:

export ROUTER_PLANNER_MODEL=gemma4:26b
export ROUTER_JUDGE_MODEL=gemma4:26b
export ROUTER_PRO_MODEL=gemma4:26b
export ROUTER_FLASH_MODEL=qwen2.5:7b
export ROUTER_JUDGE_TIMEOUT=600
export ROUTER_MAX_CONCURRENCY=1
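For orientation, here is a minimal sketch of how the planner/judge/dispatcher/executor flow could be wired with LangGraph's StateGraph. The node bodies are stubs and the judge threshold is an assumed value; only the node names and the conditional-edge pattern follow the architecture table above:

from typing import List, TypedDict

from langgraph.graph import END, StateGraph

class RouterState(TypedDict):
    task: str
    subtasks: List[dict]
    current_step: int

DIMENSIONS = ("reasoning_depth", "code_change_scope", "ambiguity", "risk", "io_heaviness")
PRO_THRESHOLD = 0.6  # assumed; the real thresholds live in scripts/router.py

def decide_route(scores: dict, confidence: float) -> str:
    # Combine the 5 dimension scores; low judge confidence escalates to PRO.
    mean = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    return "pro" if mean >= PRO_THRESHOLD or confidence < 0.5 else "flash"

def planner(state: RouterState) -> dict:
    # Stub: a real planner would call the Ollama planner model here.
    return {"subtasks": [{"text": state["task"]}], "current_step": 0}

def judge(state: RouterState) -> dict:
    # Stub: score each subtask and attach the chosen lane.
    scored = [{**s, "route": decide_route({d: 0.5 for d in DIMENSIONS}, 0.9)}
              for s in state["subtasks"]]
    return {"subtasks": scored}

def dispatcher(state: RouterState) -> dict:
    return {}  # routing happens on the conditional edge below

def pro_executor(state: RouterState) -> dict:
    return {}  # stub: call the heavy model

def flash_executor(state: RouterState) -> dict:
    return {}  # stub: call the fast model

def route_current_step(state: RouterState) -> str:
    return state["subtasks"][state["current_step"]]["route"]

graph = StateGraph(RouterState)
for name, fn in [("planner", planner), ("judge", judge), ("dispatcher", dispatcher),
                 ("pro_executor", pro_executor), ("flash_executor", flash_executor)]:
    graph.add_node(name, fn)
graph.set_entry_point("planner")
graph.add_edge("planner", "judge")
graph.add_edge("judge", "dispatcher")
graph.add_conditional_edges("dispatcher", route_current_step,
                            {"pro": "pro_executor", "flash": "flash_executor"})
graph.add_edge("pro_executor", END)
graph.add_edge("flash_executor", END)
app = graph.compile()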

Usage

Basic Usage (via exec)

When the user says "走 super-router" ("run super-router"), "use super-router", or asks for router analysis:

# Direct execution with the task as an argument
terminal(command="/opt/homebrew/Caskroom/miniforge/base/bin/python ~/.hermes/skills/mlops/inference/super-router/scripts/router.py '分析 K8s YAML 错误并重写配置'")

With Streaming (Node-Level Progress)

terminal(command="/opt/homebrew/Caskroom/miniforge/base/bin/python ~/.hermes/skills/mlops/inference/super-router/scripts/router.py --stream 'Your complex task'")
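The docs do not show what --stream emits. One plausible mechanism, sketched here as an assumption rather than the script's actual implementation, is LangGraph's stream() on the compiled graph, which yields one update per node as it finishes:

from typing import TypedDict

from langgraph.graph import END, StateGraph

class S(TypedDict):
    task: str

def plan(state: S) -> dict:
    return {"task": state["task"] + " [planned]"}

def execute(state: S) -> dict:
    return {"task": state["task"] + " [executed]"}

g = StateGraph(S)
g.add_node("planner", plan)
g.add_node("executor", execute)
g.set_entry_point("planner")
g.add_edge("planner", "executor")
g.add_edge("executor", END)
app = g.compile()

# stream() yields {node_name: state_update} as each node finishes,
# which is the kind of event a --stream flag could print as progress.
for update in app.stream({"task": "demo"}):
    print(update)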

Via Environment Variable (Agent Compatibility)

For agents that struggle with non-ASCII arguments:

# Normalize the task to short ASCII English, then pass it as an argument
terminal(command="/opt/homebrew/Caskroom/miniforge/base/bin/python ~/.hermes/skills/mlops/inference/super-router/scripts/router.py 'Analyze K8s YAML errors and fix'")

# Or via env var (if the agent supports it)
terminal(command="/opt/homebrew/Caskroom/miniforge/base/bin/python ~/.hermes/skills/mlops/inference/super-router/scripts/router.py", env={"ROUTER_TASK": "Your complex task description"})

Handling Long-Running Execution

If exec returns "Command still running":

# Continue polling with the process tool
process(action="poll", session_id="<session_id_from_exec>")

# Wait for completion
process(action="wait", session_id="<session_id_from_exec>", timeout=300)

Important: Once process shows completion, your next assistant message MUST start with "Router result:" or "Router failed:" and include at least one real detail from the output (e.g., "Planner fallback", "Ollama timed out", "BTC"). Never reply with just ---, punctuation, or empty lines.

Environment Variables

ROUTER_PLANNER_MODEL: task decomposition model (default: gemma4:26b)
ROUTER_JUDGE_MODEL: complexity scoring model (default: llama3.1:8b)
ROUTER_PRO_MODEL: heavy reasoning executor (default: google-gemini-cli/gemini-3-pro-pr...)
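For reference, a hedged sketch of how the script might read these overrides; the defaults shown are taken from the table and examples above and are not verified against scripts/router.py:

import os

# Model roles (defaults from the table above)
PLANNER_MODEL = os.environ.get("ROUTER_PLANNER_MODEL", "gemma4:26b")
JUDGE_MODEL = os.environ.get("ROUTER_JUDGE_MODEL", "llama3.1:8b")
FLASH_MODEL = os.environ.get("ROUTER_FLASH_MODEL", "qwen2.5:7b")  # assumed default

# Tuning knobs from the all-gemma example; these defaults are assumptions
JUDGE_TIMEOUT = int(os.environ.get("ROUTER_JUDGE_TIMEOUT", "600"))
MAX_CONCURRENCY = int(os.environ.get("ROUTER_MAX_CONCURRENCY", "1"))

# The task itself can arrive via ROUTER_TASK when argv is inconvenient
TASK = os.environ.get("ROUTER_TASK", "")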
