
🔀 Llmrouter — Skill Tool

v0.1.1

Intelligent LLM proxy that routes requests to appropriate models based on complexity. Save money by using cheaper models for simple tasks. Tested with Anthropic, OpenAI, Gemini, Kimi/Moonshot, and Ollama.

by @alexrudloff·MIT-0
License: MIT-0
Last updated: 2026/2/26
Security scan: VirusTotal (clean)
OpenClaw security review: medium confidence
The skill's instructions, requirements, and declared primary credential broadly match its stated purpose (an LLM routing proxy); there are minor metadata/instruction inconsistencies to be aware of but nothing that indicates intentional misdirection.
Review recommendations
This skill is an instruction-only wrapper around an open-source LLM router. Before installing: 1) Review the upstream repository (https://github.com/alexrudloff/llmrouter) and inspect server.py and config.yaml to understand how API keys are used and stored. 2) Expect to provide API keys for any providers you want to use (Anthropic is shown as primary; add OpenAI/Google/Kimi keys to config.yaml as needed). 3) Run it in an isolated environment (virtualenv, container, or VM) and bind to localhost (...
Detailed analysis
Purpose and capabilities
The skill is an LLM routing proxy and the declared requirements (python3, pip) and the primary credential (ANTHROPIC_API_KEY) are consistent with that purpose. The SKILL.md also documents support for multiple providers (OpenAI, Google, Kimi, Ollama) and expects corresponding provider keys in config.yaml. Registry metadata lists no required env vars but does include primaryEnv=ANTHROPIC_API_KEY — a minor inconsistency but explainable (the router supports multiple provider keys in config rather than fixed env vars).
Instruction scope
The runtime instructions are limited to cloning the repo, creating a venv, installing requirements, optionally pulling local models with Ollama, editing config.yaml/ROUTES.md, and running server.py (or creating an optional macOS LaunchAgent). The instructions reference provider API keys and local files used by the router (config.yaml, ROUTES.md), but do not instruct reading unrelated system files or exfiltrating data.
Installation mechanism
This is an instruction-only skill (no install spec). The SKILL.md instructs cloning the public GitHub repo and running pip install -r requirements.txt — a conventional install path. No high-risk downloads or obscure URLs are used in the provided instructions.
Credential requirements
The skill declares a primary credential (ANTHROPIC_API_KEY) which is reasonable for using Anthropic as a provider. SKILL.md also expects other provider keys to be added to config.yaml when using those providers; the registry metadata's 'Required env vars: none' is slightly inconsistent with examples in the docs that use ANTHROPIC_API_KEY in an Authorization header. Overall the amount of credential access requested is proportional to a multi-provider router, but users should expect to supply multiple provider keys in configuration.
Persistence and permissions
The skill does not request always:true and is user-invocable. The only persistence step in the docs is an optional macOS LaunchAgent recipe the user can install to run the server at boot; this is explicitly optional (and the server defaults to binding 127.0.0.1). No instructions attempt to modify other skills or system-wide agent configuration.
Security is layered; review the code before running.

License

MIT-0

Free to use, modify, and redistribute; no attribution required.

Runtime dependencies

🖥️ OS: macOS · Linux

Versions

latest: v0.1.1 · 2026/2/2

llmrouter v0.1.1

  • Expanded provider support: now tested with Anthropic, OpenAI, Google Gemini, Kimi/Moonshot, and Ollama.
  • Added provider-agnostic classification: the classifier can run locally on Ollama or remotely on Anthropic, OpenAI, Google, or Kimi.
  • Updated configuration instructions and defaults for broader provider compatibility.
  • Improved OpenClaw integration documentation and setup.
  • Minor dependency and environment requirement changes (Ollama now optional; Python 3.10+ and venv use encouraged).
  • No functional code changes: README/metadata/documentation only.

● Clean

Install commands

Official: npx clawhub@latest install llmrouter
Mirror: npx clawhub@latest install llmrouter --registry https://cn.clawhub-mirror.com

Skill Documentation

An intelligent proxy that classifies incoming requests by complexity and routes them to appropriate LLM models. Use cheaper/faster models for simple tasks and reserve expensive models for complex ones.

Works with OpenClaw to reduce token usage and API costs by routing simple requests to smaller models.

Status: Tested with Anthropic, OpenAI, Google Gemini, Kimi/Moonshot, and Ollama.

Quick Start

Prerequisites

  • Python 3.10+ with pip
  • Ollama (optional - only if using local classification)
  • Anthropic API key or Claude Code OAuth token (or other provider key)

Setup

# Clone if not already present
git clone https://github.com/alexrudloff/llmrouter.git
cd llmrouter

# Create virtual environment (required on modern Python)
python3 -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Pull classifier model (if using local classification)
ollama pull qwen2.5:3b

# Copy and customize config
cp config.yaml.example config.yaml
# Edit config.yaml with your API key and model preferences

Verify Installation

# Start the server
source venv/bin/activate
python server.py

# In another terminal, test health endpoint
curl http://localhost:4001/health
# Should return: {"status": "ok", ...}

Start the Server

python server.py

Options:

  • --port PORT - Port to listen on (default: 4001)
  • --host HOST - Host to bind (default: 127.0.0.1)
  • --config PATH - Config file path (default: config.yaml)
  • --log - Enable verbose logging
  • --openclaw - Enable OpenClaw compatibility (rewrites model name in system prompt)

Configuration

Edit config.yaml to customize:

Model Routing

# Anthropic routing
models:
  super_easy: "anthropic:claude-haiku-4-5-20251001"
  easy: "anthropic:claude-haiku-4-5-20251001"
  medium: "anthropic:claude-sonnet-4-20250514"
  hard: "anthropic:claude-opus-4-20250514"
  super_hard: "anthropic:claude-opus-4-20250514"

# OpenAI routing
models:
  super_easy: "openai:gpt-4o-mini"
  easy: "openai:gpt-4o-mini"
  medium: "openai:gpt-4o"
  hard: "openai:o3-mini"
  super_hard: "openai:o3"

# Google Gemini routing
models:
  super_easy: "google:gemini-2.0-flash"
  easy: "google:gemini-2.0-flash"
  medium: "google:gemini-2.0-flash"
  hard: "google:gemini-2.0-flash"
  super_hard: "google:gemini-2.0-flash"

Note: Reasoning models are auto-detected and use correct API params.
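As a rough illustration of what such auto-detection might look like (the prefixes and parameter names below are assumptions for the sketch, not the router's actual code), OpenAI reasoning models take `max_completion_tokens` where other chat models take `max_tokens`:

```python
# Hypothetical sketch of reasoning-model detection by model-ID prefix.
REASONING_PREFIXES = ("o1", "o3", "o4")

def is_reasoning_model(model_id: str) -> bool:
    """Return True if an OpenAI model ID looks like a reasoning model."""
    return model_id.startswith(REASONING_PREFIXES)

def completion_params(model_id: str, limit: int = 8192) -> dict:
    """Pick the token-limit parameter name the API expects."""
    key = "max_completion_tokens" if is_reasoning_model(model_id) else "max_tokens"
    return {"model": model_id, key: limit}

print(completion_params("o3-mini"))  # uses max_completion_tokens
print(completion_params("gpt-4o"))   # uses max_tokens
```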

Classifier

Three options for classifying request complexity:

Local (default) - Free, requires Ollama:

classifier:
  provider: "local"
  model: "qwen2.5:3b"

Anthropic - Uses Haiku, fast and cheap:

classifier:
  provider: "anthropic"
  model: "claude-haiku-4-5-20251001"

OpenAI - Uses GPT-4o-mini:

classifier:
  provider: "openai"
  model: "gpt-4o-mini"

Google - Uses Gemini:

classifier:
  provider: "google"
  model: "gemini-2.0-flash"

Kimi - Uses Moonshot:

classifier:
  provider: "kimi"
  model: "moonshot-v1-8k"

Use remote (anthropic/openai/google/kimi) if your machine can't run local models.

Supported Providers

  • anthropic:claude-* - Anthropic Claude models (tested)
  • openai:gpt-*, openai:o1-*, openai:o3-* - OpenAI models (tested)
  • google:gemini-* - Google Gemini models (tested)
  • kimi:kimi-k2.5, kimi:moonshot-* - Kimi/Moonshot models (tested)
  • local:model-name - Local Ollama models (tested)
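All of these specs share a `provider:model` shape, which might be parsed like this (an illustrative sketch, not the router's actual parsing code; note that Ollama tags such as `qwen2.5:3b` can themselves contain a colon, so only the first one separates provider from model):

```python
def parse_model_spec(spec: str) -> tuple[str, str]:
    """Split a 'provider:model' spec on the FIRST colon only."""
    provider, sep, model = spec.partition(":")
    if not sep or not model:
        raise ValueError(f"expected 'provider:model', got {spec!r}")
    return provider, model

print(parse_model_spec("anthropic:claude-haiku-4-5-20251001"))
print(parse_model_spec("local:qwen2.5:3b"))  # ('local', 'qwen2.5:3b')
```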

Complexity Levels

Level      | Use Case                     | Default Model
super_easy | Greetings, acknowledgments   | Haiku
easy       | Simple Q&A, reminders        | Haiku
medium     | Coding, emails, research     | Sonnet
hard       | Complex reasoning, debugging | Opus
super_hard | System architecture, proofs  | Opus
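A minimal sketch of the lookup this table drives, using the Anthropic `models:` block from config.yaml above (the fallback on an unknown level is an assumption for illustration, not documented router behavior):

```python
# Mirrors the Anthropic routing example from config.yaml.
MODELS = {
    "super_easy": "anthropic:claude-haiku-4-5-20251001",
    "easy": "anthropic:claude-haiku-4-5-20251001",
    "medium": "anthropic:claude-sonnet-4-20250514",
    "hard": "anthropic:claude-opus-4-20250514",
    "super_hard": "anthropic:claude-opus-4-20250514",
}

def route(level: str, fallback: str = "medium") -> str:
    """Map a classifier complexity level to a configured model."""
    return MODELS.get(level, MODELS[fallback])

print(route("easy"))     # Haiku
print(route("unknown"))  # falls back to the medium model
```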

Customizing Classification

Edit ROUTES.md to tune how messages are classified. The classifier reads the table in this file to determine complexity levels.

API Usage

The router exposes an OpenAI-compatible API:

curl http://localhost:4001/v1/chat/completions \
  -H "Authorization: Bearer $ANTHROPIC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llm-router",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
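The same request can be built with Python's standard library; the URL and payload mirror the curl example above. `YOUR_API_KEY` is a placeholder, and the final `urlopen` call is left commented out because it needs a running server:

```python
import json
import urllib.request

# Same body as the curl example: the router accepts the OpenAI chat format.
payload = {
    "model": "llm-router",
    "messages": [{"role": "user", "content": "Hello!"}],
}
req = urllib.request.Request(
    "http://localhost:4001/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer YOUR_API_KEY",  # e.g. $ANTHROPIC_API_KEY
        "Content-Type": "application/json",
    },
)
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```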

Testing Classification

python classifier.py "Write a Python sort function"
# Output: medium

python classifier.py --test # Runs test suite

Running as macOS Service

Create ~/Library/LaunchAgents/com.llmrouter.plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.llmrouter</string>
    <key>ProgramArguments</key>
    <array>
        <string>/path/to/llmrouter/venv/bin/python</string>
        <string>/path/to/llmrouter/server.py</string>
        <string>--openclaw</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
    <key>WorkingDirectory</key>
    <string>/path/to/llmrouter</string>
    <key>StandardOutPath</key>
    <string>/path/to/llmrouter/logs/stdout.log</string>
    <key>StandardErrorPath</key>
    <string>/path/to/llmrouter/logs/stderr.log</string>
</dict>
</plist>

Important: Replace /path/to/llmrouter with your actual install path. You must use the venv python, not the system python.

# Create logs directory
mkdir -p /path/to/llmrouter/logs

# Load the service
launchctl load ~/Library/LaunchAgents/com.llmrouter.plist

# Verify it's running
curl http://localhost:4001/health

# To stop/restart
launchctl unload ~/Library/LaunchAgents/com.llmrouter.plist
launchctl load ~/Library/LaunchAgents/com.llmrouter.plist

OpenClaw Configuration

Add the router as a provider in ~/.openclaw/openclaw.json:

{
  "models": {
    "providers": {
      "localrouter": {
        "baseUrl": "http://localhost:4001/v1",
        "apiKey": "via-router",
        "api": "openai-completions",
        "models": [
          {
            "id": "llm-router",
            "name": "LLM Router (Auto-routes by complexity)",
            "reasoning": false,
            "input": ["text", "image"],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 200000,
            "maxTokens": 8192
          }
        ]
      }
    }
  }
}

Note: Cost is set to 0 because actual costs depend on which model the router selects. The router logs which model handled each request.

Set as Default Model (Optional)

To use the router for all agents by default, add:

{
  "agents": {
    "defaults": {
      "model": {
        "primary": "localrouter/llm-router"
      }
    }
  }
}

Using with OAuth Tokens

If your config.yaml uses an Anthropic OAuth token from OpenClaw's ~/.openclaw/auth-profiles.json, the router automatically handles Claude Code identity headers.

OpenClaw Compatibility Mode (Required)

If using with OpenClaw, you MUST start the server with --openclaw:

python server.py --openclaw

This flag enables compatibility features required for OpenClaw:

  • Rewrites model names in responses so OpenClaw shows the actual model being used
  • Handles tool name and ID remapping for proper tool call routing

Without this flag, you may encounter errors when using the router with OpenClaw.
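Conceptually, the model-name rewrite works like this. This is an illustrative sketch, not the router's actual implementation: the client asked for "llm-router", but the response should report the model that actually served the request.

```python
def rewrite_model_name(response: dict, routed_model: str) -> dict:
    """Return a copy of an OpenAI-style response with the real model name."""
    rewritten = dict(response)
    rewritten["model"] = routed_model
    return rewritten

resp = {"id": "chatcmpl-1", "model": "llm-router", "choices": []}
print(rewrite_model_name(resp, "claude-haiku-4-5-20251001")["model"])
```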

Common Tasks

  • Check server status: curl http://localhost:4001/health
  • View current config: cat config.yaml
  • Test a classification: python classifier.py "your message"
  • Run classification tests: python classifier.py --test
  • Restart server: Stop and run python server.py again
  • View logs (if running as service): tail -f logs/stdout.log

Troubleshooting

"externally-managed-environment" error

Newer Python distributions (3.11+, PEP 668) mark the system environment as externally managed and refuse global pip installs. Create a virtual environment:
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

"Connection refused" on port 4001

Server isn't running. Start it:
source venv/bin/activate && python server.py

Classification returns wrong complexity

Edit ROUTES.md to tune classification rules. The classifier reads this file to determine complexity levels.

Ollama errors / "model not found"

Ensure Ollama is running and the model is pulled:
ollama serve  # Start Ollama if not running
ollama pull qwen2.5:3b

OAuth token not working

Ensure your token in config.yaml starts with sk-ant-oat. The router auto-detects OAuth tokens and adds required identity headers.
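The prefix check described above can be sketched as a one-liner (a minimal illustration of the documented heuristic; the identity headers the router adds are not reproduced here):

```python
def is_oauth_token(token: str) -> bool:
    """Per the docs, OpenClaw OAuth tokens start with 'sk-ant-oat'."""
    return token.startswith("sk-ant-oat")

print(is_oauth_token("sk-ant-oat01-example"))  # True
print(is_oauth_token("sk-ant-api03-example"))  # False
```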

LaunchAgent not starting

Check logs and ensure paths are absolute:
cat ~/Library/LaunchAgents/com.llmrouter.plist  # Verify paths
cat /path/to/llmrouter/logs/stderr.log  # Check for errors

Data source: ClawHub · Chinese localization: 龙虾技能库