Neural Network Ops
v1.0.0

Diagnoses and tunes LLM providers (Groq, OpenRouter, Ollama), resolves rate limits/timeouts, and selects stable primary/fallback models. Use when the bot is...
Changelog

- Initial release of the neural-network-ops skill for diagnosing and tuning LLM providers (Groq, OpenRouter, Ollama).
- Provides fast triage checks and routing-policy recommendations to keep OpenClaw responsive.
- Details stable model constraints and fallback strategies for local and cloud providers.
- Includes a step-by-step recovery playbook for common issues such as a silent bot, rate limits, and provider errors.
- Defines a standard output format for health reports and introduces strict operational guardrails.
Skill Documentation
Purpose
Keep OpenClaw responsive by managing model providers, routing, and fallback behavior.
Fast Triage
Run these checks first:
systemctl is-active openclaw-gateway ollama
journalctl -u openclaw-gateway -n 40 --no-pager
free -h
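The memory check above can be automated before routing to a local model. A minimal sketch; the 8192 MB threshold is an assumption for a 7B local fallback, not an OpenClaw requirement:

```shell
# Warn when memory headroom looks too thin for a local fallback model.
# The 8192 MB threshold is a guess for a 7B model; tune it for your host.
avail_mb=$(awk '/MemAvailable/ {print int($2/1024)}' /proc/meminfo)
if [ "$avail_mb" -ge 8192 ]; then
  echo "memory ok for local fallback (${avail_mb} MB available)"
else
  echo "low memory (${avail_mb} MB available); prefer cloud routing"
fi
```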
Focus on these log patterns:
- rate limit reached
- Model context window too small
- Unknown model
- No endpoints available
- sendMessage failed
- embedded run timeout
- Removed orphaned user message
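These patterns can be scanned in a single pass. A sketch using the signatures listed above; the exact regex alternation is an assumption about your log text:

```shell
# Known failure signatures from this runbook, as one grep alternation.
PATTERNS='rate limit reached|context window too small|Unknown model|No endpoints available|sendMessage failed|embedded run timeout|orphaned user message'
# Usage against the live journal:
#   journalctl -u openclaw-gateway -n 200 --no-pager | grep -E "$PATTERNS"
```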
Routing Policy
Use this default priority for production:
1) groq/llama-3.3-70b-versatile (fastest cloud path)
2) openrouter/xiaomi/mimo-v2-pro (high-quality backup)
3) openrouter/meta-llama/llama-3.3-70b-instruct:free
4) ollama/qwen2.5:7b (last-resort local fallback)
Avoid 35B local models on 30GB RAM CPU servers for real-time Telegram replies.
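The priority above might be expressed as an ordered fallback list in the gateway config. The schema below is an assumption (OpenClaw's real config keys may differ); treat it as a sketch only:

```json
{
  "routing": {
    "primary": "groq/llama-3.3-70b-versatile",
    "fallbacks": [
      "openrouter/xiaomi/mimo-v2-pro",
      "openrouter/meta-llama/llama-3.3-70b-instruct:free",
      "ollama/qwen2.5:7b"
    ]
  }
}
```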
Stable Model Constraints
For local Ollama fallbacks:
- contextWindow >= 16000
- Keep maxTokens moderate (1024-2048) for latency
- Pre-warm after restart if local fallback is expected
Example local provider entry:
{
"id": "qwen2.5:7b",
"name": "Qwen 2.5 7B (local)",
"contextWindow": 32768,
"maxTokens": 2048
}
Recovery Playbook
1) Bot silent in Telegram
journalctl -u openclaw-gateway --since '10 min ago' --no-pager
If the logs show sendMessage failed, check network/provider errors first, then restart:
systemctl restart openclaw-gateway
2) Repeated "Removed orphaned user message" errors
systemctl stop openclaw-gateway
rm -rf /root/.openclaw/.openclaw/agents/main/sessions/*
echo '{}' > /root/.openclaw/.openclaw/agents/main/sessions/sessions.json
chmod 600 /root/.openclaw/.openclaw/agents/main/sessions/sessions.json
systemctl start openclaw-gateway
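The reset above deletes session state outright; per the guardrails, keeping a backup first makes it reversible. A sketch, where reset_sessions is a hypothetical helper and the default path matches the one used in the steps above:

```shell
# Reset OpenClaw sessions, archiving the old state first so the change
# is reversible. Run between systemctl stop and systemctl start.
reset_sessions() {
  dir="${1:-/root/.openclaw/.openclaw/agents/main/sessions}"
  backup="/tmp/openclaw-sessions-$(date +%Y%m%d%H%M%S).tar.gz"
  tar -czf "$backup" -C "$dir" . 2>/dev/null   # restorable copy of old state
  rm -rf "$dir"/*
  echo '{}' > "$dir/sessions.json"
  chmod 600 "$dir/sessions.json"
  echo "$backup"                               # print backup location
}
```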
3) Groq/OpenRouter rate limits
- Keep Groq as primary, but ensure at least one non-free fallback.
- For OpenRouter 404 privacy/policy errors, adjust the data-policy settings in the OpenRouter dashboard.
- Do not loop retries endlessly; rely on fallback chain.
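The "no endless retries" rule can be enforced with a single bounded walk of the fallback chain. try_provider below is a hypothetical stand-in for whatever call the gateway makes per provider; the chain order matches the routing policy above:

```shell
# Try each provider once, in priority order; return the first that answers.
# No per-provider retry loop: a dead chain surfaces an error instead.
pick_model() {
  for m in "$@"; do
    if try_provider "$m"; then
      echo "$m"
      return 0
    fi
  done
  return 1
}
```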
4) Local fallback too slow
- Restart Ollama cleanly and warm one small model.
- Do not keep multiple heavy runners resident.
systemctl restart ollama
curl -s -X POST http://127.0.0.1:11434/api/chat \
-H 'Content-Type: application/json' \
-d '{"model":"qwen2.5:7b","messages":[{"role":"user","content":"hi"}],"stream":false}'
Output Format
When reporting health, return:
## Status
- Gateway:
- Telegram provider:
- Primary model:
- Fallback chain:
## Findings
## Actions Applied
## Next Step
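An illustrative filled-in report (all values below are made up):

```markdown
## Status
- Gateway: active, no restarts in 24h
- Telegram provider: ok
- Primary model: groq/llama-3.3-70b-versatile
- Fallback chain: openrouter/xiaomi/mimo-v2-pro -> ollama/qwen2.5:7b

## Findings
- 3 "rate limit reached" events from Groq in the last hour

## Actions Applied
- None; the fallback chain absorbed the rate limits

## Next Step
- Monitor Groq quota; consider adding a non-free OpenRouter fallback
```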
Guardrails
- Never expose raw API keys in replies.
- Never execute irreversible financial actions automatically.
- Ask for explicit confirmation before account registrations or external postings.
- Prefer reversible config changes and keep backups before major edits.