
Neural Network Ops

v1.0.0

Diagnoses and tunes LLM providers (Groq, OpenRouter, Ollama), resolves rate limits/timeouts, and selects stable primary/fallback models. Use when the bot is...

by @utromaya-code · MIT-0
License: MIT-0
Last updated: 2026/3/23
Security scan: VirusTotal (clean)
OpenClaw review: safe (high confidence)
The skill's requirements and instructions are coherent with its stated purpose (diagnosing and tuning model providers), but it contains high-privilege operational steps (service restarts, file deletion under /root) that require caution before running on production systems.
Review Recommendations
This instruction-only skill appears to be what it says: an ops playbook for OpenClaw model providers. It does include commands that restart services and delete session files under /root, so only run its steps when you understand the consequences and have backups. Recommended precautions:

  • Use it in staging first; verify paths and service names match your installation.
  • Require explicit human confirmation before executing restart or rm commands.
  • Do not grant the agent automatic root/sudo r...
Detailed Analysis
Purpose and Capabilities
Name and description claim provider/gateway diagnostics and routing; the SKILL.md contains exactly the expected checks (journalctl, systemctl), routing priorities, fallback config examples, and recovery steps for OpenClaw, Groq, OpenRouter, and Ollama — these are proportionate to the stated purpose.
Instruction Scope
Instructions are specific and focused on service health and model routing. However, they include destructive, privileged actions (rm -rf of session files under /root/.openclaw, systemctl restart/stop/start) and direct curl calls to a local API. These are in scope for operations but require operator confirmation and appropriate privileges; the instructions assume root/systemd and a particular filesystem layout.
Installation Mechanism
No install spec or code files are present (instruction-only). This minimizes supply-chain risk; nothing is downloaded or written by the installer.
Credential Requirements
The skill declares no required environment variables, credentials, or config paths. The SKILL.md also does not attempt to read external secrets or request API keys — consistent with its purpose of local ops and routing configuration.
Persistence and Permissions
The skill does not request permanent presence (always:false) and is user-invocable. Nevertheless, its recommended actions require elevated privileges (systemctl, modifying /root files). Ensure the agent is not given blanket root execution rights or allowed to run these commands without explicit human confirmation.
Security comes in layers; review the code before running it.

License

MIT-0

Free to use, modify, and redistribute, with no attribution required.

Runtime Dependencies

No special dependencies

Versions

latest · v1.0.0 · 2026/3/23

  • Initial release of the neural-network-ops skill for diagnosing and tuning LLM providers (Groq, OpenRouter, Ollama).
  • Provides fast triage checks and routing policy recommendations to maintain OpenClaw responsiveness.
  • Details stable model constraints and fallback strategies for local and cloud providers.
  • Includes a step-by-step recovery playbook for common issues such as silent bot, rate limits, and provider errors.
  • Defines a standard output format for health reports and introduces strict operational guardrails.


Install Command

Official: npx clawhub@latest install neural-network-diagnostics
Mirror: npx clawhub@latest install neural-network-diagnostics --registry https://cn.clawhub-mirror.com

Skill Documentation

Purpose

Keep OpenClaw responsive by managing model providers, routing, and fallback behavior.

Fast Triage

Run these checks first:

systemctl is-active openclaw-gateway ollama
journalctl -u openclaw-gateway -n 40 --no-pager
free -h

Focus on these log patterns:

  • rate limit reached
  • Model context window too small
  • Unknown model
  • No endpoints available
  • sendMessage failed
  • embedded run timeout
  • Removed orphaned user message
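The patterns above can be checked in one pass. A minimal sketch, assuming a POSIX shell; `scan_logs` is an illustrative helper name, not part of the skill:

```shell
# Filter any log stream for the failure signatures listed above.
scan_logs() {
  grep -Ei 'rate limit reached|Model context window too small|Unknown model|No endpoints available|sendMessage failed|embedded run timeout|Removed orphaned user message'
}

# Typical use:
#   journalctl -u openclaw-gateway -n 200 --no-pager | scan_logs
```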

Routing Policy

Use this default priority for production:

  • groq/llama-3.3-70b-versatile (fastest cloud path)
  • openrouter/xiaomi/mimo-v2-pro (high quality backup)
  • openrouter/meta-llama/llama-3.3-70b-instruct:free
  • ollama/qwen2.5:7b (last-resort local fallback)

Avoid 35B local models on 30GB RAM CPU servers for real-time Telegram replies.
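A fallback chain like this is usually expressed in the gateway's provider config. The sketch below is illustrative only; the actual schema and field names depend on your OpenClaw version:

```json
{
  "routing": {
    "priority": [
      "groq/llama-3.3-70b-versatile",
      "openrouter/xiaomi/mimo-v2-pro",
      "openrouter/meta-llama/llama-3.3-70b-instruct:free",
      "ollama/qwen2.5:7b"
    ]
  }
}
```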

Stable Model Constraints

For local Ollama fallbacks:

  • contextWindow >= 16000
  • Keep maxTokens moderate (1024-2048) for latency
  • Pre-warm after restart if local fallback is expected

Example local provider entry:

{
  "id": "qwen2.5:7b",
  "name": "Qwen 2.5 7B (local)",
  "contextWindow": 32768,
  "maxTokens": 2048
}

Recovery Playbook

1) Bot silent in Telegram

journalctl -u openclaw-gateway --since '10 min ago' --no-pager

If sendMessage failed, check network/provider errors first, then restart:

systemctl restart openclaw-gateway

2) Repeated orphaned user message

systemctl stop openclaw-gateway
rm -rf /root/.openclaw/.openclaw/agents/main/sessions/*
echo '{}' > /root/.openclaw/.openclaw/agents/main/sessions/sessions.json
chmod 600 /root/.openclaw/.openclaw/agents/main/sessions/sessions.json
systemctl start openclaw-gateway
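A safer variant of the steps above takes a backup before deleting anything. `reset_sessions` is our illustrative helper name; pointing it at a scratch directory lets you rehearse before touching /root:

```shell
# Hypothetical wrapper: back up, then reset OpenClaw session files.
reset_sessions() {
  dir="${1:?usage: reset_sessions <sessions-dir>}"
  backup="$(mktemp -u /tmp/openclaw-sessions-XXXXXX).tar.gz"
  tar -czf "$backup" -C "$dir" . && echo "Backup written to $backup"
  rm -rf "${dir:?}"/*          # :? guards against an empty variable expanding to /*
  printf '{}' > "$dir/sessions.json"
  chmod 600 "$dir/sessions.json"
}

# Intended use (requires root; stop the gateway first):
#   systemctl stop openclaw-gateway
#   reset_sessions /root/.openclaw/.openclaw/agents/main/sessions
#   systemctl start openclaw-gateway
```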

3) Groq/OpenRouter rate limits

  • Keep Groq as primary, but ensure at least one non-free fallback.
  • For OpenRouter 404 privacy/policy errors, adjust data-policy settings in OpenRouter dashboard.
  • Do not loop retries endlessly; rely on fallback chain.
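The "rely on fallback chain" rule can be sketched as a bounded probe in priority order. `try_provider` is a placeholder for whatever health check your gateway exposes, not a real OpenClaw command:

```shell
# Try each provider once, in priority order, instead of retrying a
# rate-limited provider in a loop. Prints the first healthy provider.
first_healthy() {
  for provider in "$@"; do
    if try_provider "$provider"; then
      echo "$provider"
      return 0
    fi
  done
  return 1
}

# Typical use:
#   first_healthy groq/llama-3.3-70b-versatile ollama/qwen2.5:7b
```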

4) Local fallback too slow

  • Restart Ollama cleanly and warm one small model.
  • Do not keep multiple heavy runners resident.
systemctl restart ollama
curl -s -X POST http://127.0.0.1:11434/api/chat \
  -H 'Content-Type: application/json' \
  -d '{"model":"qwen2.5:7b","messages":[{"role":"user","content":"hi"}],"stream":false}'
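After a restart it can help to wait until Ollama actually answers before routing traffic back to the local fallback. A small sketch; `wait_ready` is an illustrative helper, and /api/tags is Ollama's lightweight model-listing endpoint:

```shell
# Poll a URL until it responds successfully, or give up after N tries.
wait_ready() {
  url="$1"; tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    curl -sf "$url" > /dev/null && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Typical use after the restart above:
#   wait_ready http://127.0.0.1:11434/api/tags && echo "Ollama ready"
```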

Output Format

When reporting health, return:

## Status
  • Gateway:
  • Telegram provider:
  • Primary model:
  • Fallback chain:

## Findings

## Actions Applied

## Next Step

Guardrails

  • Never expose raw API keys in replies.
  • Never execute irreversible financial actions automatically.
  • Ask for explicit confirmation before account registrations or external postings.
  • Prefer reversible config changes and keep backups before major edits.
Data source: ClawHub