Windows Ollama — Multi-machine Ollama on Windows
v1.0.0 — Deploy Ollama on Windows, run mainstream models such as Llama, Qwen, DeepSeek, Phi, and Mistral with one command, and route across a LAN cluster of multiple PCs for distributed inference and load balancing.
Last updated: 2026/4/4
Security scan
OpenClaw: Suspicious (medium confidence)
The skill's instructions generally match its stated purpose (running an Ollama herd on Windows), but there are several proportionality and exposure concerns (untracked pip install, opening a network service/firewall rule, and slight metadata inconsistencies) that merit caution before installing.
Assessment advice
This skill appears to do what it says (help run an Ollama fleet on Windows), but installing and running it will install third-party code via pip and open a network router port on your PC. Before proceeding: 1) inspect the 'ollama-herd' package source (the linked GitHub repo) and confirm the pip package you will install matches that repo; 2) prefer installing from a pinned release, or review the package contents locally before running; 3) only enable the firewall rule on trusted private networks.
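As a concrete way to follow recommendations 1) and 2), you can fetch the package without installing or executing it, inspect the contents, and only then install a pinned release. A sketch for a Windows command prompt — the directory name is arbitrary, and X.Y.Z is a placeholder for the release you actually audited:

```shell
:: Download the ollama-herd package without installing or running it
pip download ollama-herd --no-deps -d ollama-herd-pkg

:: List the archive contents before deciding to install
:: (compare the files against the linked GitHub repository)
tar -tf ollama-herd-pkg\ollama-herd-X.Y.Z.tar.gz

:: Once reviewed, install the exact version you audited, not "latest"
pip install ollama-herd==X.Y.Z
```

Pinning the version ensures a later (possibly compromised) release cannot silently replace the code you reviewed.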
ℹ Purpose and capabilities
The SKILL.md describes exactly the intended functionality (install Ollama, pip install ollama-herd, run herd and herd-node, open a router port, monitor nodes). That purpose explains the need for curl/wget and optional python/pip/nvidia-smi. However, the skill's declared required binaries do not list the 'herd'/'herd-node' or Ollama binaries it instructs you to run — the instructions instead tell the user to install them. This is a minor coherence mismatch (metadata doesn't enumerate the actual runtime binaries the instructions rely on).
⚠ Instruction scope
The instructions direct the user to run a pip install (fetch and execute third-party Python code), start a network service listening on port 11435, add a Windows Firewall rule to allow inbound connections, and read/write per-user environment variables and local fleet files (~/.fleet-manager/*). Those actions are within the stated purpose but broaden the attack surface: installing an external package and opening a listener on the local network can permit remote clients to send arbitrary inference requests to your machine and may process potentially sensitive data. The SKILL.md otherwise confines network calls to local endpoints and to trusted sites (ollama.ai, GitHub links).
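Concretely, the exposure described above comes from steps like the following (reconstructed from the SKILL.md description; the rule name is arbitrary, and 11435 is the router port the skill documents):

```shell
:: Start the herd router, which listens for LAN clients on TCP 11435
herd --port 11435

:: The skill also has you open that port to inbound traffic -- this is
:: the firewall rule that widens the attack surface. Restricting it to
:: the Private profile limits exposure to trusted networks.
netsh advfirewall firewall add rule name="Ollama Herd Router" ^
    dir=in action=allow protocol=TCP localport=11435 profile=private
```

Anything that can reach that port can submit inference requests, which is why the scan flags the rule as a deliberate broadening of the attack surface rather than an incidental one.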
ℹ Installation mechanism
There is no formal install spec in the skill bundle — it's instruction-only. It tells the user to run 'pip install ollama-herd' and to download Ollama from ollama.ai. Pip installing an external package is a normal but non-trivial action: it fetches and executes code from PyPI (or the package's index) and can install daemons/binaries. This is expected for the described functionality but increases risk compared to an instruction-only skill that requires only preinstalled tools.
✓ Credential requirements
The skill does not request credentials or secrets and declares no required env vars. The content suggests setting OLLAMA_* user environment variables for performance; these are reasonable and directly relevant to Ollama's operation. No unrelated credentials, API keys, or unexpected config paths are requested beyond the service's own fleet files (~/.fleet-manager).
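For reference, the OLLAMA_* tuning the skill suggests would be set as per-user environment variables, e.g. via setx (the specific values below are illustrative placeholders, not prescribed settings):

```shell
:: setx writes per-user environment variables; they take effect in new shells
setx OLLAMA_HOST "127.0.0.1:11434"
setx OLLAMA_NUM_PARALLEL "2"
setx OLLAMA_KEEP_ALIVE "10m"
```

Because these are scoped to the current user and relate only to Ollama's own operation, the scan treats them as low-risk.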
⚠ Persistence and privileges
The skill instructs you to run a long-running router/agent (herd) that listens on a TCP port and to add an inbound firewall rule, which grants persistent network exposure. 'always' is false and the skill itself does not claim to modify other skills, but installing and running the herd process creates a persistent service that could accept requests from other machines. This persistent network presence combined with a third-party pip-installed package increases the blast radius if the package or service is malicious or misconfigured.
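If you do install it, you can audit and later tear down the persistent exposure described above (the rule name below is a placeholder — use whatever name the rule was created with, and 11435 is the router port from the skill's docs):

```shell
:: Check whether anything is still listening on the herd router port
netstat -ano | findstr :11435

:: Remove the inbound firewall rule when you no longer need LAN access
netsh advfirewall firewall delete rule name="Ollama Herd Router"
```

Closing the rule when the herd is not in use shrinks the window during which other machines can reach the service.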
Security comes in layers; review the code before running it.
Runtime dependencies
🖥️ OS: Windows
Versions
latest: v1.0.0 (2026/4/4)
- Initial release of Windows Ollama: run Ollama across multiple Windows PCs with NVIDIA RTX GPUs.
- Multi-machine load balancing, health monitoring, and real-time dashboard included.
- Supports routing Ollama inference for Llama, Qwen, DeepSeek, Phi, and Mistral models.
- Simple setup and monitoring commands for the Windows environment.
- Guardrails: explicit user confirmation required for model downloads/deletions; no automatic model pulls.
● Pending
Install commands
Official: npx clawhub@latest install windows-ollama
Mirror (CN accelerated): npx clawhub@latest install windows-ollama --registry https://cn.longxiaskill.com