🚀 LLM Deploy — Skill Tool
v1.0.0 · Deploy LLM model services (vLLM) on GPU servers. Supports multi-server configuration, automatic checks of GPU and port usage, and one-command deployment of popular open-source large language models.
Security Scan
OpenClaw
Suspicious
Medium confidence: the skill's purpose (deploying vLLM over SSH) is plausible and mostly consistent, but there are small but important mismatches (no deploy script provided, implicit use of the user's SSH keys and home config) that you should understand before installing or running commands.
Assessment Recommendations
This skill is an instruction-only deployment guide that uses SSH to run commands on your machines. Before using it: (1) Do not copy or run any 'llm-deploy' script unless you have the actual script source — this package does not include an executable even though the README suggests one. (2) Understand that SSH will use your local SSH keys/agent and that the skill will read/write ~/.config/llm-deploy/*. (3) Inspect all remote commands (tmux, conda activate, vllm serve) before running them — they w...
ℹ Purpose & Capabilities
The name/description match the instructions: this is an SSH-based how-to for deploying vLLM on GPU servers. Requesting the ssh binary is appropriate. However, the README suggests an 'llm-deploy' script that users should copy into PATH, while the package contains only SKILL.md and README.md (no script). That omission is a mismatch: either the skill is instruction-only (the agent runs SSH commands directly) or it is missing an executable to install.
ℹ Instruction Scope
The SKILL.md explicitly instructs the agent/user to create and read configuration files under ~/.config/llm-deploy, to run ssh to arbitrary hosts (including running nvidia-smi, lsof, and remote tmux sessions), and to invoke conda and vllm on the remote host. Those actions are within the stated deployment purpose, but they implicitly require access to SSH keys/config and will run arbitrary commands on remote machines — the instructions do not limit or warn about that beyond high-level notes.
✓ Install Mechanism
No install spec is present (instruction-only), which is the lowest-risk install mechanism. The README's copy-to-PATH suggestion is inconsistent with the package contents (no script provided). There's no remote download or archive extraction in the skill itself.
⚠ Credential Requirements
The skill declares no required environment variables, which is consistent, but it implicitly depends on access to local SSH credentials (private keys and SSH agent/config) and will read/write ~/.config/llm-deploy/*. Those implicit credential/config accesses are not called out in metadata. Users should be aware SSH keys/agents will be used and that the skill will create config files in their home directory.
✓ Persistence & Permissions
always:false and no install effects are declared. The skill does instruct creating config files under the user's home directory, but it does not request system-wide changes or persistent elevated privileges. No changes to other skills or global agent config are specified.
Security ratings are tiered; review the code before running it.
Runtime Dependencies
No special dependencies
Versions
latest · v1.0.0 · 2026/3/13
llm-deploy v1.0.0 – initial release
- One-command deployment of vLLM open-source LLM services across multiple GPU servers
- Automatic checks of server GPU status and port usage
- Configure and switch between multiple servers; manage popular and custom models
- Common commands for deploying models, listing service processes, and stopping services
- Detailed quick-start and manual-operation instructions included
● Harmless
Install Command
Official: npx clawhub@latest install llm-deploy
Mirror (CN accelerated): npx clawhub@latest install llm-deploy --registry https://cn.clawhub-mirror.com
Skill Documentation
Quickly deploy vLLM model services on GPU servers.
✨ Features
- 🖥️ Multi-server support - configure multiple GPU servers and switch between them
- 🔍 Automatic checks - verify GPU status and port usage with one command
- 🤖 Model library - preset configurations for popular models
- ⚡ Fast deployment - start a service with a single command
📋 Quick Start
1. Configure servers
Create ~/.config/llm-deploy/servers.json:
{
"servers": {
"gpu1": {
"host": "gpu1",
"user": "lnsoft",
"gpu_count": 4,
"model_path": "/data/models/llm"
},
"my-gpu": {
"host": "192.168.1.100",
"user": "ubuntu",
"gpu_count": 2,
"model_path": "/home/ubuntu/models"
}
},
"default_server": "gpu1"
}
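As a sketch of how a wrapper like llm-deploy might consume this file (the package ships no script, so this is purely illustrative), the default server can be resolved to a user@host SSH target. The temp directory below stands in for ~/.config/llm-deploy, and python3 is used only for JSON parsing:

```shell
# Illustrative only: resolve the default server from servers.json.
CONFIG_DIR="$(mktemp -d)"   # stand-in for ~/.config/llm-deploy in this sketch
cat > "$CONFIG_DIR/servers.json" <<'EOF'
{"servers":{"gpu1":{"host":"gpu1","user":"lnsoft"}},"default_server":"gpu1"}
EOF

# Build the "<user>@<host>" target for the default server.
TARGET="$(python3 - "$CONFIG_DIR/servers.json" <<'PY'
import json, sys
cfg = json.load(open(sys.argv[1]))
srv = cfg["servers"][cfg["default_server"]]
print(srv["user"] + "@" + srv["host"])
PY
)"
echo "$TARGET"
```

The resulting target is what every subsequent ssh command in this guide would connect to.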
2. Check server status
# Use the default server
llm-deploy check
# Specify a server
llm-deploy check --server gpu1
3. Deploy a model
# Deploy a preset model
llm-deploy deploy deepseek-r1-32b
# Specify a port
llm-deploy deploy deepseek-r1-32b --port 8112
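The commands above accept --server and --port flags. A minimal, hypothetical version of the option parsing such a wrapper could use looks like this (the flag names match the examples; everything else is illustrative, not the skill's actual code):

```shell
# Hypothetical option parsing for an llm-deploy-style wrapper.
SERVER="" PORT=8111   # defaults; 8111 matches the examples in this document

parse_args() {
  while [ $# -gt 0 ]; do
    case "$1" in
      --server) SERVER="$2"; shift 2 ;;   # override the target server
      --port)   PORT="$2";   shift 2 ;;   # override the service port
      *)        shift ;;                  # positional args (subcommand, model)
    esac
  done
}

parse_args deploy deepseek-r1-32b --port 8112
echo "server=$SERVER port=$PORT"
```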
🎛️ Available Commands
check - check server status
Checks GPU memory and port usage.
llm-deploy check [--server NAME] [--port PORT]
Example output:
✅ GPU status OK
- 4 × Tesla T4 (15GB)
- Memory in use: 12.6GB per card
- Temperature: 51-55°C
✅ Port 8111 available
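Behind a report like this, check presumably parses nvidia-smi's memory figures. Here is a small local sketch of that parsing, using a hard-coded sample line in the CSV shape that nvidia-smi --query-gpu=memory.used,memory.total --format=csv,noheader emits (no live GPU required; the parsing itself is an assumption about how the skill works):

```shell
# Sample line in nvidia-smi's CSV shape (not live output).
LINE="12600 MiB, 15360 MiB"

# Extract the numeric fields with POSIX parameter expansion.
USED_MB="${LINE%% MiB*}"      # "12600"
REST="${LINE#*, }"            # "15360 MiB"
TOTAL_MB="${REST%% MiB*}"     # "15360"

FREE_MB=$(( TOTAL_MB - USED_MB ))
echo "free: ${FREE_MB} MiB of ${TOTAL_MB} MiB"
```

A real check would run nvidia-smi over ssh and repeat this per GPU.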
deploy - deploy a model
Starts a vLLM model service.
llm-deploy deploy [--server NAME] [--port PORT]
Supported models:
- deepseek-r1-32b - DeepSeek-R1-Distill-Qwen-32B-AWQ
- llama-3-8b - Llama 3 8B
- qwen-7b - Qwen 7B
- mistral-7b - Mistral 7B
list - list available models
llm-deploy list
ps - show running services
llm-deploy ps [--server NAME]
stop - stop a service
llm-deploy stop [--server NAME] [--port PORT]
🔧 Manual Usage (without the script)
If you'd rather not use the wrapper script, you can run the raw commands directly:
Check GPU
ssh <user>@<host> nvidia-smi
Check a port
ssh <user>@<host> "lsof -i :<port> 2>/dev/null || echo 'Port available'"
Deploy a model (DeepSeek R1 32B)
ssh <user>@<host> "tmux new-session -d -s vllm '
source /data/miniconda3/etc/profile.d/conda.sh && \
conda activate vllm && \
cd /data/models/llm && \
vllm serve /data/models/llm/deepseek/DeepSeek-R1-Distill-Qwen-32B-AWQ/ \
--tensor-parallel-size 4 \
--max-model-len 102400 \
--dtype half \
--port 8111 \
--served-model-name gpt-4o-mini
'"
📦 Adding Custom Models
Add an entry to ~/.config/llm-deploy/models.json:
{
"my-model": {
"name": "My Awesome Model",
"path": "/path/to/model",
"tensor_parallel_size": 2,
"max_model_len": 8192,
"dtype": "half",
"port": 8111,
"served_model_name": "my-model"
}
}
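To illustrate how such an entry maps onto a vllm serve invocation, here is a hypothetical sketch that renders the example config into a command line (this mirrors the manual command shown earlier; it is not the skill's actual implementation, and python3 is used only for JSON parsing):

```shell
# Illustrative only: render a models.json entry into a vllm serve command.
CFG='{"my-model":{"path":"/path/to/model","tensor_parallel_size":2,"max_model_len":8192,"dtype":"half","port":8111,"served_model_name":"my-model"}}'

CMD="$(python3 <<PY
import json
m = json.loads('$CFG')["my-model"]
print("vllm serve %s --tensor-parallel-size %d --max-model-len %d --dtype %s --port %d --served-model-name %s" % (
    m["path"], m["tensor_parallel_size"], m["max_model_len"],
    m["dtype"], m["port"], m["served_model_name"]))
PY
)"
echo "$CMD"
```

The rendered command is what would ultimately run inside the remote tmux session.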
⚠️ Notes
- Check before deploying - always run check first to confirm resources are available
- Run in the background - use tmux/screen to keep the service running
- Port management - use a different port for each model
- Memory estimates - a 7B model needs roughly 8-10GB; a 32B model roughly 10-14GB per card
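The 32B figure can be sanity-checked with back-of-envelope arithmetic: a 4-bit AWQ model stores roughly 0.5 bytes per parameter, split across the tensor-parallel cards, with KV cache and activations accounting for the rest of the observed usage. A sketch (the 0.5 bytes/param figure is a common AWQ approximation, not a number from this skill):

```shell
# Rough weights-only estimate for a 4-bit AWQ model.
params_b=32     # parameters, in billions
tp=4            # tensor-parallel GPU count (matches the deploy example)

# 0.5 bytes/param: multiply by 5 and divide by 10 to stay in integer math.
weights_gb=$(( params_b * 5 / 10 ))
per_card_gb=$(( weights_gb / tp ))
echo "${per_card_gb} GB/card of weights"
```

That leaves the gap up to the quoted 10-14GB/card for KV cache (which grows with --max-model-len) and activations.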
🔗 Links
- vLLM docs: https://docs.vllm.ai
- Model downloads: https://huggingface.co/models
- Issues: https://github.com/your-username/llm-deploy-skill
Contributed by the OpenClaw community 🦞
Source: ClawHub · Chinese localization: 龙虾技能库 (Lobster Skill Library)