📦 ROCm vLLM Deployment — AMD GPU Deployment

v1.0.0

Production-grade, one-click vLLM deployment on AMD ROCm GPUs: automatically detects the environment and model parameters, brings the service up with Docker Compose, and includes built-in health checks so that large-model inference goes live efficiently and reliably.

Last updated: 2026/4/22
Security Scans
- VirusTotal: suspicious
- OpenClaw: suspicious (medium confidence)
Evaluation Recommendation
This skill appears to be a deployment helper and is not obviously malicious, but exercise caution before running it on a sensitive system. Specific actions to consider:
1. Inspect the scripts locally (you already have them) before executing.
2. Avoid putting long-lived HF tokens in shell rc files; prefer ephemeral environment variables or a protected .env file with strict permissions.
3. Be aware that check-env.sh will source ~/.bashrc (it will execute code from your rc file) and may create an empty ~/.b...
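The .env advice above can be sketched roughly as follows. This is a minimal illustration, not part of the skill: the file path under $HOME/vllm-compose and the placeholder token value are assumptions, and the final line merely demonstrates loading the secret for a single command instead of exporting it in a shell rc file.

```shell
#!/bin/sh
# Sketch: keep the Hugging Face token in an owner-only .env file and
# load it per-command, rather than persisting it in ~/.bashrc.
set -eu

# Assumed location; adjust to wherever your compose project lives.
ENV_FILE="$HOME/vllm-compose/.env"
mkdir -p "$(dirname "$ENV_FILE")"

# Restrict permissions before the secret is ever written.
umask 077
printf 'HF_TOKEN=%s\n' "hf_example_token" > "$ENV_FILE"
chmod 600 "$ENV_FILE"

# Load the variable only inside a subshell for the command that needs it;
# nothing leaks into the parent shell or rc files.
( set -a; . "$ENV_FILE"; set +a; printenv HF_TOKEN >/dev/null )
```

Docker Compose can also read such a file directly via its `env_file` mechanism, which avoids exporting the token into any interactive shell at all.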
Detailed Analysis
Purpose and Capabilities
The skill claims to prepare and report on vLLM deployments, and it includes two helper scripts that match that scope (environment check and report generation). However, the registry metadata declares no required environment variables, while SKILL.md and the scripts clearly expect HF_TOKEN and, optionally, HF_HOME. There is also a small inconsistency: SKILL.md advises sourcing ~/.bash_profile, but check-env.sh actually sources ~/.bashrc.
Instruction Scope
check-env.sh sources the user's ~/.bashrc (executing arbitrary shell code from the user's rc file) and will create ~/.bashrc if missing. Both check-env.sh and generate-report.sh echo a truncated HF_TOKEN (first 10 characters) into stdout/logs and the generated report, which means sensitive token material can be written into deployment logs and DEPLOYMENT_REPORT.md under $HOME/vllm-compose/<model-id> — a potential secret-leakage risk. Aside from that, the scripts do not perform network calls or write to unexpected remote endpoints.
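A less leaky status check than echoing the first ten characters of the token is to report only whether it is set, and at most its length. The snippet below is a hypothetical replacement for that part of check-env.sh, not the script's actual code:

```shell
#!/bin/sh
# Hypothetical alternative to printing a token prefix: confirm HF_TOKEN
# is present without writing any part of the secret to stdout or logs.
if [ -n "${HF_TOKEN:-}" ]; then
    # ${#var} is POSIX parameter expansion for string length.
    echo "HF_TOKEN: set (${#HF_TOKEN} characters)"
else
    echo "HF_TOKEN: NOT set"
fi
```

Even the length is optional; "set"/"not set" alone is enough for a generated report, and keeps DEPLOYMENT_REPORT.md free of secret-derived material.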
Installation Mechanism
Instruction-only skill with no install spec and no external downloads. The scripts live in the skill directory and nothing in the manifest creates or executes external installers — low install risk.
Credential Requirements
Requesting HF_TOKEN and HF_HOME is appropriate for interacting with HuggingFace models, but the skill/README/manifest mismatch (registry says no required env vars) is confusing. More importantly, the scripts log the token prefix and include token status in generated reports, which is disproportionate handling of a secret. The skill does not ask for unrelated credentials.
Persistence and Permissions
The skill does not request elevated platform privileges or set always:true. It does write under $HOME/vllm-compose/<model-id>/ and will touch/create ~/.bashrc if absent — this is modest persistence in the user's home directory and should be expected for a deployment helper but is worth noting.
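Because everything the skill persists lives under the home directory, its footprint is easy to audit before and after a run. A minimal sketch, using only the paths named in the analysis above:

```shell
#!/bin/sh
# Sketch: enumerate the locations this skill is reported to touch,
# so you can snapshot them before running and diff afterwards.
set -eu

for path in "$HOME/vllm-compose" "$HOME/.bashrc"; do
    if [ -e "$path" ]; then
        echo "exists: $path"
    else
        echo "absent: $path"
    fi
done
```

Running this before and after installation shows at a glance whether the skill created ~/.bashrc or new model directories under $HOME/vllm-compose.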
Security comes in layers; review the code before running it.

Runtime Dependencies

No special dependencies

Versions

latest · v1.0.0 · 2026/3/1 · suspicious

Installation Commands

Official: npx clawhub@latest install rocm-vllm-deployment
Mirror: npx clawhub@latest install rocm-vllm-deployment --registry https://cn.longxiaskill.com
Data source: ClawHub · Chinese optimization: 龙虾技能库