📦 llama-params-optimizer
v1.2.0 — A complete methodology for local LLM performance optimization. Key finding, the "context sweet spot": cutting context by 6% yielded a 75% speed-up with zero quality loss.
Last updated
2026/4/26
Security scan
OpenClaw
Safe
Medium confidence: this skill is an instruction-only guide for tuning llama.cpp/llama-server runtime parameters. Its requirements and instructions all serve that purpose, but it contains one potentially risky recommendation (binding the server to 0.0.0.0) that users should treat with caution.
Assessment advice
This guide appears to be a legitimate, self-contained methodology for tuning llama.cpp/llama-server. Before using it:
1) Do not run example server commands that bind to 0.0.0.0 on production or untrusted networks; prefer localhost, or restrict access with a firewall, reverse proxy, or authentication.
2) mlock and very large ctx-size values may require root privileges and can exhaust system resources; test on controlled hardware.
3) Use synthetic or non-sensitive prompts when benchmarking to avoid unintentionally ...
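The hardening advice above can be sketched as a small launch wrapper. The flags (`--host`, `--port`, `-c`) are real llama-server options, but the model path, port, and context size are placeholder assumptions; the script prints the command instead of executing it so it can be reviewed first.

```shell
#!/bin/sh
# Safer llama-server launch sketch: bind to loopback only and start
# with a modest context size. MODEL is a placeholder path.
MODEL="${MODEL:-./models/model.gguf}"
HOST="127.0.0.1"        # never 0.0.0.0 on production or untrusted networks
PORT="${PORT:-8080}"
CTX="${CTX:-8192}"      # grow only as benchmarks justify

CMD="llama-server -m $MODEL --host $HOST --port $PORT -c $CTX"
# Dry run: print the command for review rather than exec'ing it.
echo "$CMD"
```

If remote access is genuinely needed, put the server behind a reverse proxy with authentication rather than widening the bind address.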
✓ Purpose and capabilities
Name and description claim a methodology for local LLM performance optimization; SKILL.md contains step‑by‑step benchmarking instructions, parameter lists, and example launch commands. All requested actions (run tests, adjust ctx-size, batch size, flash-attn, kv quantization, etc.) are relevant to the stated purpose.
ℹ Instruction scope
Instructions are detailed and stay within deployment and benchmarking scope. Notable caution: example launch commands include --host 0.0.0.0 and a public port (8080), which would expose the model server to the network — this is a security-sensitive deployment choice but is related to running llama-server. The doc also recommends mlock and wide ctx-size testing (resource intensive). The guide does not instruct reading unrelated files, exfiltrating data, or contacting external endpoints.
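The "wide ctx-size testing" noted above can be driven by a small sweep script. The `-c` flag is a real llama-server option; the candidate sizes and model path here are illustrative assumptions, and the script only emits commands (a dry run) so each configuration can be reviewed and benchmarked in isolation.

```shell
#!/bin/sh
# Emit one loopback-bound launch command per candidate context size.
# Commands are printed, not executed, for review before benchmarking.
MODEL="./models/model.gguf"   # placeholder path
for CTX in 4096 8192 16384 32768; do
  echo "llama-server -m $MODEL --host 127.0.0.1 -c $CTX"
done
```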
✓ Install mechanism
Instruction-only skill with no install spec and no code files — nothing is downloaded or written by the skill itself. This is the lowest-risk install posture.
✓ Credential requirements
No environment variables, credentials, or config paths are requested. The demands in the guide (GPU, VRAM, CPU binding, large context sizes) are proportional to a performance‑tuning methodology and do not request extraneous secrets or unrelated service credentials.
✓ Persistence and privileges
Skill is not always-enabled and does not request persistent privileges or attempt to modify other skills or system settings. Autonomous invocation is allowed by platform default but the skill itself doesn't request elevated or persistent agent-level privileges.
Security is layered; review the code before running it.
Runtime dependencies
No special dependencies
Version
latest · v1.2.0 · 2026/4/26
llama-params-optimizer v1.2.0
- Added full bilingual (Chinese/English) documentation for worldwide use.
- Refined the description and keywords to highlight the speed "sweet spot" and broad applicability.
- Streamlined the step-by-step tuning workflow to lower the barrier to entry.
- Showcased a real-world case: 6% context reduction, 75% speed-up, zero quality loss.
- Updated the counterintuitive findings and mitigation tips to cover more hardware and models.
● Harmless
Install command
Official: npx clawhub@latest install llama-params-optimizer
Mirror (accelerated): npx clawhub@latest install llama-params-optimizer --registry https://cn.longxiaskill.com