📦 qqbot-stt — QQ speech-to-text

v1.0.0

Deploys the Qwen3-ASR model locally and exposes it over an HTTP service, giving QQBot high-accuracy speech-to-text with no cloud dependency, low latency, and strong privacy.

Last updated: 2026/4/21
Security scan
VirusTotal: harmless
OpenClaw: suspicious (medium confidence)
Assessment recommendations
Before installing or running this skill:
- Treat it as suspicious until you verify sources: the SKILL.md and files refer to 'local-stt' while registry metadata says 'qqbot-stt' — verify you obtained the intended package.
- Do not run the server or scripts as root. Run them in an isolated environment (dedicated user account or container/VM).
- Inspect and confirm the file referenced by main.py (QWEN_ASR_SCRIPT = /Users/reks/.openclaw/skills/qwen-asr/scripts/main.py). Hard-coded absolute paths ar...
Detailed analysis
Purpose and capabilities
The README / SKILL.md describe a 'local-stt' skill providing a Qwen3-ASR HTTP/CLI STT service for QQBot, which is coherent with the included code. However the package metadata says 'qqbot-stt' while files and instructions repeatedly refer to 'local-stt' (name mismatch). main.py hard-codes QWEN_ASR_SCRIPT to /Users/reks/.openclaw/skills/qwen-asr/scripts/main.py (a user-specific absolute path) which does not match the skill's own layout and is unexpected for a distributable skill.
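A more portable pattern, shown here only as a hypothetical sketch (the `QWEN_ASR_SCRIPT` environment override and the package-relative fallback are assumptions, not something this skill actually defines), would resolve the script path at startup instead of hard-coding a user-specific home directory:

```python
from pathlib import Path

def resolve_asr_script(env: dict, default: Path) -> Path:
    """Prefer an explicit QWEN_ASR_SCRIPT override from the environment;
    otherwise fall back to a path relative to the skill's own layout,
    rather than a hard-coded /Users/... absolute path."""
    return Path(env.get("QWEN_ASR_SCRIPT", default))
```

A caller would typically pass something like `Path(__file__).parent / "scripts" / "main.py"` as the default, so the skill works wherever it is unpacked.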
Instruction scope
SKILL.md asks you to clone/install a different skill (local-stt) and edit ~/.openclaw/openclaw.json — that is expected — but it also presumes external toolchains (HuggingFace model downloads, ffmpeg) while requirements.txt does not list the model/ML libs. The instructions and code reference an env var MLX_ASR_MODEL and rely on external network/model downloads; SKILL.md and requires.env declare no required env vars, so the runtime assumptions are under-specified. The docs also encourage running openclaw gateway and grepping logs (harmless) but give the agent broad leeway to run system commands during setup.
Installation mechanism
No install spec is provided (instruction-only), yet the package includes executable code files that must be run manually. requirements.txt only lists fastapi, uvicorn, python-multipart but the code imports mlx_qwen3_asr and uses model-serving libraries (transformers/torch) and ffmpeg; those dependencies are missing from requirements.txt. Running pip install -r requirements.txt will not install the packages actually required, and following the README will cause manual installs that fetch heavy third-party ML packages (and arbitrary code via 'trust_remote_code=True'). This increases risk because additional packages will be pulled from PyPI/HuggingFace at runtime.
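A fail-fast preflight check would surface that gap between requirements.txt and the real imports before the server starts. This is only a sketch: the module list is inferred from the report above, and `mlx_qwen3_asr` / `multipart` may not be the exact top-level import names.

```python
import importlib.util

def missing_modules(modules):
    """Return the modules that cannot be resolved in the current environment."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

# Modules the code reportedly needs at runtime; requirements.txt only
# declares the first three, so mlx_qwen3_asr would show up as missing
# after a plain `pip install -r requirements.txt`.
RUNTIME_MODULES = ["fastapi", "uvicorn", "multipart", "mlx_qwen3_asr"]
```

Running such a check at startup (and refusing to launch on a non-empty result) is cheaper than discovering an ImportError mid-request.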
Credential requirements
The skill declares no required env vars, but server.py and transcribe.py read MLX_ASR_MODEL and code expects ffmpeg on PATH. There are no API keys requested, which is proportionate, but the hidden dependency on MLX_ASR_MODEL (and possible use of huggingface credentials when downloading models) is not surfaced in the metadata.
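Those hidden assumptions could be surfaced with a small startup check. This is a hypothetical sketch (the skill ships no such check), with the ffmpeg probe passed in as a boolean so the logic stays testable:

```python
def check_runtime_env(env, have_ffmpeg):
    """Report the undeclared runtime assumptions noted above:
    MLX_ASR_MODEL should be set, and ffmpeg must be on PATH."""
    problems = []
    if not env.get("MLX_ASR_MODEL"):
        problems.append("MLX_ASR_MODEL unset")
    if not have_ffmpeg:
        problems.append("ffmpeg missing from PATH")
    return problems
```

In a real service this would be called as `check_runtime_env(os.environ, shutil.which("ffmpeg") is not None)`, with startup aborted if anything is returned.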
Persistence and permissions
always is false; the skill does not request forced inclusion or elevated platform privileges. It runs as a normal local service/CLI, so persistence/privilege requests are appropriate for its purpose.
Security verdicts are tiered; review the code before you run it.

Runtime dependencies

No special dependencies declared

Versions

latest · v1.0.0 · 2026/3/20 · scan: harmless

Install command

Official: npx clawhub@latest install qqbot-stt
Mirror (CN): npx clawhub@latest install qqbot-stt --registry https://cn.longxiaskill.com
Source: ClawHub · Chinese optimization: 龙虾技能库