📦 Funasr Transcribe - Meeting Recording Transcription

v1.3.0

Use the TranscribeThis skill when the user asks to "transcribe a meeting", "transcribe audio", "transcribe a meeting recording", "convert audio to text", or "generate …".

Last updated: 2026/4/21
Security scan
VirusTotal: harmless
OpenClaw: suspicious (medium confidence)
The skill largely does what it says (meeting/podcast transcription) but contains several mismatches and risky behaviors (undeclared credential use, automatic installation and patching of site-packages, and potential external LLM calls) that you should review before running.
Assessment notes
This skill implements the advertised transcription features but includes several behaviors you should consider before installing/running:
- Credential exposure: the scripts can call AWS Bedrock, Anthropic, or OpenAI-compatible endpoints and will send transcript excerpts to those services when 'LLM cleanup' is enabled. Provide API keys/credentials only to the providers you intend to use, and review call destinations in scripts/llm_utils.py. If you don't want external model calls, run with --ski...
Detailed analysis
Purpose and capabilities
Name/description match the included code: the scripts implement multi-speaker transcription, diarization, hotwords, and optional LLM cleanup. Required binaries (python3, ffmpeg) are appropriate. However, the SKILL.md and scripts rely on cloud LLM providers (AWS Bedrock, Anthropic, OpenAI-compatible) but the skill declares no required environment variables/credentials—this is an inconsistency (LLM usage normally requires API keys/credentials).
Instruction scope
Runtime instructions tell the agent to run setup_env.sh and python scripts which: (a) install system packages (apt-get/brew), (b) create/activate a venv and pip-install torch, funasr, modelscope, boto3, (c) auto-download models from ModelScope on first run, and (d) call LLMs with transcripts (call_llm). The SKILL.md also instructs writing/using hotwords, speaker-context and reference files. The instructions therefore read, process, and transmit transcript chunks to external LLM providers if LLM cleanup is enabled; that behavior is within the stated purpose but involves transmitting potentially sensitive audio/text externally and is not reflected in declared env/config requirements.
Installation mechanism
This skill is 'instruction-only' in the registry but ships with a setup script that performs network installs (pip, possibly apt/brew) and auto-downloads models. Additionally it includes patch_clustering.py which directly edits FunASR files in site-packages (modifies another package's code). There is no install spec in the registry, so these writes/edits are only documented in SKILL.md/scripts — a user might not expect the skill to modify installed packages or require sudo/network access.
Credential requirements
The registry declares no required environment variables, yet the code clearly expects credentials/config for LLM providers: llm_utils can call AWS Bedrock (boto3), Anthropic, or OpenAI APIs. SKILL.md explicitly says 'Prerequisites for LLM cleanup: AWS credentials with Bedrock InvokeModel permission' but does not surface required env vars in the skill manifest. This mismatch could lead to accidental use of available credentials or confusion about where to provide them. The LLM calls will transmit transcript excerpts to external services when enabled.
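Because the manifest declares no required environment variables, it is worth checking which provider credentials are actually present before enabling LLM cleanup. A minimal sketch of such a pre-flight check — the env-var names below are each SDK's conventional defaults and are an assumption here; scripts/llm_utils.py may read different variables, so verify against the skill's own code:

```python
import os

# Conventional credential env vars for the providers named in the analysis.
# ASSUMPTION: these are SDK defaults, not names confirmed by the skill.
PROVIDER_ENV_VARS = {
    "bedrock": ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"],
    "anthropic": ["ANTHROPIC_API_KEY"],
    "openai": ["OPENAI_API_KEY"],
}

def configured_providers(env=os.environ):
    """Return the providers whose required credentials are all set."""
    return [
        name for name, keys in PROVIDER_ENV_VARS.items()
        if all(env.get(k) for k in keys)
    ]

if __name__ == "__main__":
    found = configured_providers()
    if found:
        print("LLM cleanup could transmit transcripts to:", ", ".join(found))
    else:
        print("No provider credentials found in the environment.")
```

Running a check like this before the skill makes any LLM calls tells you exactly which services a transcript excerpt could reach from your current shell.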
Persistence and permissions
always:false (good) and the skill is not force-included. However, the patch_clustering script writes into site-packages (funasr's cluster_backend.py) which modifies third-party library code on the host—this is a system-wide modification outside the skill's own directory. While the patch's purpose (performance for long meetings) is explained, modifying installed packages is a high-impact action and should be made explicit to the user.
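Before letting patch_clustering.py edit site-packages, you can locate the target file and keep a pristine copy. A generic sketch — the exact funasr module path is not confirmed by the listing, so the commented call is an assumption:

```python
import importlib.util
import shutil
from pathlib import Path
from typing import Optional

def backup_module_source(module_name: str) -> Optional[Path]:
    """Copy a module's source file to <file>.orig before any in-place patch."""
    spec = importlib.util.find_spec(module_name)
    if spec is None or spec.origin is None:
        return None  # module not installed, or has no source file
    src = Path(spec.origin)
    dst = src.with_suffix(src.suffix + ".orig")
    if not dst.exists():          # never clobber an earlier backup
        shutil.copy2(src, dst)
    return dst

# ASSUMPTION: the analysis says the patch targets funasr's cluster_backend.py;
# the dotted module path below is illustrative, not verified:
# backup_module_source("funasr.models.campplus.cluster_backend")
```

With a `.orig` copy next to the patched file, reverting the skill's system-wide modification is a single `shutil.copy2` back.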
The security picture is mixed; review the code before running.

Runtime dependencies

None declared in the registry (the scripts themselves require python3 and ffmpeg; see the analysis above)

Versions

latest: v1.3.0 (2026/4/21)

- Expanded language support: now handles Chinese, English, Japanese, Korean, Cantonese, and 99 languages (via Whisper), with automatic speaker diarization and hotword biasing.
- New, detailed workflow: guides users to provide context such as meeting type, participant names, supporting documents, and preferences for language and number of speakers to optimize transcription quality.
- Enhanced presets and diarization: per-language model selection with clear caveats on diarization support, especially for `auto` and `whisper` modes.
- Optional LLM cleanup: supports post-processing transcripts with Bedrock, Anthropic, or OpenAI-compatible LLMs, with resume and skip options.
- Utility scripts included: a speaker verification and reassignment script helps detect and fix swapped or misidentified speakers.
- Audio preprocessing improvements: all inputs are auto-converted to 16 kHz mono FLAC for reliability, with detailed format recommendations.
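The 16 kHz mono FLAC preprocessing mentioned above corresponds to a standard ffmpeg invocation. A sketch of how such a conversion can be built and run — the skill's actual conversion flags may differ, so treat this as illustrative:

```python
import subprocess

def flac_convert_cmd(src: str, dst: str) -> list:
    """Build an ffmpeg command converting any input to 16 kHz mono FLAC."""
    return [
        "ffmpeg", "-y",   # overwrite the output file without prompting
        "-i", src,        # input file (any format ffmpeg can decode)
        "-ar", "16000",   # resample to 16 kHz, a typical ASR input rate
        "-ac", "1",       # downmix to a single (mono) channel
        dst,              # a .flac extension selects the FLAC encoder
    ]

def convert(src: str, dst: str) -> None:
    subprocess.run(flac_convert_cmd(src, dst), check=True)

# Example usage (requires ffmpeg on PATH):
# convert("meeting.m4a", "meeting.flac")
```

Normalizing every input this way is a common defensive choice: the ASR models then always see one known sample rate and channel layout regardless of the original recording format.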

Scan result: harmless

Install command

Official:
npx clawhub@latest install zxkane-audio-transcriber-funasr

Mirror (CN registry):
npx clawhub@latest install zxkane-audio-transcriber-funasr --registry https://cn.longxiaskill.com

Data source: ClawHub · Chinese optimization: Longxia Skill Library (龙虾技能库)