# openclaw-macvoice

OpenClaw plugin for voice message support using native macOS speech APIs via `voicecli`.

⚠️ **macOS only** — This plugin requires macOS 13.0+ and uses native Apple frameworks (`SFSpeechRecognizer`, `AVSpeechSynthesizer`).
## Features

- 🎙️ Transcribe voice messages to text
- 🔊 Respond with voice — convert text responses to audio
- 🏠 Native macOS — uses `SFSpeechRecognizer` and `AVSpeechSynthesizer`
- ⚡ Fast — no cloud API calls, all on-device
## Prerequisites

- macOS 13.0+ (required)
- `voicecli` installed:

```shell
brew tap acwilan/voicecli
brew install voicecli
```
## Installation

```shell
# From OpenClaw skill directory
npm install openclaw-macvoice
```
## Usage

### Basic

```typescript
import macvoice from 'openclaw-macvoice';

// Initialize
const plugin = await macvoice.init(ctx, {
  voice: 'com.apple.voice.compact.en-US.Samantha',
  rate: 0.5,
});

// Transcribe a voice message
const transcription = await plugin.transcribe('/path/to/audio.m4a');
console.log('User said:', transcription);

// Respond with voice
const audioPath = await plugin.speak('Hello, how can I help you?');
// Send audioPath as voice message
```
### With Telegram Channel

```typescript
// In your Telegram OpenClaw handler
import macvoice from 'openclaw-macvoice';

export default {
  async onVoiceMessage(message, ctx) {
    // Initialize if not already
    if (!ctx.macvoice) {
      await macvoice.init(ctx, { rate: 0.5 });
    }

    // Transcribe
    const text = await ctx.macvoice.transcribe(message.audioPath);

    // Get AI response (your existing logic)
    const response = await ctx.llm.chat(text);

    // Convert to voice
    const responseAudio = await ctx.macvoice.speak(response);

    // Send voice response
    await ctx.telegram.sendVoice({
      chat_id: message.chat_id,
      voice: responseAudio,
    });
  },
};
```
## Configuration

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `voice` | `string` | — | Voice identifier (see `voicecli voices`) |
| `rate` | `number` | `0.5` | Speech rate (0.0-1.0) |
| `tempDir` | `string` | `os.tmpdir()` | Directory for temporary audio files |
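The option handling implied by the table above can be sketched as a small helper that fills in the documented defaults and clamps `rate` to its 0.0-1.0 range. The option names and defaults come from the table; `resolveOptions` itself is a hypothetical illustration, not part of the plugin's API:

```typescript
import * as os from 'os';

// Options as documented in the configuration table.
interface MacVoiceOptions {
  voice?: string;   // voice identifier, e.g. one listed by `voicecli voices`
  rate?: number;    // speech rate, 0.0-1.0 (default 0.5)
  tempDir?: string; // directory for temporary audio files (default os.tmpdir())
}

// Hypothetical helper: apply documented defaults and clamp `rate` into range.
function resolveOptions(opts: MacVoiceOptions = {}): Required<MacVoiceOptions> {
  const rate = Math.min(1, Math.max(0, opts.rate ?? 0.5));
  return {
    voice: opts.voice ?? '',
    rate,
    tempDir: opts.tempDir ?? os.tmpdir(),
  };
}
```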
## API

### `MacVoicePlugin`

#### `transcribe(audioPath: string): Promise<string>`

Transcribe an audio file to text.

#### `speak(text: string, options?): Promise<string>`

Convert text to speech. Returns the path to the generated audio file.

#### `processVoiceMessage(audioPath, options)`

Combined method: transcribe, then optionally respond with voice.
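One plausible shape for the combined method, sketched in terms of the two methods above. This is an assumption about how the pieces compose, not the plugin's actual implementation; the `respond` callback is hypothetical:

```typescript
// Minimal interface matching the API section above.
interface VoicePlugin {
  transcribe(audioPath: string): Promise<string>;
  speak(text: string, options?: object): Promise<string>;
}

interface ProcessResult {
  text: string;           // transcription of the input audio
  responseAudio?: string; // path to the spoken reply, if one was requested
}

// Hypothetical sketch: transcribe, then optionally speak a reply
// produced by a caller-supplied function.
async function processVoiceMessage(
  plugin: VoicePlugin,
  audioPath: string,
  options?: { respond?: (text: string) => Promise<string> },
): Promise<ProcessResult> {
  const text = await plugin.transcribe(audioPath);
  if (!options?.respond) return { text };
  const reply = await options.respond(text);
  const responseAudio = await plugin.speak(reply);
  return { text, responseAudio };
}
```

Injecting the plugin as a parameter keeps the sketch testable with a stub in place of the real macOS-only implementation.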
## Platform Support

| Platform | Status |
| --- | --- |
| macOS 13.0+ | ✅ Supported |
| Linux | ❌ Not supported |
| Windows | ❌ Not supported |
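Since the plugin is macOS-only, callers on mixed deployments may want to guard before initializing. A minimal sketch (the guard is an assumption for illustration, not part of the plugin API):

```typescript
// Returns true only for macOS ('darwin'), per the platform table above.
// Pass process.platform at the call site.
function isSupportedPlatform(platform: string): boolean {
  return platform === 'darwin';
}
```

Typical usage: `if (!isSupportedPlatform(process.platform)) throw new Error('openclaw-macvoice requires macOS 13.0+');`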
## License

MIT