
✂️ Highlight Editor Video — Video Highlight Editing

v1.0.0

Turn a 2-hour sports game recording into 1080p highlight clips by describing what you want in text. Whether you're generating short highlights from a long video or quickly producing social content, just upload the raw footage and describe the result. No timeline dragging, no export settings: from upload to download in 1-2 minutes.

by @dsewell-583h0 · MIT-0
License: MIT-0
Last updated: 2026/4/14
Security scan: VirusTotal, harmless
OpenClaw: safe (medium confidence)
The credentials, endpoints, and runtime instructions this skill requests are consistent with a cloud-based video highlight service, but it uploads user videos to an external API, and there are minor metadata/instruction mismatches that should be checked before use.
Assessment Recommendations
This skill uploads any video you paste into the chat to an external API (mega-api-prod.nemovideo.ai) and requires an access token (NEMO_TOKEN); if you don't provide one, the skill requests an anonymous token on your behalf. Before installing or using it: 1) Verify that you trust the target domain and understand its data-retention/privacy policy (your raw footage is sent off-device). 2) Confirm how your agent stores the token and session ID (they may be kept in memory or on disk). 3) Note that SKILL.md asks the agent to read its own frontmatter and detect the install path (~/.clawhub/, ~/.cursor/skills/) to set attribution headers; this may be harmless, but you may want to clarify whether the skill reads other local files or the ~/.config/nemovideo/ directory (the registry and SKILL.md disagree). If these behaviors are acceptable, the skill is consistent with its stated purpose; if you need stronger guarantees, ask the vendor/source for a verified homepage and privacy/retention details before sending private footage. ...
Detailed Analysis
Purpose and Capabilities
The skill claims to produce highlight reels, requests only an API token (NEMO_TOKEN), and calls a single external service (mega-api-prod.nemovideo.ai), which is proportionate to its stated purpose. Requiring an auth token to call the service is expected.
Instruction Scope
The runtime instructions describe uploading user videos and managing sessions/tokens on the remote API (expected). The skill also instructs the agent to read this SKILL.md's frontmatter and detect the install path (~/.clawhub/, ~/.cursor/skills/) to set attribution headers; this requires reading local files/install paths (a slight scope expansion, but explainable). SKILL.md tells the agent how to obtain an anonymous token and save the session_id; it does not specify a persistent storage location, which may lead to inconsistent handling across agents.
Installation Mechanism
An instructions-only skill with no install spec or code files: the lowest-risk delivery method. The installer steps do not download or write anything.
Credential Requirements
Only NEMO_TOKEN is required (declared as primaryEnv), which fits API usage. Minor inconsistency: the registry metadata lists no required config paths, but the SKILL.md frontmatter includes one (~/.config/nemovideo/). This mismatch should be clarified (does the skill need to read that directory?).
Persistence and Permissions
The skill is not marked always:true and requests no system-level permissions. It creates/uses session tokens for the remote API but does not instruct changes to other skills' configurations or system settings.
Security is layered; review the code before running.

License

MIT-0

Free to use, modify, and redistribute, with no attribution required.

Runtime Dependencies

No special dependencies

Versions

latest · v1.0.0 · 2026/4/14

- Initial release of the highlight-editor-video skill.
- Generate video highlights instantly by describing what you want; no timeline editing.
- Automatic session setup with free guest credits; just upload footage to start.
- Extract, edit, and export highlight reels from long videos with cloud GPU acceleration.
- Export to all mainstream video, image, and audio formats.
- Clear handling and feedback for uploads, credits, exports, and error codes.


Install Command

Official: `npx clawhub@latest install highlight-editor-video`
CN mirror: `npx clawhub@latest install highlight-editor-video --registry https://cn.clawhub-mirror.com`

Skill Documentation

Getting Started

Ready when you are. Drop your raw video footage here or describe what you want to make.

Try saying:

  • "turn a 2-hour sports game recording into a 1080p MP4"
  • "extract the best moments and compile them into a 90-second highlight reel"
  • "generate short highlight reels from long recordings for sports creators, event videographers, and content creators"

Automatic Setup

On first interaction, connect to the processing API before doing anything else. Show a brief status like "Setting things up...".

Token: If NEMO_TOKEN environment variable is already set, use it and skip to Session below.

Free token: Generate a UUID as client identifier, then POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with the header X-Client-Id: <your generated UUID>. The response field data.token becomes your NEMO_TOKEN (100 credits, 7-day expiry).

Session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Save session_id from the response. Confirm to the user you're connected and ready. Don't print tokens or raw JSON.
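The two bootstrap calls above can be sketched as follows. This is a minimal Python illustration assuming the documented endpoints and header names; the helper functions only describe the requests rather than send them, and their names are illustrative.

```python
import json
import uuid

API_BASE = "https://mega-api-prod.nemovideo.ai"

def anonymous_token_request() -> dict:
    """Describe the POST that mints a free token (100 credits, 7-day expiry)."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/api/auth/anonymous-token",
        # A fresh UUID identifies this client.
        "headers": {"X-Client-Id": str(uuid.uuid4())},
    }

def session_request(token: str) -> dict:
    """Describe the POST that opens a session; save session_id from the response."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/api/tasks/me/with-session/nemo_agent",
        "headers": {"Authorization": f"Bearer {token}",
                    "Content-Type": "application/json"},
        "body": json.dumps({"task_name": "project"}),
    }
```

An agent would send the first request, read `data.token` from the response, then pass that token to the second.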

# Highlight Editor Video — Extract and Export Video Highlights

Drop your raw video footage in the chat and tell me what you need. I'll handle the AI highlight extraction on cloud GPUs — you don't need anything installed locally. Here's a typical use: you send a 2-hour sports game recording, ask to extract the best moments and compile them into a 90-second highlight reel, and about 1-2 minutes later you've got an MP4 file ready to download. The whole thing runs at 1080p by default.

One thing worth knowing — trimming your source footage to under 30 minutes speeds up highlight detection significantly.

Matching Input to Actions

User prompts referencing highlight editor video, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

| User says... | Action | Skip SSE? |
| --- | --- | --- |
| "export" / "导出" / "download" / "send me the video" | §3.5 Export | Yes |
| "credits" / "积分" / "balance" / "余额" | §3.3 Credits | Yes |
| "status" / "状态" / "show tracks" | §3.4 State | Yes |
| "upload" / "上传" / user sends file | §3.2 Upload | Yes |
| Everything else (generate, edit, add BGM…) | §3.1 SSE | No |
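A routing table like the one above can be approximated with simple keyword matching. The sketch below is illustrative; the skill's actual intent classifier is unspecified, and the rule list is an assumption.

```python
# Keyword rules mirroring the routing table; checked in order, first match wins.
ROUTES = [
    (("export", "导出", "download", "send me the video"), "§3.5 Export"),
    (("credits", "积分", "balance", "余额"), "§3.3 Credits"),
    (("status", "状态", "show tracks"), "§3.4 State"),
    (("upload", "上传"), "§3.2 Upload"),
]

def route(message: str) -> str:
    """Return the action section for a user message; default to SSE chat."""
    lowered = message.lower()
    for keywords, action in ROUTES:
        if any(k in lowered for k in keywords):
            return action
    return "§3.1 SSE"
```

In practice an agent would also route file attachments to §3.2 regardless of the message text.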

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job. All calls go to https://mega-api-prod.nemovideo.ai. The main endpoints:

  • Session — POST /api/tasks/me/with-session/nemo_agent with {"task_name":"project","language":""}. Gives you a session_id.
  • Chat (SSE) — POST /run_sse with session_id and your message in new_message.parts[0].text. Set Accept: text/event-stream. Up to 15 min.
  • Upload — POST /api/upload-video/nemo_agent/me/ — multipart file or JSON with URLs.
  • Credits — GET /api/credits/balance/simple — returns available, frozen, total.
  • State — GET /api/state/nemo_agent/me//latest — current draft and media info.
  • Export — POST /api/render/proxy/lambda with render ID and draft JSON. Poll GET /api/render/proxy/lambda/ every 30s for completed status and download URL. Formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
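The export step (queue the render, then poll every 30s) can be sketched as a generic polling loop. The `check` callback stands in for the GET on the render endpoint; the `status`/`download_url` field names are assumptions based on the endpoint description above.

```python
import time
from typing import Callable

def poll_export(check: Callable[[], dict], interval: float = 30.0,
                timeout: float = 900.0) -> str:
    """Poll the render job until it reports 'completed'; return the download URL.

    `check` wraps GET /api/render/proxy/lambda/<render-id> and returns the
    parsed JSON body. Raises TimeoutError if the job never completes.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = check()
        if result.get("status") == "completed":
            return result["download_url"]
        time.sleep(interval)  # 30s between polls, per the endpoint notes
    raise TimeoutError("export did not complete in time")
```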

Skill attribution — read from this file's YAML frontmatter at runtime:

  • X-Skill-Source: highlight-editor-video
  • X-Skill-Version: from frontmatter version
  • X-Skill-Platform: detect from the install path (~/.clawhub/ → clawhub, ~/.cursor/skills/ → cursor, else unknown)

Every API call needs an Authorization: Bearer <token> header plus the three attribution headers above. If any header is missing, exports return 402.
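One way to assemble the required headers, following the platform-detection rule above. The helper itself is illustrative; only the header names and the path-to-platform mapping come from the doc.

```python
def attribution_headers(token: str, version: str, skill_path: str) -> dict:
    """Build the auth + attribution headers every API call needs."""
    # Platform is inferred from where the skill file was installed.
    if "/.clawhub/" in skill_path:
        platform = "clawhub"
    elif "/.cursor/skills/" in skill_path:
        platform = "cursor"
    else:
        platform = "unknown"
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "highlight-editor-video",
        "X-Skill-Version": version,  # read from the SKILL.md frontmatter
        "X-Skill-Platform": platform,
    }
```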

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Timeline (3 tracks):
  • Video: city timelapse (0-10s)
  • BGM: Lo-fi (0-10s, 35%)
  • Title: "Urban Dreams" (0-3s)
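Using the short keys above, the 3-track timeline could be expressed as a draft fragment like this. The per-segment schema (the names inside "m") is an assumption; only the t/tt/sg/d/m keys and the track-type codes come from the documented mapping.

```python
# Hypothetical draft fragment: 3 tracks, durations in milliseconds.
draft = {
    "t": [  # tracks
        {"tt": 0, "sg": [{"d": 10000, "m": {"name": "city timelapse"}}]},         # video, 0-10s
        {"tt": 1, "sg": [{"d": 10000, "m": {"name": "Lo-fi", "volume": 0.35}}]},  # BGM at 35%
        {"tt": 7, "sg": [{"d": 3000, "m": {"text": "Urban Dreams"}}]},            # title, 0-3s
    ]
}

track_types = [track["tt"] for track in draft["t"]]  # 0=video, 1=audio, 7=text
```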

Backend Response Translation

The backend assumes a GUI exists. Translate these into API actions:

| Backend says | You do |
| --- | --- |
| "click [button]" / "点击" | Execute via API |
| "open [panel]" / "打开" | Query session state |
| "drag/drop" / "拖拽" | Send edit via SSE |
| "preview in timeline" | Show track summary |
| "Export button" / "导出" | Execute export workflow |

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty data: lines mean the backend is still working — show "⏳ Still working..." every 2 minutes. About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
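Reading the stream amounts to standard SSE framing: `data:` lines carry payloads, empty ones are heartbeats. A minimal parser sketch (generic SSE handling, not a guarantee of this backend's exact event schema):

```python
def parse_sse(raw: str):
    """Split a raw SSE stream into text payloads and a heartbeat count.

    Empty `data:` lines mean the backend is still working; non-empty
    ones carry content to surface to the user (after GUI translation).
    """
    texts, heartbeats = [], 0
    for line in raw.splitlines():
        if not line.startswith("data:"):
            continue  # comments, event names, blank separators
        payload = line[len("data:"):].strip()
        if payload:
            texts.append(payload)
        else:
            heartbeats += 1
    return texts, heartbeats
```

If the stream closes with no text events, the agent should fall back to polling /api/state, as described above.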

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once
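A dispatch policy over these codes might look like the sketch below. The handler strings paraphrase the list above and are not vendor-specified behavior.

```python
# Codes the agent can recover from automatically vs. those to surface to the user.
RETRYABLE = {
    1001: "re-acquire token via /api/auth/anonymous-token",
    1002: "create a new session",
    400: "generate X-Client-Id and retry",
    429: "wait 30s and retry once",
}
USER_FACING = {
    2001: "out of credits",
    4001: "unsupported file type; show accepted formats",
    4002: "file too large; suggest compressing or trimming",
    402: "export blocked on free plan (subscription tier, not credits)",
}

def classify_error(code: int) -> str:
    """Map an API error code to a coarse handling category."""
    if code == 0:
        return "ok"
    if code in RETRYABLE:
        return f"retry: {RETRYABLE[code]}"
    if code in USER_FACING:
        return f"tell user: {USER_FACING[code]}"
    return "unknown error"
```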

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "extract the best moments and compile them into a 90-second highlight reel" — concrete instructions get better results. Max file size is 500MB. Stick to MP4, MOV, AVI, WebM for the smoothest experience. Export as MP4 for widest compatibility across social platforms and devices.

Common Workflows

Quick edit: Upload → "extract the best moments and compile them into a 90-second highlight reel" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Data source: ClawHub · Chinese localization: Lobster Skill Library