📦 Generation Generator

v1.0.0

Turn text prompts or clips into AI-generated videos with this skill. Works with MP4, MOV, PNG, JPG files up to 500MB. Marketers, content creators, social...

Security Scan
VirusTotal: Harmless (view report)
OpenClaw: Suspicious (medium confidence)
The skill's behavior mostly matches a video-generation tool, but there are metadata and instruction inconsistencies, and it will obtain and use tokens and send user files to an external API. Review provenance, data handling, and where credentials/sessions are stored before installing.
Assessment Recommendation
This skill appears to implement a real text→video workflow, but it will contact an external service (mega-api-prod.nemovideo.ai), obtain/use a token (NEMO_TOKEN), and upload user media to that service. Before installing: 1) Prefer supplying your own NEMO_TOKEN (don't let the skill auto-generate/persist credentials) if you trust the vendor; 2) Don't upload sensitive content, since files up to 500MB will be sent off-host; 3) Ask the publisher for provenance (homepage, privacy policy, source code, or company identity); 4) Confirm where session/token data will be stored (in-memory ...
Detailed Analysis
Purpose and Capabilities
The name and description (generate videos from prompts/refs) align with the runtime instructions (upload, SSE chat, render/export endpoints). Requesting a single service token (NEMO_TOKEN) is expected for a cloud video API.
Instruction Scope
SKILL.md instructs the agent to auto-obtain anonymous tokens, create sessions, upload user media, and poll render endpoints on mega-api-prod.nemovideo.ai. It also describes deriving attribution headers from an install path and references a config path (~/.config/nemovideo/) in frontmatter, actions that could require reading or writing local state. The instructions additionally tell the agent to "not display raw API responses or token values", which is an operational policy but also hides sensitive values from the user. The skill's runtime touches network, credentials, and local config semantics beyond a purely stateless prompt-to-render flow.
Install Mechanism
Instruction-only skill with no installer and no code files. This minimizes on-disk install risk; network calls will occur at runtime to the third-party API.
Credential Requirements
Only NEMO_TOKEN is declared as the primary credential, which fits a hosted video API. However, SKILL.md frontmatter includes a configPaths entry (~/.config/nemovideo/) that is not listed in the registry-level required config paths, an inconsistency. The skill also instructs generating and storing an anonymous token if NEMO_TOKEN is absent, which implies creating and storing credentials locally or in memory.
Persistence and Permissions
always is false and autonomous invocation is allowed (platform default). The skill requests session persistence for ongoing renders, which is reasonable for a render pipeline, but there is no explicit description of where session tokens are persisted (memory vs disk).
Security comes in layers; review the code before running.

Runtime Dependencies

No special dependencies

Install Command

Official: npx clawhub@latest install generation-generator
Mirror (CN): npx clawhub@latest install generation-generator --registry https://cn.longxiaskill.com

Skill Documentation

Getting Started

Got text prompts or clips to work with? Send them over and tell me what you need; I'll take care of the AI video generation.

Try saying:

"Turn a text prompt describing a 30-second product demo scene into a 1080p MP4"
"Generate a 30-second video from this script about a new coffee brand"
"Generate videos from text prompts or reference images for marketers, content creators, social media managers"

First-Time Connection

When a user first opens this skill, connect to the processing backend automatically. Briefly let them know (e.g. "Setting up...").

Authentication: Check if NEMO_TOKEN is set in the environment. If it is, skip to step 2.

Obtain a free token: Generate a random UUID as the client identifier. POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id set to that UUID. The response data.token is your NEMO_TOKEN: 100 free credits, valid for 7 days.

Create a session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Authorization: Bearer <token>, Content-Type: application/json, and body {"task_name":"project","language":"<detected>"}. Store the returned session_id for all subsequent requests.

Keep setup communication brief. Don't display raw API responses or token values to the user.
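The token and session bootstrap above can be sketched as follows. This is a hedged sketch: the endpoint paths, header names, and body fields come from this document, but the functions only build request specs (they do not send them), and the response shapes are not verified against the real service.

```python
import os
import uuid

BASE = "https://mega-api-prod.nemovideo.ai"

def build_token_request():
    """Spec for the anonymous-token call (only needed when NEMO_TOKEN is unset)."""
    client_id = str(uuid.uuid4())  # random UUID as the client identifier
    return {
        "method": "POST",
        "url": f"{BASE}/api/auth/anonymous-token",
        "headers": {"X-Client-Id": client_id},
    }

def build_session_request(token: str, language: str = "en"):
    """Spec for creating a session; per the doc, the response carries session_id."""
    return {
        "method": "POST",
        "url": f"{BASE}/api/tasks/me/with-session/nemo_agent",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "json": {"task_name": "project", "language": language},
    }

# Prefer a user-supplied token; fall back to the anonymous flow only if absent.
token = os.environ.get("NEMO_TOKEN")
```

An actual HTTP client (e.g. requests or httpx) would send these specs and read data.token from the first response.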

Generation Generator — Generate Videos from Text Prompts

This tool takes your text prompts or clips and runs AI video generation through a cloud rendering pipeline. You upload, describe what you want, and download the result.

Say you have a text prompt describing a 30-second product demo scene and want a 30-second video from that script about a new coffee brand: the backend processes it in about 1-2 minutes and hands you a 1080p MP4.

Tip: shorter, more specific prompts tend to produce more accurate video results.

Matching Input to Actions

User prompts referencing generation, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

User says... → Action (Skip SSE?)
"export" / "导出" / "download" / "send me the video" → §3.5 Export ✅
"credits" / "积分" / "balance" / "余额" → §3.3 Credits ✅
"status" / "状态" / "show tracks" → §3.4 Status ✅
"upload" / "上传" / user sends a file → §3.2 Upload ✅
Everything else (generate, edit, add BGM…) → §3.1 SSE ❌

Cloud Render Pipeline Details
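The keyword routing above can be sketched as a simple substring classifier. This is illustrative only: the real skill describes "keyword and intent classification" without specifying the mechanism, and the keyword lists here are taken directly from the table.

```python
# Each route: (trigger keywords, action section, skip_sse). Bilingual keywords
# mirror the table; section names come from the document.
ROUTES = [
    (("export", "导出", "download", "send me the video"), "§3.5 Export", True),
    (("credits", "积分", "balance", "余额"), "§3.3 Credits", True),
    (("status", "状态", "show tracks"), "§3.4 Status", True),
    (("upload", "上传"), "§3.2 Upload", True),
]

def route(message: str):
    """Return (action, skip_sse) for a user message; default to the SSE chat."""
    text = message.lower()
    for keywords, action, skip_sse in ROUTES:
        if any(k in text for k in keywords):
            return action, skip_sse
    return "§3.1 SSE", False  # everything else: generate, edit, add BGM, ...
```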

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

All calls go to https://mega-api-prod.nemovideo.ai. The main endpoints:

Session — POST /api/tasks/me/with-session/nemo_agent with {"task_name":"project","language":""}. Gives you a session_id.
Chat (SSE) — POST /run_sse with session_id and your message in new_message.parts[0].text. Set Accept: text/event-stream. Up to 15 min.
Upload — POST /api/upload-video/nemo_agent/me/ — multipart file or JSON with URLs.
Credits — GET /api/credits/balance/simple — returns available, frozen, total.
Status — GET /api/status/nemo_agent/me//latest — current draft and media info.
Export — POST /api/render/proxy/lambda with render ID and draft JSON. Poll GET /api/render/proxy/lambda/ every 30s for completed status and download URL.
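The export polling step can be sketched like this. Assumptions are flagged in comments: the document gives the poll path with a trailing slash, so appending the render ID is a guess, and the `{"status": ..., "download_url": ...}` response shape is hypothetical.

```python
import time

BASE = "https://mega-api-prod.nemovideo.ai"

def poll_export(get_status, render_id, interval_s=30, timeout_s=600):
    """Poll the export endpoint until the job completes or a timeout hits.

    `get_status` stands in for an HTTP GET; appending render_id to the poll
    URL and the returned dict shape are assumptions, not confirmed by the doc.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        data = get_status(f"{BASE}/api/render/proxy/lambda/{render_id}")
        if data.get("status") == "completed":
            return data.get("download_url")
        time.sleep(interval_s)  # the doc suggests polling every 30s
    raise TimeoutError("export did not complete in time")
```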

Formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Headers are derived from this file's YAML frontmatter. X-Skill-Source is generation-generator, X-Skill-Version comes from the version field, and X-Skill-Platform is detected from the install path (~/.ClawHub/ = ClawHub, ~/.cursor/skills/ = cursor, otherwise unknown).

Every API call needs Authorization: Bearer plus the three attribution headers above. If any header is missing, exports return 402.
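The header derivation described above can be sketched as two small helpers. Header names and the path-to-platform mapping come from the document; everything else is illustrative.

```python
def detect_platform(install_path: str) -> str:
    """Map the install path to the X-Skill-Platform value described above."""
    p = install_path.replace("\\", "/")
    if "/.ClawHub/" in p:
        return "ClawHub"
    if "/.cursor/skills/" in p:
        return "cursor"
    return "unknown"

def attribution_headers(token: str, version: str, install_path: str) -> dict:
    """Authorization plus the three attribution headers required on every call."""
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "generation-generator",
        "X-Skill-Version": version,  # from the frontmatter version field
        "X-Skill-Platform": detect_platform(install_path),
    }
```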

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.

Example timeline summary:

Timeline (3 tracks):
1. Video: city timelapse (0-10s)
2. BGM: Lo-fi (0-10s, 35%)
3. Title: "Urban Dreams" (0-3s)
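A summary like the one above could be produced from a draft by walking the short keys. The key names (t, tt, sg, d, m) come from the document; the segment layout and the m["name"] metadata field are assumptions about the draft shape, not a confirmed schema.

```python
# tt values per the document: 0=video, 1=audio, 7=text.
TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}

def summarize(draft: dict) -> str:
    """Render a draft (short-key JSON) as a human-readable timeline summary."""
    lines = [f"Timeline ({len(draft['t'])} tracks):"]
    for i, track in enumerate(draft["t"], 1):
        kind = TRACK_TYPES.get(track["tt"], "Unknown")
        seg = track["sg"][0]                 # first segment (assumed layout)
        end_s = seg["d"] / 1000              # d is duration in ms
        name = track.get("m", {}).get("name", "")  # hypothetical metadata field
        lines.append(f"{i}. {kind}: {name} (0-{end_s:g}s)")
    return "\n".join(lines)
```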

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

"click" or "点击" → execute the action via the relevant endpoint
"open" or "打开" → query session status to get the data
"drag/drop" or "拖拽" → send the edit command through SSE
"preview in timeline" → show a text summary of current tracks
"export" or "导出" → run the export workflow

SSE Event Handling

Event → Action
Text response → apply GUI translation (§4), present to user
Tool call/result → process internally, don't forward
Heartbeat / empty data → keep waiting; every 2 min: "⏳ Still working..."
Stream closes → process final response

~30% of editing operations return no text in the SSE stream. When this happens: poll session status to verify the edit was applied, then summarize changes to the user.
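The event-handling rules above can be sketched as a loop over parsed events. The `(kind, data)` pair format is a stand-in for a parsed text/event-stream, and the kind names are illustrative, not the wire protocol.

```python
def handle_sse(events):
    """Collect user-facing text from an SSE event stream per the table above.

    `events` yields (kind, data) pairs (assumed parsing). An empty result means
    the stream closed with no text: the caller should poll session status to
    verify the edit and summarize the changes.
    """
    responses = []
    for kind, data in events:
        if kind == "text":
            responses.append(data)  # apply GUI translation (§4), show to user
        elif kind in ("tool_call", "tool_result"):
            continue                # process internally, don't forward
        # heartbeat / empty data: keep waiting (real code would emit
        # "⏳ Still working..." every 2 minutes)
    return responses
```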

Error Codes 0 —

Data source: ClawHub · Chinese localization: 龙虾技能库 (Longxia Skill Library)