💃 AI Image to Video Dance — Skill Tool

v1.0.0

Skip the learning curve of professional editing software. Describe what you want — "animate this photo so the person is dancing to the uploaded beat" — and get...

0 · 43 · 0 current · 0 cumulative
by @dsewell-583h0 · MIT-0
Download skill package
License
MIT-0
Last updated
2026/4/14
Security scans
VirusTotal
Harmless
View report
OpenClaw
Suspicious
medium confidence
The skill's declared purpose (animate photos into dance videos) matches most of its instructions, but there are inconsistencies and a few runtime actions (anonymous token creation, multipart uploads from arbitrary file paths, and a referenced config path) that warrant caution before installing or giving it filesystem/network access.
Assessment
This skill generally does what it claims (upload a photo, call a cloud API, and return an animated video) and requests only a NEMO_TOKEN. Before installing or enabling it:

  • Remember this skill sends images (and potentially any file path the agent can read) to https://mega-api-prod.nemovideo.ai — do not use it with sensitive images or files you don't want uploaded.
  • The skill will create or use a NEMO_TOKEN; if you prefer control, provide your own token rather than letting the skill auto-cre...
Detailed analysis
Purpose and capabilities
The skill claims to animate images into dance videos and requires a single service token (NEMO_TOKEN) and cloud API calls — that fits the stated purpose. However, the SKILL.md frontmatter references a local config path (~/.config/nemovideo/) while the registry metadata earlier listed no required config paths, creating an inconsistency about whether the skill expects to store or read local config.
Instruction scope
The runtime instructions instruct the agent to check the environment for NEMO_TOKEN and, if missing, call an external API to obtain an anonymous token. Upload instructions allow multipart file uploads using local file paths (e.g., -F "files=@/path"), which implies the agent might read arbitrary local files if it has filesystem access. There are no explicit guardrails in the instructions to restrict uploads to user-provided images only, so a misconfigured agent could inadvertently upload other local files.
Install mechanism
This is an instruction-only skill with no install spec and no code files — nothing is written to disk by a package installer. That minimizes install-time risk.
Credential requirements
Only one environment variable is required (NEMO_TOKEN), which is appropriate for a cloud API client. The skill also describes obtaining an anonymous NEMO_TOKEN by POSTing to an external endpoint; that behaviour is consistent with providing temporary credentials but may result in tokens being created and (per frontmatter) possibly stored under ~/.config/nemovideo/. The registry metadata conflictingly listed no config paths, so it's unclear whether the skill will persist tokens locally.
Persistence and permissions
The frontmatter's always flag is false, and the skill does not request elevated platform privileges. It does not instruct modifying other skills or global agent settings. Persistent storage is implied (the config path) but not explicitly required in the registry metadata.
Security works in layers; review the code before running.

License

MIT-0

Free to use, modify, and redistribute; no attribution required.

Runtime dependencies

No special dependencies

Versions

latest · v1.0.0 · 2026/4/14

Initial release of AI Image to Video Dance:

  • Instantly animates still images into dance videos using cloud AI — no local setup required.
  • Supports JPG, PNG, WEBP, HEIC uploads (up to 200MB).
  • Automatic token and session management with clear status updates.
  • Exports 1080p MP4 dance clips, optimized for social media.
  • Guides users with simple prompts; handles credit status and error messaging seamlessly.

Harmless

Install commands

Official: npx clawhub@latest install ai-image-to-video-dance
Mirror (CN): npx clawhub@latest install ai-image-to-video-dance --registry https://cn.longxiaskill.com

Skill documentation

Getting Started

Share your still images and I'll get started on AI dance video generation. Or just tell me what you're thinking.

Try saying:

  • "animate my still images"
  • "export 1080p MP4"
  • "animate this photo so the person is dancing to the uploaded beat"

Quick Start Setup

This skill connects to a cloud processing backend. On first use, set up the connection automatically and let the user know ("Connecting...").

Token check: Look for NEMO_TOKEN in the environment. If found, skip to session creation. Otherwise:

  • Generate a UUID as client identifier
  • POST https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with X-Client-Id header
  • Extract data.token from the response — this is your NEMO_TOKEN (100 free credits, 7-day expiry)

Session: POST https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Bearer auth and body {"task_name":"project"}. Keep the returned session_id for all operations.

Let the user know with a brief "Ready!" when setup is complete. Don't expose tokens or raw API output.
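The setup steps above can be sketched as request builders. This is a minimal sketch assuming the response shapes described in this doc (data.token in the token response; session_id in the session response); only request construction is shown, and the function names are illustrative, with the actual POSTs left to the agent's HTTP client.

```python
import json
import uuid

# Base URL from the skill doc; endpoints below follow the setup steps above.
API_BASE = "https://mega-api-prod.nemovideo.ai"

def anonymous_token_request():
    """Build the anonymous-token request: POST with a fresh UUID as X-Client-Id."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/api/auth/anonymous-token",
        "headers": {"X-Client-Id": str(uuid.uuid4())},
    }

def session_request(token: str):
    """Build the session-creation request with Bearer auth and a task_name body."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/api/tasks/me/with-session/nemo_agent",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"task_name": "project"}),
    }
```

Sending these and extracting data.token and session_id completes the "Ready!" handshake without ever echoing the token to the user.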

# AI Image to Video Dance — Animate Photos Into Dance Videos

Drop your still images in the chat and tell me what you need. I'll handle the AI dance video generation on cloud GPUs — you don't need anything installed locally.

Here's a typical use: you send a single portrait photo of a person, ask for "animate this photo so the person is dancing to the uploaded beat", and about 1-2 minutes later you've got an MP4 file ready to download. The whole thing runs at 1080p by default.

One thing worth knowing — photos with a clear single subject and plain background produce the most realistic dance animations.

Matching Input to Actions

User prompts referencing ai image to video dance, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

| User says... | Action | Skip SSE? |
| --- | --- | --- |
| "export" / "导出" / "download" / "send me the video" | §3.5 Export | yes |
| "credits" / "积分" / "balance" / "余额" | §3.3 Credits | yes |
| "status" / "状态" / "show tracks" | §3.4 State | yes |
| "upload" / "上传" / user sends file | §3.2 Upload | yes |
| Everything else (generate, edit, add BGM…) | §3.1 SSE | no |
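The routing above amounts to a keyword classifier. A minimal sketch, with the keyword sets taken from the table and the function and action names being illustrative:

```python
# Keyword sets copied from the routing table; action labels are shorthand.
ROUTES = [
    (("export", "导出", "download", "send me the video"), "export"),
    (("credits", "积分", "balance", "余额"), "credits"),
    (("status", "状态", "show tracks"), "state"),
    (("upload", "上传"), "upload"),  # also triggered when the user sends a file
]

def route(prompt: str) -> str:
    """Return the matching action; everything unmatched falls through to SSE."""
    text = prompt.lower()
    for keywords, action in ROUTES:
        if any(k in text for k in keywords):
            return action
    return "sse"
```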

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Three attribution headers are required on every request and must match this file's frontmatter:

| Header | Value |
| --- | --- |
| X-Skill-Source | ai-image-to-video-dance |
| X-Skill-Version | frontmatter version |
| X-Skill-Platform | auto-detect: clawhub / cursor / unknown from install path |

All requests must include: Authorization: Bearer <NEMO_TOKEN>, X-Skill-Source, X-Skill-Version, X-Skill-Platform. Missing attribution headers will cause export to fail with 402.
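A helper that assembles the required headers might look like the following. The header names and source value come from the table above; the function name and defaults are assumptions.

```python
def attribution_headers(token: str, version: str = "1.0.0",
                        platform: str = "unknown") -> dict:
    """Headers required on every request.

    `version` should mirror the SKILL.md frontmatter; `platform` is
    normally auto-detected from the install path (clawhub / cursor / unknown).
    """
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "ai-image-to-video-dance",
        "X-Skill-Version": version,
        "X-Skill-Platform": platform,
    }
```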

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":""} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<session_id>","new_message":{"parts":[{"text":"<message>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.

Upload: POST /api/upload-video/nemo_agent/me/<session_id> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<session_id>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<id>","sessionId":"<session_id>","draft":<draft>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<render_id> every 30s until status = completed. Download URL at output.url.

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

SSE Event Handling

| Event | Action |
| --- | --- |
| Text response | Apply GUI translation (§4), present to user |
| Tool call/result | Process internally, don't forward |
| heartbeat / empty data: | Keep waiting. Every 2 min: "⏳ Still working..." |
| Stream closes | Process final response |

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
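One way to encode these event rules, assuming events arrive as parsed dicts (the exact wire shape isn't specified in this doc, so the field names here are assumptions):

```python
def classify_sse_event(event: dict) -> str:
    """Map a parsed SSE event to a handling rule from the table above.

    Returns one of: "present" (text: translate, then show the user),
    "internal" (tool call/result, never forwarded), "wait" (heartbeat).
    """
    data = event.get("data")
    if not data:
        return "wait"      # heartbeat / empty data: keep waiting
    if "tool" in data:
        return "internal"  # tool calls/results are processed silently
    if "text" in data:
        return "present"   # apply GUI translation, then show the user
    return "wait"
```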

Backend Response Translation

The backend assumes a GUI exists. Translate these into API actions:

| Backend says | You do |
| --- | --- |
| "click [button]" / "点击" | Execute via API |
| "open [panel]" / "打开" | Query session state |
| "drag/drop" / "拖拽" | Send edit via SSE |
| "preview in timeline" | Show track summary |
| "Export button" / "导出" | Execute export workflow |

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.

Example timeline summary:

Timeline (3 tracks):
  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
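Expanding the short keys into a summary like the one above can be sketched as follows. The key names (t, tt, sg, d) come from the doc; the aggregation and output format are assumptions, since the doc doesn't specify where segment labels live.

```python
TRACK_TYPES = {0: "video", 1: "audio", 7: "text"}  # tt values from the doc

def summarize_draft(draft: dict) -> list:
    """Render a draft's tracks as human-readable lines (durations in seconds)."""
    lines = []
    for i, track in enumerate(draft.get("t", []), start=1):
        kind = TRACK_TYPES.get(track.get("tt"), "unknown")
        total_ms = sum(seg.get("d", 0) for seg in track.get("sg", []))
        lines.append(f"{i}. {kind.capitalize()}: {total_ms / 1000:g}s")
    return lines
```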

Error Handling

| Code | Meaning | Action |
| --- | --- | --- |
| 0 | Success | Continue |
| 1001 | Bad/expired token | Re-auth via anonymous-token (tokens expire after 7 days) |
| 1002 | Session not found | New session §3.0 |
| 2001 | No credits | Anonymous: show registration URL with ?bind= (get from create-session or state response when needed). Registered: "Top up credits in your account" |
| 4001 | Unsupported file | Show supported formats |
| 4002 | File too large | Suggest compress/trim |
| 400 | Missing X-Client-Id | Generate Client-Id and retry (see §1) |
| 402 | Free plan export blocked | Subscription tier issue, NOT credits. "Register or upgrade your plan to unlock export." |
| 429 | Rate limit (1 token/client/7 days) | Retry in 30s once |
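The table maps cleanly to a lookup. Codes and meanings are from the table; the action labels below are illustrative shorthand, not API values.

```python
# Response code -> remediation, per the error-handling table above.
ERROR_ACTIONS = {
    0: "continue",
    1001: "reauth_anonymous_token",    # tokens expire after 7 days
    1002: "create_new_session",
    2001: "handle_no_credits",         # registration URL vs. top-up message
    4001: "show_supported_formats",
    4002: "suggest_compress_or_trim",
    400: "generate_client_id_and_retry",
    402: "prompt_plan_upgrade",        # subscription tier, not credits
    429: "retry_once_after_30s",
}

def action_for(code: int) -> str:
    """Look up the remediation for a response code; unknown codes get surfaced."""
    return ERROR_ACTIONS.get(code, "report_unknown_error")
```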

Common Workflows

Quick edit: Upload → "animate this photo so the person is dancing to the uploaded beat" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "animate this photo so the person is dancing to the uploaded beat" — concrete instructions get better results.

Max file size is 200MB. Stick to JPG, PNG, WEBP, HEIC for the smoothest experience.

Export as MP4 for widest compatibility across TikTok, Instagram, and YouTube Shorts.

Data source: ClawHub · Chinese localization: 龙虾技能库 (Longxia Skill Library)